316485 | Homotopy of Non-Modular Partitions and the Whitehouse Module. | We present a class of subposets of the partition lattice $\Pi_n$ with the following property: the order complex is homotopy equivalent to the order complex of $\Pi_{n-1}$, and the $S_n$-module structure of the homology coincides with a recently discovered lifting of the $S_{n-1}$-action on the homology of $\Pi_{n-1}$. This is the Whitehouse representation on Robinson's space of fully-grown trees, and has also appeared in work of Getzler and Kapranov, Mathieu, Hanlon and Stanley, and Babson et al. One example is the subposet $P_n^{n-1}$ of the lattice of set partitions $\Pi_n$, obtained by removing all elements with a unique nontrivial block. More generally, for $2 \le k \le n-1$, let $Q_n^k$ denote the subposet of the partition lattice obtained by removing all elements with a unique nontrivial block of size equal to $k$, and let $P_n^k = \bigcap_{i=2}^k Q_n^i$. We show that $P_n^k$ is Cohen-Macaulay, and that $P_n^k$ and $Q_n^k$ are both homotopy equivalent to a wedge of spheres of dimension $(n-4)$, with Betti number $(n-1)!\,\frac{n-k}{k}$. The posets $Q_n^k$ are neither shellable nor Cohen-Macaulay. We show that the $S_n$-module structure of the homology generalises the Whitehouse module in a simple way. We also present a short proof of the well-known result that rank-selection in a poset preserves the Cohen-Macaulay property. | Introduction
In this paper we consider subposets of the partition lattice $\Pi_n$ obtained by removing various modular elements. Recall that $\Pi_n$ is the lattice of set partitions of an $n$-element set, ordered by refinement. We say a block of a partition is nontrivial if it consists of more than one element. The modular elements of $\Pi_n$ are precisely those partitions with a unique nontrivial block (for this and other basic definitions see [St3]). For a bounded poset $P$ we denote by $\bar P$ the proper part of the poset $P$, with the greatest element $\hat 1$ and the least element $\hat 0$ removed. We write $\Delta(P)$ for the order complex of $P$; the simplices of $\Delta(P)$ are the chains of $\bar P$. By the $i$th (reduced) homology $\tilde H_i(P)$ of $P$ we mean the $i$th (reduced) simplicial homology of its order complex $\Delta(P)$. All homology in this paper is taken with integer coefficients except for representation-theoretic discussions, in which case we take coefficients over the complex field. All posets are bounded unless explicitly stated otherwise.
1991 Mathematics Subject Classification. Primary 05E25, 06A08, 06A09; Secondary 20C30.
Key words and phrases. poset, homology, homotopy, set partitions, group representation.
Research supported by National Science Foundation Grant No. DMS9400875.
For $2 \le k \le n-1$, define $P_n^k$ to be the subposet of $\Pi_n$ obtained by removing all modular elements whose unique nontrivial block has size $i$, $2 \le i \le k$; and define $Q_n^k$ to be the subposet of $\Pi_n$ obtained by removing all modular elements whose unique nontrivial block has size $k$. In particular, $P_n^{n-1}$ consists of all partitions in $\Pi_n$ with at least two nontrivial blocks, together with the greatest and least elements. It is not hard to see that the posets $P_n^k$ are ranked, of rank $(n-2)$, one less than the rank of $\Pi_n$. On the other hand the subposets $Q_n^k$ have full rank $(n-1)$.
The figures below show the (order complexes of) the posets $P_4^3$ and $Q_4^3$; the poset $Q_4^3$ is not Cohen-Macaulay. Note that the 0-dimensional order complex of $P_4^3$ and the 1-dimensional order complex of $Q_4^3$ both have the same homotopy type, and hence have the same homology.
Figure 1: The poset $\hat P_4^3$.
Figure 2: The poset $\hat Q_4^3$.
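Since the figures do not survive in this copy, the following worked check of the $n = 4$, $k = 3$ case (ours, not the paper's) may help fix the picture:

% The modular elements of type (3,1) in \Pi_4 are the four partitions
%   123|4, \quad 124|3, \quad 134|2, \quad 234|1.
% Removing them leaves, in the proper part of Q_4^3, the six atoms of type
% (2,1,1) and the three partitions of type (2,2); each type-(2,2) element
% covers exactly two atoms:
\[
\begin{array}{ccc}
 12|34 & 13|24 & 14|23 \\
 12|3|4 \quad 34|1|2 & 13|2|4 \quad 24|1|3 & 14|2|3 \quad 23|1|4
\end{array}
\]
% So \Delta(\hat Q_4^3) is a graph with three contractible components, i.e. a
% wedge of two 0-spheres; \Delta(\hat P_4^3) consists of the three type-(2,2)
% points alone, with the same homotopy type, and \beta_4^3 = 3!\,\tfrac{4-3}{3} = 2.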
We describe briefly the motivation for this work. In [Su] some general techniques were developed for computing the homology representation of a poset for a finite group of automorphisms, and applied to Cohen-Macaulay subposets of the partition lattice. Note that the subposets $P_n^k$ and $Q_n^k$ are invariant under the action of the symmetric group $S_n$. In particular the Lefschetz module (i.e., the alternating sum (by degree) of the reduced homology modules), $\mathrm{Alt}(P_n^k)$, is a virtual $S_n$-module. By applying [Su, Theorem 1.10 and Remark 1.10.1] to the subposets $P_n^k$, we can show that as (virtual) $S_n$-modules, $(-1)^{n-4}\mathrm{Alt}(P_n^k)$ and $(-1)^{n-4}\mathrm{Alt}(Q_n^k)$ are both isomorphic to the representation (0.1) described in Section 3. (Here the up arrow indicates induction.) For $k = n-1$, the representation given by (0.1) is the complement of $\pi_n$ in the induction of $\pi_{n-1}$ from $S_{n-1}$ to $S_n$. This is precisely the representation of $S_n$ on Robinson's space of fully grown trees, as computed by Sarah Whitehouse (see [R], [RW], [W], [Ha2]). The restriction of this representation to $S_{n-1}$ is $\pi_{n-1}$; up to tensoring with the sign, this is also the lifting of the $S_{n-1}$ action on the multilinear component of the free Lie algebra on $n-1$ generators up to $S_n$, described in [GK].
For arbitrary $k$ it is not hard to see that (0.1) is in fact a true representation of $S_n$. In view of these facts, one is naturally led to ask whether the homology of the subposets $P_n^k$ and $Q_n^k$ is concentrated in a unique dimension. We answer this question in the affirmative, showing that both posets have nonvanishing (reduced) homology only in dimension $(n-4)$. We show that the order complexes of the posets $P_n^k$ and $Q_n^k$ are homotopy equivalent, with free integral homology, and that $P_n^k$ is Cohen-Macaulay over the integers. (It follows that the pure posets $Q_n^k$ are not Cohen-Macaulay.) We conjecture that the order complex of $P_n^k$ (and hence of $Q_n^k$) is homotopy equivalent to a wedge of $(n-4)$-spheres, and verify this conjecture for $k = 2$ and $k = n-1$.
Our main tool is Quillen's fibre lemma (see [Q], [Bj3]). In Section 1 we investigate the effect on homology of deleting an antichain from a poset (Theorem 1.1) and generalise this to an analogue for simplicial complexes (Theorem 1.5). As a consequence we obtain a simple proof, using only the exact homology sequence of a pair, of the well-known result that rank-selection in a poset preserves the Cohen-Macaulay property. In Section 2 we show that the subposets $\hat P_n^k$ and $\hat Q_n^k$ are homotopy equivalent (in fact $S_n$-homotopy equivalent). The representation-theoretic aspects are addressed in Section 3, where we give a simple formula (equation (3.1)) describing the $S_n$-module structure of the homology of $Q_n^k$ (and hence of $P_n^k$) in terms of the homology of the partition lattices $\Pi_k$ and $\Pi_n$. We conclude in Section 4 with a brief discussion of possible generalisations of this work.
1. Deleting an antichain from a Cohen-Macaulay poset
Let $P$ be any poset, and let $A$ be an antichain in $\bar P$. For our first result we use the exact sequence of a pair to obtain information on the homology of the subposet $P\setminus A$ of $P$ obtained by removing all elements of $A$, in the case when $P$ is Cohen-Macaulay.
The hypotheses in the theorem below may be relaxed somewhat by considering
the more general case of simplicial complexes; see Theorem 1.5 at the end of this
section.
Theorem 1.1. Let $P$ be a Cohen-Macaulay poset of rank $r$ over the integers. Let $A$ be an antichain in $\bar P$ such that for every $a \in A$, the homology of at least one of the intervals $(\hat 0, a)$, $(a, \hat 1)$ is free. Let $P\setminus A$ denote the subposet of $P$ obtained by deleting the elements of $A$. Then the reduced integral homology of $P\setminus A$ vanishes in all dimensions except possibly $r-2$ and $r-3$. If in addition the homology of $P$ and of all intervals $(\hat 0, a)$, $(a, \hat 1)$, $a \in A$, is free, then so is the homology of $P\setminus A$.
Proof. Consider the long exact homology sequence of the pair $(\Delta(P), \Delta(P\setminus A))$ (see [Mu1]). Since $P$ is Cohen-Macaulay, the reduced homology of $P$ vanishes for degrees not equal to $r-2$, and the long exact sequence reduces to the following two sequences:
$$0 \to H_{r-1}(\Delta(P), \Delta(P\setminus A)) \to \tilde H_{r-2}(P\setminus A) \to \tilde H_{r-2}(P) \to H_{r-2}(\Delta(P), \Delta(P\setminus A)) \to \tilde H_{r-3}(P\setminus A) \to 0 \eqno(1.1)$$
and, for $i \ne r-2,\ r-3$,
$$H_{i+1}(\Delta(P), \Delta(P\setminus A)) \;\cong\; \tilde H_i(P\setminus A). \eqno(1.2)$$
We must first compute the relative homology groups $H_i(\Delta(P), \Delta(P\setminus A))$. Clearly the $i$th quotient chain group $C_i(\Delta(P))/C_i(\Delta(P\setminus A))$ consists of classes of chains going through at least one element of $A$. Since $A$ is an antichain, each such chain must go through exactly one element of $A$. Now consider the boundary map $\tilde\partial$ of this relative complex. By the preceding remarks it is clear that if $x_1 < \cdots < x_p < a < y_1 < \cdots < y_q$ is a (representative of a) nonzero relative $i$-chain, where $a$ is the unique element of $A$ in the chain, then
$$\tilde\partial = \sum_{s=1}^{p} (-1)^{s-1}\,(x_1 < \cdots < \hat x_s < \cdots < a < \cdots < y_q) \;+\; \sum_{t=1}^{q} (-1)^{p+t}\,(x_1 < \cdots < a < \cdots < \hat y_t < \cdots < y_q),$$
where as usual the hat denotes suppression of an element (the term in which $a$ itself is deleted lies in $\Delta(P\setminus A)$ and hence vanishes in the quotient).
Hence the complex of relative chains is isomorphic (up to a shift in degree) to the direct sum of tensor products (over the integers) of chain complexes
$$\bigoplus_{a\in A} \tilde C(\hat 0, a) \otimes \tilde C(a, \hat 1). \eqno(1.3)$$
By hypothesis, in each summand of (1.3) at least one of the intervals has free homology. Consequently, by the K\"unneth theorem, the relative homology is given by
$$H_i(\Delta(P), \Delta(P\setminus A)) \;\cong\; \bigoplus_{a\in A}\;\bigoplus_{p+q = i-2} \tilde H_p(\hat 0, a) \otimes \tilde H_q(a, \hat 1). \eqno(1.4)$$
Now use the fact that for the intervals $(\hat 0, a)$ and $(a, \hat 1)$ in $P$, the reduced homology vanishes except in the top dimension. Hence in the above sum, the right-hand side vanishes unless $p = \mathrm{rank}(a)-2$ and $q = r - \mathrm{rank}(a) - 2$, that is, unless $i = r-2$. The first conclusion now follows from $(1.2)$.
For the second statement, observe that if the top homology of $P$ is free, then from the exact sequence (1.1) it is clear that the homology of $P\setminus A$ in degree $r-2$ is free (being a subgroup of a free abelian group). The relative homology is free as well, and hence $P\setminus A$ has free homology in both dimensions.
As a by-product of this general result, we obtain a simple proof of the fact that rank-selection preserves the Cohen-Macaulay property, a theorem due independently, and with different proofs, to Baclawski, Stanley and Munkres.
Corollary 1.2. ([Ba1, Theorem 6.4], [St1, Theorem 4.3], [Mu2, Corollary 6.6]) Let $P$ be a Cohen-Macaulay poset over the integers, and let $Q$ be a rank-selected subposet of $P$. Then $Q$ is Cohen-Macaulay over the integers.
Proof. Let $Q$ be obtained from $P$ by deleting some subset of the ranks of $\bar P$. It suffices to consider the case of removing one rank, so that the deleted set $A$ is an antichain. Then $Q$ is ranked of rank $r-1$, where $r$ is the rank of $P$. Hence $\tilde H_i(Q) = 0$ for $i > r-3$, while Theorem 1.1 gives vanishing for $i < r-3$. Now use the preceding result.
The same argument applies to an open interval in $Q$, which either coincides with the corresponding open interval of $P$, or else is obtained from it by deleting one rank. Hence if $Q$ is $P$ minus one rank, then $Q$ is Cohen-Macaulay.
If $P$ is an arbitrary poset and $A$ is an antichain of $\bar P$, a special case of a well-known formula for the M\"obius number $\mu(P)$ of $P$ (see [Ba2, Lemma 4.6]) says that
$$\mu(P\setminus A) \;=\; \mu(P) \;-\; \sum_{a\in A} \mu(\hat 0, a)\,\mu(a, \hat 1).$$
Noting that $\mu(P)$ is simply the reduced Euler characteristic of the order complex of $\bar P$, we have the following formula (which also follows from the proof of Theorem 1.1):
Corollary 1.3. Let $P$ and $A$ be as in Theorem 1.1 (assume all homology is free). Then
$$\dim \tilde H_{r-3}(P\setminus A) - \dim \tilde H_{r-2}(P\setminus A) \;=\; \sum_{a\in A} \dim \tilde H(\hat 0, a)\,\dim \tilde H(a, \hat 1) \;-\; \dim \tilde H_{r-2}(P),$$
where $\tilde H(\hat 0, a)$ and $\tilde H(a, \hat 1)$ denote the top (and unique nonvanishing) reduced homology of the corresponding open intervals.
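For concreteness, here is how Corollary 1.3 follows from the M\"obius identity above (the sign bookkeeping, which we supply, uses only the fact that all homology groups involved are concentrated in their top degrees):

% Since P\A has homology only in degrees r-2 and r-3,
%   \mu(P\setminus A) = (-1)^{r-2}\dim\tilde H_{r-2}(P\setminus A)
%                     + (-1)^{r-3}\dim\tilde H_{r-3}(P\setminus A),
% while \mu(P) = (-1)^{r-2}\dim\tilde H_{r-2}(P), and for a of rank \rho(a),
%   \mu(\hat 0,a)\,\mu(a,\hat 1)
%     = (-1)^{\rho(a)-2}(-1)^{r-\rho(a)-2}\dim\tilde H(\hat 0,a)\,\dim\tilde H(a,\hat 1)
%     = (-1)^{r}\,\dim\tilde H(\hat 0,a)\,\dim\tilde H(a,\hat 1).
% Substituting into \mu(P\setminus A) = \mu(P) - \sum_a \mu(\hat 0,a)\mu(a,\hat 1)
% and multiplying through by (-1)^{r} gives Corollary 1.3.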
We return now to the partition lattice $\Pi_n$. Recall that if $\lambda$ is an integer partition of $n$, then a set partition $x$ in $\Pi_n$ is said to be of type $\lambda$ if the block sizes of $x$ are the parts of $\lambda$. Let $Q_n^k$ be the subposet obtained by deleting the antichain consisting of all elements of type $(k, 1^{n-k})$.
For $k \ge 3$, the poset $Q_n^k$ is ranked of rank $(n-1)$. For let $a \in \Pi_n$ have a unique nontrivial block of size $k$, and suppose $a$ covers $x$ and is covered by $y$. Then all blocks of $x$ are singletons except possibly for two blocks $B_1$ and $B_2$ whose union is the $k$-block $A$ of $a$. Assume first that $B_1$ has size less than or equal to $k-2$. Since $y$ covers $a$, either $y$ is a modular element with unique nontrivial block $A \cup \{p\}$, or else $y$ has two nontrivial blocks $A$ and $\{p_1, p_2\}$; here the $p$'s are singletons of $a$. In either case there is a non-modular element $z$ in $\Pi_n$ in the interval $(x, y)$: in the first case merge the block $B_1$ of $x$ with the singleton $\{p\}$ to form $z$; in the second case merge the singletons $p_1$ and $p_2$.
Now suppose $x$ is obtained from $a$ by splitting the unique nontrivial block $A$ into the block $B_1$ and a singleton. (Thus $x$ is itself modular.) If $y$ is modular with nontrivial block $A \cup \{p\}$, merge the singletons $p$ and the singleton split off from $A$. If $y$ has a second nontrivial block $\{p_1, p_2\}$, then merge the singletons $p_1$ and $p_2$. In each case this produces a non-modular partition $z$ in the interval $(x, y)$.
Note that $Q_n^2$ is the rank-selected subposet obtained by deleting the atoms. For $k \ge 3$, $Q_n^k$ is not a lattice. The smallest interesting example is $Q_4^3$, whose order complex is disconnected and one-dimensional, and is homotopy equivalent to a wedge of two 0-spheres (see Figure 2 of the Introduction). In particular $Q_4^3$ is not Cohen-Macaulay. In the next section we shall see that this is true in general.
Finally we note that Theorem 1.1 gives the following fact, which will play a crucial role in the next section.
Proposition 1.4. The reduced integral homology of $Q_n^k$ vanishes in all dimensions different from $n-3$ and $n-4$.
In the next section we shall show that the homology of $Q_n^k$ is concentrated in a unique degree. It is not difficult to construct examples of a Cohen-Macaulay poset $P$ and an antichain $A$ which show that $P\setminus A$ can have homology in both degrees.
We can relax the hypotheses of Theorem 1.1 by considering the appropriate analogue for simplicial complexes. Recall that the link $\mathrm{lk}(v)$ of a vertex $v$ of a simplicial complex $\Delta$ is the subcomplex whose simplices are the faces $F$ of $\Delta$ such that $v \notin F$ and $F \cup \{v\}$ is (a simplex) in $\Delta$.
Theorem 1.5. Let $\Delta$ be a finite simplicial complex, and let $A$ be a subset of the vertices of $\Delta$ such that every facet (i.e., maximal face) of $\Delta$ has at most one vertex in $A$. Assume that there is an integer $d$ such that
(i) the $i$th reduced homology of $\Delta$ vanishes for all degrees $i \ne d$; and
(ii) for every vertex $a \in A$, the $i$th reduced homology of the link of $a$ in $\Delta$ vanishes for all degrees $i \ne d-1$.
Let $\Delta'$ be the subcomplex of $\Delta$ obtained by removing all faces having a vertex in the set $A$. Then $\tilde H_i(\Delta') = 0$ for $i \ne d,\ d-1$. If in addition $\Delta$ and the links $\mathrm{lk}(a)$ have free integral homology for all $a \in A$, then $\Delta'$ also has free integral homology.
Proof. The following observations are sufficient, since the essential ideas are as in the proof of Theorem 1.1. The key point now is that the relative chain complex $C(\Delta)/C(\Delta')$ is isomorphic to the direct sum, over $a \in A$, of the chain complex of the suspension of the link $\mathrm{lk}(a)$ of $a$ in $\Delta$. Hence the relative homology is given by the formula
$$H_i(\Delta, \Delta') \;\cong\; \bigoplus_{a\in A} \tilde H_{i-1}(\mathrm{lk}(a)).$$
But by hypothesis, the link $\mathrm{lk}(a)$ has zero homology in degrees $j \ne d-1$. That is, the relative homology is zero for degrees $i \ne d$. Now the conclusion follows exactly as in Theorem 1.1.
In the particular case when $\Delta$ is a pure $d$-dimensional Cohen-Macaulay simplicial complex, conditions (i) and (ii) of the above theorem are automatically satisfied. The conclusion of Theorem 1.1 may thus be obtained by taking $\Delta$ to be the order complex of a Cohen-Macaulay poset of rank $d+2$. Note in particular that the hypothesis, in Theorem 1.1, that for each $a \in A$ at least one of the intervals $(\hat 0, a)$ or $(a, \hat 1)$ have free homology, is not necessary for the first conclusion.
The full result of [Mu2, Corollary 6.6] also follows from the above. In addition, just as we obtained Corollary 1.2, we recover Stanley's result on subcomplexes of completely balanced Cohen-Macaulay complexes (see [St1, Theorem 4.3]) from Theorem 1.5. The details are identical to the above proof and the proof of Corollary 1.2.
2. A homotopy equivalence
We begin by stating a powerful theorem of Quillen, which we shall use repeatedly throughout this paper. For a survey of the variations on this useful principle see [Bj3].
Theorem 2.0. (Quillen's fibre lemma) [Q, Proposition 1.6] Let $P$ and $Q$ be bounded posets and let $f : \bar P \to \bar Q$ be an order-preserving map of posets. Assume that for all $a \in \bar Q$, the fibre $F_a = \{x \in \bar P : f(x) \le a\}$ is contractible. Then $f$ induces a homotopy equivalence of the order complexes $\Delta(P)$ and $\Delta(Q)$. (The same conclusion holds if the fibre $F'_a = \{x \in \bar P : f(x) \ge a\}$ is contractible for all $a \in \bar Q$.)
Recall that $P_n^k$ is the subposet of $\Pi_n$ obtained by deleting all modular elements of type $(i, 1^{n-i})$, $2 \le i \le k$. It follows from the remarks about $Q_n^k$ that $P_n^k$ is also ranked, but of rank $n-2$ (the atoms have been deleted). The aim of this section is to show that the $(n-4)$-dimensional complex $\Delta(\hat P_n^k)$ and the $(n-3)$-dimensional complex $\Delta(\hat Q_n^k)$ have the same homology. In fact the following stronger result holds.
Theorem 2.1. The order complexes of $P_n^k$ and $Q_n^k$ are homotopy equivalent. More generally, for any subset $I$ of the integers, the inclusion of the corresponding subposets induces a homotopy equivalence of the corresponding order complexes.
Proof. We shall only prove the first statement, since the second follows by the identical argument.
Consider the inclusion map $f : \hat P_n^k \hookrightarrow \hat Q_n^k$. By Quillen's fibre lemma we need only show that the fibres $F_a = \{x \in \hat P_n^k : x \ge a\}$ are contractible. This is clearly true if $a \in P_n^k$, so assume $a$ is a modular element with a unique nontrivial block $B$ of size $i$, $2 \le i \le k-1$. For notational convenience assume $a$ is the partition (with $n-i+1$ blocks) in which the elements not in $B$ are the singletons. Viewing $a$ as a partition of an $(n-i+1)$-element set with distinguished element consisting of the block $B$, the fibre $F_a$ is poset isomorphic to the poset $R_{n-i+1}(S)$ obtained from $\hat\Pi_{n-i+1}$ by removing the set $S$ consisting of all modular elements whose unique nontrivial block contains the distinguished element $B$ and is of cardinality $s$, $2 \le s \le k-i+1$.
The fact that these posets are contractible follows from the next lemma:
Lemma 2.2. Let $k \ge 2$, and let $S$ be the subset of modular elements of $\Pi_n$ of type $(i, 1^{n-i})$, $2 \le i \le k$, such that $n$ is in the unique nontrivial block of every element of $S$.
Let $R_n(S)$ be the subposet of $\hat\Pi_n$ obtained by removing all elements of $S$. Then (the order complex of) $R_n(S)$ is contractible.
Proof. Let $\alpha_n$ denote the partition in $\Pi_n$ consisting of exactly two blocks, one of which is the singleton block $\{n\}$. Note that $\alpha_n \in R_n(S)$. Define a map $f : R_n(S) \to \hat\Pi_n$ by
$$f(x) = x \wedge \alpha_n.$$
Here $\wedge$ denotes the meet operation in the lattice $\Pi_n$. Note that the effect of taking the meet of $x$ with $\alpha_n$ is to fix $x$ if $n$ is a singleton of $x$, or else to produce a new partition $x'$, where $x'$ is obtained from $x$ by splitting the block $B$ containing $n$ into two blocks so that $n$ is a singleton. Now observe that
(a) $f$ is order-preserving;
(b) the image of $f$ is contained in $R_n(S)$; for this it suffices to note that $\hat 0$ is not in the image of $f$, and this is ensured by the fact that $S$ contains all the atoms whose unique nontrivial block contains $n$;
(c) $f(x) \le x$ for all $x$.
Conditions (b) and (c) together imply that the fibres of $f$ are contractible. Hence, by Quillen's fibre lemma again, $f$ is a homotopy equivalence between $R_n(S)$ and the image of $f$. But the image of $f$ clearly consists of all partitions in $\hat\Pi_n$ in which $n$ is a singleton, except for the least element of $\Pi_n$. That is, the image of $f$ is poset-isomorphic to $\bar\Pi_{n-1}$ together with a greatest element $\hat 1$, where the $\hat 1$ is provided by the two-block partition $\alpha_n$. Hence the image of $f$ is contractible.
This completes the proof of Theorem 2.1.
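To illustrate the deformation $f(x) = x \wedge \alpha_n$, consider a small case (this example is ours, not the paper's):

% Take n = 5, so \alpha_5 = 1234|5. For x = 125|34 \in R_5(S):
%   f(x) = (125|34) \wedge (1234|5) = 12|34|5,
% i.e. the block containing 5 is split so that 5 becomes a singleton,
% while x = 12|34|5 itself is fixed by f. In every case f(x) \le x and
% 5 is a singleton of f(x), so the image of f sits inside the copy of
% \Pi_4 consisting of partitions of \{1,...,5\} with 5 a singleton.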
Remark 2.2.1. The conclusion of Lemma 2.2 is valid for more general subsets $S$ of modular elements, as long as $S$ contains all the modular elements of type $(2, 1^{n-2})$ (i.e., atoms) whose nontrivial block contains $n$, and $n$ is in the nontrivial block of all elements of $S$. The special case of Lemma 2.2 when $S$ consists only of atoms follows from [Wal1, Theorem 6.1]; here $S$ is the set of complements of the two-block partition $\alpha_n$ in which $n$ is a singleton (for elaborations of this principle see the references in [Bj3]).
Theorem 2.3. Let $2 \le k \le n-1$. The reduced integral homology of the posets $P_n^k$ and $Q_n^k$ vanishes except in dimension $(n-4)$. This holds more generally for the posets of Theorem 2.1. In particular, for $n \ge 4$ and $k \ge 3$, the (pure) posets $Q_n^k$ are not Cohen-Macaulay.
Proof. From Theorem 2.1 it follows that the two posets have the same homology. Since $P_n^k$ has rank $(n-2)$, its order complex is $(n-4)$-dimensional, so its homology vanishes for all degrees greater than $n-4$. On the other hand, Proposition 1.4 says that $Q_n^k$ can have nonvanishing homology only in degrees $n-3$ and $n-4$. The result follows.
As one more application of these arguments, we also obtain the following.
Theorem 2.4. The poset $\hat P_n^{n-1}$ is homotopy-equivalent to $\bar\Pi_{n-1}$. Hence the order complexes of $P_n^{n-1}$ and $Q_n^{n-1}$ have the homotopy type of a wedge of $(n-2)!$ spheres of dimension $(n-4)$.
Proof. Consider the map $f : \hat P_n^{n-1} \to \hat\Pi_n$ defined as in Lemma 2.2. The image of this map consists of all partitions in $\hat\Pi_n$ such that $n$ is a singleton, except for the two-block partition $\alpha_n$ of Lemma 2.2; it is therefore isomorphic to $\bar\Pi_{n-1}$. The fibres (with respect to the image!) are contractible by the same argument as in the proof of Theorem 2.1. More precisely, we consider only fibres $F_a = \{x : f(x) \ge a\}$ for $a$ in the image of $f$. (Note that the fibre of the two-block partition $\alpha_n$ of Lemma 2.2 is empty and hence not contractible.) The result now follows by Lemma 2.2 and Quillen's fibre lemma.
The final statement follows from the well-known fact that the order complex of the partition lattice $\Pi_n$ is shellable ([Bj1, Example 2.9]), and hence (see [Bj2, Theorem 1.3], [BW, Theorem 4.1]) has the homotopy type of a wedge of $(n-1)!$ spheres of dimension $(n-3)$ (see [St2] for the M\"obius (Betti) number computation).
From Corollary 1.3 we now have
Corollary 2.5. For $2 \le k \le n-1$, let $\beta_n^k$ denote the common dimension of the unique nonvanishing homology of the posets $P_n^k$ and $Q_n^k$. Then
$$\beta_n^k \;=\; (n-1)!\;\frac{n-k}{k}.$$
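Two sanity checks on this formula (ours, using only results already stated):

% For k = n-1: \beta_n^{n-1} = (n-1)!\,\tfrac{1}{n-1} = (n-2)!,
% matching the wedge of (n-2)! spheres in Theorem 2.4.
% For n = 4, k = 3: \beta_4^3 = 3!\,\tfrac{1}{3} = 2,
% matching the wedge of two 0-spheres in Figure 2.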
In order to investigate whether or not $P_n^k$ is Cohen-Macaulay, we need to look at proper intervals in the poset. Note that the obvious analogue of Theorem 2.1 is false for arbitrary intervals of $P_n^k$ and $Q_n^k$. For example, in $Q_6^5$ a suitable lower interval $J = (\hat 0, y)$ is homotopy equivalent to a wedge of six spheres $S^2$ (it coincides with the same interval in $\Pi_6$), whereas in $P_6^5$ the corresponding interval has rank 3. It is not hard to see that $J$ then has the homotopy type of a wedge of 7 spheres of dimension 1.
To obtain information on intervals $(\hat 0, y)$ in $P_n^k$ and $Q_n^k$, we need the following generalisation of Lemma 2.2.
Lemma 2.6. Let $S$ be the subset of the modular partitions in $\Pi_n$ as in Lemma 2.2, and let $y \in \hat\Pi_n$ be such that $n$ is in a nontrivial block of $y$. Then (the order complex of) the subposet $[\hat 0, y]\setminus S$ of the interval $[\hat 0, y]$ is contractible.
Proof. Note that $[\hat 0, y]\setminus S$ is simply the interval $I = [\hat 0, y]$ in the poset $R_n(S)$ of Lemma 2.2. Restrict the map $f$ of Lemma 2.2 to the interval $\bar I$; then $f(\bar I) \subseteq \bar I$. The image of $f$ consists of all partitions in $I$ such that $n$ is a singleton, except for $\hat 0$. Also $f(y) \in \bar I$: this is because $n$ is not a singleton of $y$, and hence $f(y) \ne y$. Clearly $f(y)$ is the (unique) greatest element of $f(I)$, and hence $f(I)$ is contractible. Now by the arguments of Lemma 2.2, $I$ is contractible.
Proposition 2.7. Let $y \in P_n^k$, let $J$ denote the interval $(\hat 0, y)$ in $P_n^k$, and let $J'$ denote the subposet of the interval $(\hat 0, y)$ in $Q_n^k$ obtained by removing the set $M_{y,k}$ of all modular elements whose unique nontrivial block coincides with a block of $y$ and has size $\le k$. Then the inclusion $J \hookrightarrow J'$ induces a homotopy equivalence of order complexes.
Proof. This follows by checking that the fibres are contractible, as in Theorem 2.1, except that now we make use of Lemma 2.6. Note that removal of the elements in the set $M_{y,k}$ is necessary in order to apply the lemma.
Proposition 2.8. Let $J$ and $J'$ be as in Proposition 2.7. Then the homology of $J$ (and of $J'$) vanishes in all degrees different from $\mathrm{rank}_{\Pi_n}(y) - 3$, the top dimension of the interval $J$ of $P_n^k$.
Proof. Proposition 2.7 says that $J$ and $J'$ have the same homology. There are two observations. First, $J'$ is obtained from the interval $(\hat 0, y)$ in $\Pi_n$ by deleting an antichain. Hence by Theorem 1.1, the nonzero homology of $J'$ can occur only in degrees $\mathrm{rank}_{\Pi_n}(y) - 2$ and $\mathrm{rank}_{\Pi_n}(y) - 3$. Second, the dimension of the order complex of $J$ is the smaller of these two degrees. The result follows.
Now let $[x, y]$ be an interval in the poset $P_n^k$, with $x \ne \hat 0$. First assume there are two nontrivial blocks of $x$ which are contained in distinct blocks of $y$. In this case it is clear that the interval $[x, y]$ of $P_n^k$ coincides with the interval between $x$ and $y$ in $\Pi_n$, and is therefore Cohen-Macaulay.
Next suppose all the nontrivial blocks of $x$ are contained in a single block of $y$. Let $a_i$ be the size of the nontrivial block $A_i$ of $x$, $1 \le i \le r$, and let $s$ be the size of the nontrivial block $B$ of $y$ which contains them. Note that $r \ge 2$. Let $x'$ be the partition of the set $B$ induced by $x$ ($x'$ has $r$ nontrivial blocks $A_i$ and $s - \sum_i a_i$ singletons). Then the interval $[x, y]$ of $P_n^k$ is isomorphic to a product of the interval $[x', \hat 1]$ in the corresponding subposet of the partitions of $B$, together with a collection of partition lattices.
These observations and the preceding results show that $P_n^k$ is Cohen-Macaulay if and only if all intervals of the form $[x, \hat 1]$ have homology which vanishes in all dimensions less than the highest. Although the analogue of Theorem 2.1 does hold for such intervals, this fact is not as helpful in this case. The difficulty occurs because there is no longer a shift in the dimensions of the order complexes of the intervals $J$ and $J'$.
Proposition 2.9. Let $J = (x, \hat 1)$ be an interval in $P_n^k$, $x \ne \hat 0$, and let $J'$ be the interval $(x, \hat 1)$ in the poset $Q_n^k$. Then the inclusion map $J \hookrightarrow J'$ is a homotopy equivalence, and hence $J$ and $J'$ can have nonvanishing reduced homology only in dimensions $\mathrm{rank}(\hat 1) - \mathrm{rank}(x) - 2$ and $\mathrm{rank}(\hat 1) - \mathrm{rank}(x) - 3$, where $\mathrm{rank}$ denotes the rank function of $\Pi_n$.
Proof. The statements of the theorem are immediate if $J'$ (and hence $J$) coincides with the interval $(x, \hat 1)$ of $\Pi_n$, i.e., if $x$ is not smaller than a modular element of type $(j, 1^{n-j})$, $2 \le j \le k$. Hence we consider the other case.
We use the same argument as in Theorem 2.1. We need to show that the fibres $F_a = \{z \in J : z \ge a\}$ for $a \in J'\setminus J$ of type $(j, 1^{n-j})$ are contractible. Let $B$ be the unique nontrivial block of $a$.
Just as in the proof of Theorem 2.1, the fibre $F_a$ is isomorphic to a poset $R_m(S)$ as in Lemma 2.2, for a suitable choice of $m$ (namely, the number of blocks of $a$) and $S$. Hence it is contractible by the lemma.
The conclusion now follows from Theorem 1.1.
Fix an integer $a$ between 2 and $k$. Define $T^{\le k}_{n,a}$ to be the subposet obtained from $\Pi_n$ by deleting all modular elements $x$ of type $(j, 1^{n-j})$, $a \le j \le k$, such that the unique nontrivial block of $x$ contains the $a$ largest integers $n-a+1, \ldots, n$. Similarly define $T^{=k}_{n,a}$ to be the subposet obtained from $\Pi_n$ by deleting all modular elements $x$ of type $(k, 1^{n-k})$ such that the unique nontrivial block of $x$ contains the elements $n-a+1, \ldots, n$.
Let $x \in P_n^k$ be of rank $n-m$, and assume $x$ has at least one singleton block. Then it is easy to see that $[x, \hat 1]$ in $P_n^k$ is poset isomorphic to $T^{\le k}_{m,a}$, while $[x, \hat 1]$ in $Q_n^k$ is poset isomorphic to $T^{=k}_{m,a}$, for suitable parameters $m$ and $a$ (viz., $m$ is the number of blocks of $x$, and $a$ is the number of nontrivial blocks of $x$).
Hence Proposition 2.9 may be rephrased as follows:
Proposition 2.9$'$. The inclusion $T^{\le k}_{n,a} \hookrightarrow T^{=k}_{n,a}$ is a homotopy equivalence.
Note that the order complexes of $T^{\le k}_{n,a}$ and $T^{=k}_{n,a}$ both have the same dimension $(n-3)$; hence by Theorem 1.1 we can only conclude that they both have nonvanishing homology only in degrees $n-3$ and $n-4$. Moreover from Corollary 1.3 we have
$$\dim \tilde H_{n-3}\big(T^{=k}_{n,a}\big) - \dim \tilde H_{n-4}\big(T^{=k}_{n,a}\big) \;=\; \dim \tilde H_{n-3}(\Pi_n) \;-\; \sum_{x} \dim \tilde H(\hat 0, x)\,\dim \tilde H(x, \hat 1),$$
the sum running over the deleted antichain of modular elements $x$. In particular, since the right-hand side is clearly positive, we are forced to conclude that the homology is nonzero in degree $(n-3)$.
Fortunately it is not hard to show that
Proposition 2.10. The posets $T^{=k}_{n,a}$ are (pure) shellable. Hence the posets $T^{\le k}_{n,a}$ and $T^{=k}_{n,a}$ are both homotopy equivalent to a wedge of spheres of dimension $(n-3)$, the number of spheres being given by the (positive) right-hand side of the formula above. Hence (the order complexes of) all intervals of the form $[x, \hat 1]$ and $[x, y]$, $x \ne \hat 0$, in $P_n^k$ have the homotopy type of a wedge of spheres.
Proof. We shall use the following simple EL-labelling of the partition lattice due to Wachs [Wac]. If $u \lessdot v$ is a covering relation in $\Pi_n$, so that $v$ is obtained from $u$ by merging two blocks $B_1$ and $B_2$, define the label of the edge $(u, v)$ to be $\max\{\min B_1, \min B_2\}$. We shall show that this EL-labelling restricts to an EL-labelling of $T^{=k}_{n,a}$.
With respect to this labelling, there is a unique strictly increasing chain $c_{(x,y)}$ in every interval $(x, y)$ of $\Pi_n$; it suffices to show that for every $x < y$ in $T^{=k}_{n,a}$, the chain $c_{(x,y)}$ is a chain of $T^{=k}_{n,a}$.
We need only consider those elements $x < y$ in $T^{=k}_{n,a}$ such that the interval $(x, y)$ in $\Pi_n$ contains elements forbidden in $T^{=k}_{n,a}$. Such an element $z$ must have a unique nontrivial block $B$ of size $k$ containing the $a$ largest integers $n-a+1, \ldots, n$. If the unique strictly increasing chain $c_{(x,y)}$ contains the element $z$, then since $x \ne z$, it must have the label $n$ on one of its edges. This edge can only be the last edge of the chain, which implies that $z = y$, contradicting the fact that $y \in T^{=k}_{n,a}$.
The remaining statements follow from the remarks preceding the proposition.
Putting together the work of this section, we have shown
Theorem 2.11. The poset $P_n^k$ is Cohen-Macaulay over the integers, and for all $x \le y$ the homology of the interval $[x, y]$ is free.
Small examples suggest that all the intervals in fact have a nice topological structure. We have
Conjecture 2.12. For $2 \le k \le n-1$, the order complex of $P_n^k$ is shellable. More generally, any interval $[x, y]$ in the poset $P_n^k$ is either contractible or homotopy equivalent to a wedge of spheres of the highest possible dimension $\mathrm{rank}(y) - \mathrm{rank}(x) - 2$ (here rank is the rank function in $P_n^k$). In particular $P_n^k$ is homotopy equivalent to a wedge of $\beta_n^k$ spheres of dimension $(n-4)$.
For $k = 2$, $P_n^2$ is simply a rank-selected subposet of $\Pi_n$; hence its order complex is shellable (by [Bj1, Theorem 4.1]). It follows from the general theory of shellability (see [Bj2, Theorem 1.3] and [BW, Theorem 4.1]) that the order complex has the homotopy type of a wedge of spheres. The subposet $P_n^3$ (in fact the intersection lattice of a codimension 2 orbit arrangement) was shown to be CL-shellable by this author and V. Welker (1993, unpublished), and independently in recent far-reaching work of Kozlov ([Kz]). However this argument seems to break down at $k = 4$: it can be seen that upper intervals $(x, \hat 1)$ in $P_n^k$ are not totally semimodular, making it difficult to show CL-shellability.
Similarly, we conjecture that
Conjecture 2.13. All intervals $[x, y]$ in the posets $Q_n^k$ are either contractible or homotopy equivalent to a wedge of spheres all of the same dimension $d$, where $d$ is either the highest possible dimension or one less. In particular $Q_n^k$ is homotopy equivalent to a wedge of $\beta_n^k$ spheres of dimension $(n-4)$.
3. The representation of the symmetric group $S_n$ on the homology
In this section all homology is taken over the field of complex numbers. We shall first compute the $S_n$-module structure of the unique nonvanishing homology of the poset $Q_n^k$. For this we need to recall some of the results of [Su]. For a finite poset $Q$ and a finite group $G$ of automorphisms of $Q$, we denote by $\mathrm{Alt}(Q)$ the Lefschetz ($G$-)module of $Q$, i.e.,
$$\mathrm{Alt}(Q) \;=\; \sum_i (-1)^i\, \tilde H_i(Q).$$
Theorem 3.1. (See [Su, Theorem 1.10 and Remark 1.11]) Let $P$ be a Cohen-Macaulay poset of rank $r$, $G$ a finite group of automorphisms of $P$, and $Q$ a $G$-invariant subposet of $P$.
Then as $G$-modules:
$$\mathrm{Alt}(Q) \;=\; \mathrm{Alt}(P) \;+\; \sum_{c\,:\,c_1 < \cdots < c_j} (-1)^{j}\,\Big(\mathrm{Alt}(\hat 0, c_1)\otimes \mathrm{Alt}(c_1, c_2)\otimes\cdots\otimes \mathrm{Alt}(c_j, \hat 1)\Big)\Big\uparrow_{G_c}^{G},$$
where the sum runs over all representatives of $G$-orbits of chains $c$ of elements not in $Q$, and $G_c$ is the stabiliser of the chain $c$ in $P$.
In the special case when $P\setminus Q$ is an antichain, this result simplifies to give
Theorem 3.2. Let $P$ be a Cohen-Macaulay poset of rank $r$ and $G$ a finite group of automorphisms of $P$. Let $Q$ be a $G$-invariant subposet of $P$ such that $P\setminus Q$ is an antichain. Then as a $G$-module, the Lefschetz module $\mathrm{Alt}(Q)$ of $Q$ is determined by
$$\mathrm{Alt}(Q) \;=\; \mathrm{Alt}(P) \;-\; \sum_{a} \Big(\mathrm{Alt}(\hat 0, a)\otimes \mathrm{Alt}(a, \hat 1)\Big)\Big\uparrow_{G_a}^{G},$$
the sum running over representatives $a$ of the $G$-orbits of $P\setminus Q$.
Another way to obtain Theorem 3.2 is to observe that all the maps in the exact homology sequence of the pair $(P, Q)$ are $G$-equivariant; consequently the proof of Theorem 1.1 can be made $G$-equivariant to yield Theorem 3.2.
The hypotheses of the next theorem arise frequently in the study of subposets of the partition lattice. The theorem is a general result on the homology representation of upper intervals in posets of partitions, and was used extensively in [Su]. The details of the proof are identical to the proof of [Su, Theorem 1.4].
Theorem 3.3. [Su] Let $A_n \subseteq \Pi_n$ be a family of posets of set partitions and let $x \in A_n$ be of type $\lambda$, where $\lambda$ is an integer partition of $n$ with $m_i$ blocks of size $i$. Assume that $(x, \hat 1)_{A_n}$ is poset isomorphic to a poset $B_r$, where $r$ is the number of blocks of $x$. There is an action of the symmetric group $S_r$ on the poset $B_r$, by permuting the blocks of $x$. Let $\alpha_r$ denote the (possibly virtual) representation of $S_r$ on the Lefschetz module $\mathrm{Alt}(B_r)$. Note that there is a copy of $\times_i S_{m_i}$ in $S_r$. Let $G_\lambda$ denote the stabiliser of $x$; thus $G_\lambda$ is the direct product of wreath product groups $\times_i S_{m_i}[S_i]$, where $S_a[S_b]$ is the wreath product group obtained by letting $S_a$ act on $a$ copies of $S_b$.
Finally assume that the restriction of the representation $\alpha_r$ to $\times_i S_{m_i}$ can be written (uniquely) as the following sum of irreducible modules:
$$\alpha_r\Big\downarrow_{\times_i S_{m_i}} \;=\; \sum_{\mu} c_{\mu}\, \bigotimes_i V^{\mu^{(i)}},$$
where $\mu$ denotes the ordered tuple of partitions $\mu^{(i)}$ of $m_i$, and $V^{\mu^{(i)}}$ denotes the irreducible $S_{m_i}$-module indexed by the integer partition $\mu^{(i)}$.
Then the (possibly virtual) representation of $G_\lambda$ on the Lefschetz module of $(x, \hat 1)_{A_n}$ is given by
$$\sum_{\mu} c_{\mu}\, \bigotimes_i V^{\mu^{(i)}}\big[\mathbf 1_{S_i}\big],$$
where $V^{\mu^{(i)}}[\mathbf 1_{S_i}]$ denotes the wreath product $S_{m_i}[S_i]$-module of the irreducible $V^{\mu^{(i)}}$ with the trivial $S_i$-module $\mathbf 1_{S_i}$.
For the purposes of this paper we shall only need to apply Theorem 3.3 to
the upper interval (x; - 1) of the partition lattice \Pi n ; when x is an element of type
In this case all the posets involved are Cohen-Macaulay. We write -n
for the representation of Sn on the top homology of \Pi n : The interval (x; - 1) is
isomorphic to the partition lattice \Pi n\Gammak+1 ; and hence in applying Theorem 3.3 we
need to compute the restriction of -n\Gammak+1 to Sn\Gammak \Theta S 1 : But by [St2] this is just
the regular representation of Sn\Gammak : Hence we have the following result, which was
also worked out in [Su].
Corollary 3.4. (See [Su, Example 2.11]) Let x be an element of type (k; 1 n\Gammak ) in
The representation of Sn\Gammak \Theta S k on the top homology of the interval (x; - 1) is
ae
where ae n\Gammak denotes the regular representation of Sn\Gammak :
It is now easy to compute the homology representation of $Q_n^k$.
Theorem 3.5. Let $2 \le k \le n-1$. The representation of the symmetric group $S_n$ on the unique nonvanishing homology $\tilde H_{n-4}(Q_n^k)$ is given by
$$(3.1)\qquad \tilde H_{n-4}(Q_n^k) \;=\; \big(\rho_{n-k} \otimes \pi_k\big)\Big\uparrow_{S_{n-k}\times S_k}^{S_n} \;\ominus\; \pi_n.$$
Proof. Let $x_0$ denote any partition of type $(k, 1^{n-k})$. Theorem 3.2 gives the following equality of $S_n$-modules:
$$\tilde H_{n-4}(Q_n^k) \;=\; \Big(\tilde H(x_0, \hat 1) \otimes \tilde H(\hat 0, x_0)\Big)\Big\uparrow_{S_{n-k}\times S_k}^{S_n} \;\ominus\; \tilde H_{n-3}(\Pi_n).$$
Now use Corollary 3.4 and the fact that $(\hat 0, x_0)$ is isomorphic to $\hat\Pi_k$.
Our next goal is to compute the homology representation of $P_n^k$. We indicate two approaches. The first is a straightforward application of Theorem 3.2, and uses the same arguments as in the proof of Theorem 3.5.
Theorem 3.6. Let $2 \le k \le n-1$. As an $S_n$-module the unique nonvanishing homology $\tilde H_{n-4}(P_n^k)$ of $P_n^k$ is given by the representation (3.1); here $\rho_{n-k}$ denotes the regular representation of $S_{n-k}$.
Proof. We proceed by induction on $k$. The result holds for $k = 2$ by [Su, Theorem 2.10 and Example 2.11]. Assume it holds for all parameters less than $k$. Since $P_n^k$ is the subposet of $P_n^{k-1}$ obtained by deleting the elements of type $(k, 1^{n-k})$, if $x_0$ is a partition of type $(k, 1^{n-k})$, then using Theorem 3.2 (with $P = P_n^{k-1}$ and $Q = P_n^k$) we have the equality of $S_n$-modules
$$\tilde H_{n-4}(P_n^k) \;=\; \tilde H_{n-4}(P_n^{k-1}) \;\ominus\; \Big(\tilde H(\hat 0, x_0) \otimes \tilde H(x_0, \hat 1)\Big)\Big\uparrow_{S_{n-k}\times S_k}^{S_n}.$$
The interval $(x_0, \hat 1)$ in $P_n^{k-1}$ is isomorphic to a partition lattice, and the $(S_{n-k} \times S_k)$-module structure of its homology follows from Corollary 3.4. The interval $(\hat 0, x_0)$ in $P_n^{k-1}$ is clearly isomorphic to $\hat P_k^{k-1}$, and by the induction hypothesis the structure of the homology of $P_k^{k-1}$ as an $S_k$-module is given by the representation $\pi_{k-1}\uparrow^{S_k} \ominus\, \pi_k$. It follows that as an $(S_{n-k} \times S_k)$-module, the homology of $(\hat 0, x_0)$ is given by $\mathbf 1_{S_{n-k}} \otimes \big(\pi_{k-1}\uparrow^{S_k} \ominus\, \pi_k\big)$.
Now by routine manipulations the result follows.
Corollary 3.7. Let $2 \le k \le n-1$. The character values of the representation of the symmetric group $S_n$ on the unique nonvanishing homology of $P_n^k$ and of $Q_n^k$, for an element in $S_n$ of cycle-type $\sigma$, vanish unless either $\sigma = (d^{\,n/d})$ for some divisor $d$ of $n$, or $\sigma = (d^{\,k/d}, 1^{n-k})$ for some divisor $d$ of $k$; in these cases the values are obtained by combining formula (3.1) with the computation below, using the standard formula for an induced character (the regular factor $\rho_{n-k}$ forces the $n-k$ points outside the nontrivial support to be fixed).
Proof. By a well-known result of Hanlon (see [Ha1, Theorem 4.1], [St2, Lemma 7.1]), the character value of the representation $\pi_n$ on an element of cycle-type $\sigma$ in $S_n$ is given by
$$\chi_{\pi_n}(\sigma) \;=\; \begin{cases} (-1)^{\,n - n/d}\;\mu(d)\,(n/d - 1)!\;d^{\,n/d - 1}, & \text{if } \sigma = (d^{\,n/d}) \text{ for some } d \mid n,\\[2pt] 0, & \text{otherwise,}\end{cases}$$
where $\mu$ denotes the number-theoretic M\"obius function.
Now the result follows from formula (3.1).
By Theorems 3.5 and 3.6, the posets $P_n^k$ and $Q_n^k$ have $S_n$-isomorphic homology. In fact we can show that the homotopy equivalence of Theorem 2.1 is an $S_n$-homotopy equivalence, thereby establishing the result in another way. First we state the group-equivariant version of Quillen's fibre lemma.
Theorem 3.8. (See, e.g., [Be, Chapter 6]) Let $P$ and $Q$ be bounded posets, let $G$ be a finite group of automorphisms of $P$ and $Q$, and let $f : \bar P \to \bar Q$ be an order-preserving $G$-map of posets. For $a \in \bar Q$, let $G_a$ denote the stabiliser of $a$. Assume that for all $a \in \bar Q$, the fibre $F_a = \{x \in \bar P : f(x) \le a\}$ is $G_a$-contractible (i.e., the fixed-point subposet $F_a^{G_a}$ of points in $F_a$ fixed by $G_a$ is contractible). Then $f$ induces a $G$-homotopy equivalence of the order complexes $\Delta(P)$ and $\Delta(Q)$. (The same conclusion holds if the fibre $F'_a = \{x \in \bar P : f(x) \ge a\}$ is $G_a$-contractible for all $a \in \bar Q$.)
In order to show that the homotopy equivalence of Theorem 2.1 is group-equivariant, we need to show that the fibres $F_a$ in the proof of the theorem are $G_a$-contractible, where $G_a$ is the stabiliser of the element $a$ of type $(j, 1^{n-j})$. (Thus $G_a$ is isomorphic to the Young subgroup $S_{n-j} \times S_j$.) This in turn will follow from the group-equivariant version of Lemma 2.2.
It is not hard to see that the homotopy equivalence of Lemma 2.2 is an $S_{n-1}$-homotopy equivalence, where we identify $S_{n-1}$ with the subgroup of $S_n$ which fixes $n$. For any subgroup $H$ of $S_{n-1}$, it is easy to check that the map $f$ restricts to a homotopy equivalence on the fixed-point subposet $R_n(S)^H$ consisting of points fixed by $H$, and that the image remains contractible. Hence the posets $R_n(S)$ are in fact $S_{n-1}$-contractible.
Proposition 3.9. The inclusion $\hat P_n^k \hookrightarrow \hat Q_n^k$ (more generally, for any subset $I$, the inclusion of the corresponding subposets) induces an $S_n$-homotopy equivalence of the corresponding order complexes.
These observations also imply that the homotopy equivalence between $\hat P_n^{n-1}$ and $\bar\Pi_{n-1}$ in Theorem 2.4 is an $S_{n-1}$-homotopy equivalence. Because the case $k = n-1$ is of particular interest, we state it separately:
Corollary 3.10. The posets $P_n^{n-1}$ and $Q_n^{n-1}$ (more generally, the corresponding posets of Theorem 2.1) are $S_{n-1}$-homotopy equivalent to $\bar\Pi_{n-1}$, and have homology modules that are $S_n$-isomorphic to the representation
$$(3.2)\qquad \pi_{n-1}\Big\uparrow_{S_{n-1}}^{S_n} \;\ominus\; \pi_n.$$
This is the representation of $S_n$ computed by Sarah Whitehouse [W] on the tree complex of Alan Robinson (see also [R], [RW] and [Ha2]). It follows from the $S_{n-1}$-homotopy equivalence of Corollary 3.10 that the restriction to $S_{n-1}$ is the representation $\pi_{n-1}$.
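As a quick dimension check of (3.2) (ours, using $\dim \pi_m = (m-1)!$):

% dim(\pi_{n-1}\uparrow^{S_n}) - dim(\pi_n) = n\,(n-2)! - (n-1)! = (n-2)!,
% consistent with Theorem 2.4: the homology of P_n^{n-1} is a wedge of
% (n-2)! spheres of dimension (n-4).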
Denote by $\hat\pi_n$ the lifting of $\pi_{n-1}$ given by the representation (3.2). Let $V^{(n-1,1)}$ denote the irreducible $S_n$-module indexed by the integer partition $(n-1, 1)$. By basic manipulations one sees that
$$(3.3)\qquad \pi_n \;=\; \hat\pi_n \otimes V^{(n-1,1)},$$
a formula which appears in [GK].
4. Conclusion
In this final section we discuss some questions raised by the phenomena exhibited in this paper for the partition lattice $\Pi_n$.
Let $M_n$ denote the subposet of $\Pi_n$ consisting of the modular partitions in $\bar\Pi_n$, together with the elements $\hat 0$ and $\hat 1$. Clearly $M_n$ is just the truncated Boolean lattice of subsets of an $n$-set, with the subsets of size 1 (i.e., the rank one elements) deleted. It follows from Stanley's theory of R-labellings ([St3]) that the M\"obius number of $M_n$ is $(-1)^{n-1}(n-1)$. On the other hand by Theorem 2.4, we know that $P_n^{n-1}$ has M\"obius number $(-1)^{n-4}(n-2)!$.
Hence we have, at the level of M\"obius numbers, the equation
$$(4.1)\qquad |\mu(\Pi_n)| \;=\; |\mu(M_n)|\cdot|\mu(P_n^{n-1})|.$$
We also have the topological result that
$$(4.2)\qquad \Delta(\hat P_n^{n-1}) \;\simeq\; \Delta(\bar\Pi_{n-1}).$$
The formula (3.3) of the preceding section further suggests that the factorisation carries over to the homology, at the level of $S_n$-modules, with the introduction of a sign twist. For by a result of Solomon ([So], see also [St2]), the representation of $S_n$ on the homology of $M_n$ is precisely the irreducible indexed by the integer partition $(2, 1^{n-2})$. Hence (3.3) says that as modules over the integers,
$$\tilde H(\Pi_n) \;\cong\; \tilde H(M_n) \otimes \tilde H(P_n^{n-1}),$$
and as $S_n$-modules,
$$(4.3)\qquad \tilde H(\Pi_n) \;\cong\; \tilde H(M_n) \otimes \tilde H(P_n^{n-1}) \otimes \mathrm{sgn}.$$
It would be interesting to see if these phenomena, e.g., (4.1) and (4.3), occur for other instances of removing modular elements from a supersolvable geometric lattice. For example, the analogues of (4.1) and (4.3) hold trivially for the Boolean lattice, where every element is modular. The analogue of (4.2) however is clearly false.
References
D. Benson, "Representations and Cohomology."
"Handbook of Combinatorics."
"Geometry, Topology and Physics."
P. Hanlon, The fixed-point partition lattices.
P. Hanlon, Otter's method and the homology of homeomorphically irreducible k-trees.
J. R. Munkres, "Elements of Algebraic Topology."
J. R. Munkres, Topological results in combinatorics.
D. Quillen, Homotopy properties of the poset of nontrivial p-subgroups of a group.
A. Robinson, The space of fully grown trees.
A. Robinson and S. Whitehouse, The tree representation of $\Sigma_{n+1}$.
L. Solomon, A decomposition of the group algebra of a finite Coxeter group.
R. P. Stanley, Some aspects of groups acting on finite posets.
R. P. Stanley, "Enumerative Combinatorics."
S. Sundaram, The homology representations of the symmetric group on Cohen-Macaulay subposets of the partition lattice.
M. L. Wachs, A basis for the homology of the d-divisible partition lattice.
J. W. Walker, Topology and combinatorics of ordered sets.
J. W. Walker, Homotopy type and Euler characteristic of partially ordered sets.
316521 | Modifying a Sparse Cholesky Factorization. | Given a sparse symmetric positive definite matrix ${\bf AA}^{\sf T}$ and an associated sparse Cholesky factorization ${\bf LDL}^{\sf T}$ or ${\bf LL}^{\sf T}$, we develop sparse techniques for obtaining the new factorization associated with either adding a column to ${\bf A}$ or deleting a column from ${\bf A}$. Our techniques are based on an analysis and manipulation of the underlying graph structure and on ideas of Gill et al.\ [ Math. Comp., 28 (1974), pp. 505--535] for modifying a dense Cholesky factorization. We show that our methods extend to the general case where an arbitrary sparse symmetric positive definite matrix is modified. Our methods are optimal in the sense that they take time proportional to the number of nonzero entries in ${\bf L}$ and ${\bf D}$ that change. | Introduction
This paper presents a method for updating and downdating the sparse Cholesky factorization $\mathbf{LL}^{\sf T}$ of the matrix $\mathbf{AA}^{\sf T}$, where $\mathbf{A}$ is $m$ by $n$. More precisely, we evaluate the Cholesky factorization of $\overline{\mathbf{A}}\,\overline{\mathbf{A}}^{\sf T} = \mathbf{AA}^{\sf T} + \sigma\,\mathbf{ww}^{\sf T}$, where either $\sigma$ is $+1$ (corresponding to an update) and $\mathbf{w}$ is arbitrary, or $\sigma$ is $-1$ (corresponding to a downdate) and $\mathbf{w}$ is a column of $\mathbf{A}$. Both $\mathbf{AA}^{\sf T}$ and $\overline{\mathbf{A}}\,\overline{\mathbf{A}}^{\sf T}$ must be symmetric and positive definite.
The techniques we develop for the matrix $\mathbf{AA}^{\sf T}$ can be extended to determine the effects on the Cholesky factors of a general symmetric positive definite matrix $\mathbf{M}$ of any symmetric change of the form $\mathbf{M} + \sigma\,\mathbf{ww}^{\sf T}$ that preserves positive definiteness.
Our methods are optimal in the sense that they take time proportional to the number
of nonzero entries in L that change.
There are many applications of the techniques presented in this paper. In
the Linear Program Dual Active Set Algorithm (LP DASA) [19], the A matrix
corresponds to the basic variables in the current basis of the linear program, and
in successive iterations, we bring variables in and out of the basis, leading to changes
of the form $\mathbf{AA}^{\sf T} \pm \mathbf{ww}^{\sf T}$. Other application areas where the techniques developed
in this paper are applicable include least-squares problems in statistics, the analysis
of electrical circuits, structural mechanics, sensitivity analysis in linear programming,
boundary condition changes in partial differential equations, domain decomposition
methods, and boundary element methods. For a discussion of these application areas
and others, see [18].
Section 2 introduces our notation. For an introduction to sparse matrix techniques, see [4, 8]. In §3 we discuss the structure of the nonzero elements in the Cholesky factorization of $\mathbf{AA}^{\sf T}$, and in §4 we discuss the structure associated with the Cholesky factors of $\mathbf{AA}^{\sf T} + \sigma\,\mathbf{ww}^{\sf T}$. The symbolic update and downdate methods provide the framework for our sparse version of Method C5 of Gill, Golub, Murray, and Saunders [15] for modifying a dense Cholesky factorization. We discuss our sparse algorithm in §5. Section 6 presents the general algorithm for modifying the sparse Cholesky factorization for any sparse symmetric positive definite matrix. Implementation details for the modification algorithm associated with $\mathbf{AA}^{\sf T} \pm \mathbf{ww}^{\sf T}$ are summarized in §7, while the results of a numerical experiment with a large optimization problem from Netlib [3] are presented in §8. Section 9 concludes with a discussion of future work.
Notation
Throughout the paper, matrices are capital bold letters like $\mathbf{A}$ or $\mathbf{L}$, while vectors are lower case bold letters like $\mathbf{x}$ or $\mathbf{v}$. Sets and multisets are in calligraphic style like $\mathcal{A}$, $\mathcal{L}$, or $\mathcal{P}$. Scalars are either lower case Greek letters, or italic style like $\sigma$, $k$, or $m$.
Given the location of the nonzero elements of $\mathbf{AA}^{\sf T}$, we can perform a symbolic factorization (this terminology is introduced by George and Liu in [8]) of the matrix to predict the location of the nonzero elements of the Cholesky factor $\mathbf{L}$. In actuality, some of these predicted nonzeros may be zero due to numerical cancellation during the factorization process. The statement "$l_{ij} \ne 0$" will mean that $l_{ij}$ is symbolically nonzero. The diagonal of $\mathbf{L}$ is always nonzero since the matrices that we factor are positive definite (see [24, p. 253]). The nonzero pattern of column $j$ of $\mathbf{L}$ is denoted
$$\mathcal{L}_j = \{i : l_{ij} \ne 0\},$$
while $\mathcal{L}$ denotes the collection of patterns:
$$\mathcal{L} = \{\mathcal{L}_1, \mathcal{L}_2, \ldots, \mathcal{L}_m\}.$$
Similarly, $\mathcal{A}_j$ denotes the nonzero pattern of column $j$ of $\mathbf{A}$,
$$\mathcal{A}_j = \{i : a_{ij} \ne 0\},$$
while $\mathcal{A}$ is the collection of patterns:
$$\mathcal{A} = \{\mathcal{A}_1, \mathcal{A}_2, \ldots, \mathcal{A}_n\}.$$
The elimination tree can be defined in terms of a parent map $\pi$ (see [20]), which gives, for a given node $j$, the row index of the first nonzero element in column $j$ of $\mathbf{L}$ beneath the diagonal element:
$$\pi(j) = \min\,\big(\mathcal{L}_j \setminus \{j\}\big),$$
where $\min \mathcal{X}$ denotes the smallest element of $\mathcal{X}$. Our convention is that the min of the empty set is zero. Note that $j < \pi(j)$ except in the case where the diagonal element in column $j$ is the only nonzero element. The inverse of the parent map is the children multifunction. That is, the children of node $k$ is the set defined by
$$\pi^{-1}(k) = \{j : \pi(j) = k\}.$$
The ancestors of a node $j$, denoted $\mathcal{P}(j)$, is the set of successive parents:
$$\mathcal{P}(j) = \{j,\ \pi(j),\ \pi^2(j),\ \ldots\}.$$
Here the powers of a map are defined in the usual way: $\pi^0$ is the identity while $\pi^i$ is the $i$-fold composition of $\pi$ with itself. The sequence of nodes $j$, $\pi(j)$, $\pi^2(j)$, $\ldots$, forming $\mathcal{P}(j)$, is called the path from $j$ to the associated tree root. The collection of paths leading to a root form an elimination tree, and the set of all trees is the elimination forest. Typically, there is a single tree whose root is $m$; however, if column $j$ of $\mathbf{AA}^{\sf T}$ has only one nonzero element, the diagonal element, then $j$ will be the root of a separate tree.
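As a concrete illustration of these definitions, the following sketch (ours, not from the paper) computes the parent map and the path $\mathcal{P}(j)$ from a given collection of column patterns; `patterns[j]` is assumed to hold the set $\mathcal{L}_j$:

def parent(patterns, j):
    """pi(j): smallest row index in L_j below the diagonal, or 0 if none.

    Nodes are numbered 1..m, matching the paper's convention that the
    min of the empty set is zero."""
    below = [i for i in patterns[j] if i > j]
    return min(below) if below else 0

def ancestors(patterns, j):
    """P(j): the path j, pi(j), pi^2(j), ... up to the tree root."""
    path = []
    while j != 0:
        path.append(j)
        j = parent(patterns, j)
    return path

# Example: a 4-by-4 factor with patterns L_1={1,3}, L_2={2,3}, L_3={3,4}, L_4={4}.
patterns = {1: {1, 3}, 2: {2, 3}, 3: {3, 4}, 4: {4}}
print(ancestors(patterns, 1))   # prints [1, 3, 4]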
The number of elements (or size) of a set $\mathcal{X}$ is denoted $|\mathcal{X}|$, while $|\mathcal{A}|$ or $|\mathcal{L}|$ denote the sum of the sizes of the sets they contain. Define the directed graph $G(\mathbf{M})$ of an $m$ by $m$ matrix $\mathbf{M}$ with nonzero pattern $\mathcal{M}$ as the vertex and edge sets
$$G(\mathbf{M}) = (V, E), \qquad V = \{1, \ldots, m\}, \qquad E = \{(i, j) : i \in \mathcal{M}_j\},$$
where $V$ is the vertex set and $E$ is the directed edge set.
3 Symbolic factorization
Any approach for generating the pattern set $\mathcal{L}$ is called symbolic factorization [8, 23]. The symbolic factorization of a matrix of the form $\mathbf{AA}^{\sf T}$ is given in Algorithm 1 (see [9, 20]).
Algorithm 1 (Symbolic factorization of $\mathbf{AA}^{\sf T}$)
for $j = 1$ to $m$ do
    $\mathcal{L}_j = \{j\} \;\cup \displaystyle\bigcup_{\min \mathcal{A}_k = j} \mathcal{A}_k \;\cup \bigcup_{c\,\in\,\pi^{-1}(j)} \big(\mathcal{L}_c \setminus \{c\}\big)$
end for
Algorithm 1 basically says that the pattern of column $j$ of $\mathbf{L}$ can be expressed as the union of the patterns of each column of $\mathbf{L}$ whose parent is $j$ and the patterns of the columns of $\mathbf{A}$ whose first nonzero element is in row $j$. The elimination tree, connecting each child to its parent, is easily formed during the symbolic factorization. Algorithm 1 can be done in $O(|\mathcal{L}| + |\mathcal{A}| + m)$ time.¹
¹ Asymptotic complexity notation is defined in [2]. We write $f(n) = O(g(n))$ if there exist positive constants $c$ and $n_0$ such that $0 \le f(n) \le c\,g(n)$ for all $n > n_0$.
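A direct transcription of Algorithm 1 (our sketch; `A_patterns[k]` is the set $\mathcal{A}_k$, nodes numbered 1..m):

def symbolic_factorization(A_patterns, m):
    """Algorithm 1: patterns L_j of the Cholesky factor of A*A'.

    A_patterns maps column index k -> set A_k of row indices of nonzeros.
    Returns (L, parent) where L[j] is the set L_j and parent[j] = pi(j)."""
    L = {j: {j} for j in range(1, m + 1)}
    parent = {j: 0 for j in range(1, m + 1)}
    children = {j: [] for j in range(1, m + 1)}
    # Group the columns of A by the row index of their first nonzero.
    first = {j: [] for j in range(1, m + 1)}
    for k, Ak in A_patterns.items():
        if Ak:
            first[min(Ak)].append(k)
    for j in range(1, m + 1):
        for k in first[j]:                # columns of A whose min entry is j
            L[j] |= A_patterns[k]
        for c in children[j]:             # columns of L whose parent is j
            L[j] |= L[c] - {c}
        rest = L[j] - {j}
        if rest:                          # record the elimination tree edge
            parent[j] = min(rest)
            children[parent[j]].append(j)
    return L, parent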
Observe that the pattern of the parent of node $j$ contains all entries in the pattern of column $j$ except $j$ itself [22]. That is,
$$(1)\qquad \mathcal{L}_j \setminus \{j\} \;\subseteq\; \mathcal{L}_{\pi(j)}.$$
This leads to the following relation between $\mathcal{L}_j$ and the path $\mathcal{P}(j)$. The first part of this proposition, and its proof, is given in [22]. Our proof differs slightly from the one in [22]. We include it here since the same proof technique is exploited later.
Proposition 3.1. For each $j$, we have $\mathcal{L}_j \subseteq \mathcal{P}(j)$; furthermore, for each $k$ and $j \in \mathcal{L}_k$, we have $\mathcal{P}(j) \subseteq \mathcal{P}(k)$.
Proof. Obviously, $j \in \mathcal{P}(j)$. Let $i$ be any given element of $\mathcal{L}_j$ with $i \ne j$. Since $i > j$, we see that the following relation holds for $l = 0$:
$$(2)\qquad i \in \mathcal{L}_{\pi^l(j)}, \qquad i > \pi^l(j).$$
Now suppose that (2) holds for some integer $l \ge 0$, and let $k$ denote $\pi^l(j)$. By (1) and the fact that $k < i$, we have $i \in \mathcal{L}_{\pi(k)}$, which implies that $i \ge \pi(k) = \pi^{l+1}(j)$. Hence, either $i = \pi^{l+1}(j)$ or (2) holds with $l$ replaced by $l+1$. Since (2) is violated for $l$ sufficiently large, we conclude that there exists an $l$ for which $i = \pi^l(j)$. Consequently, each element of $\mathcal{L}_j$ is contained in $\mathcal{P}(j)$; that is, $\mathcal{L}_j \subseteq \mathcal{P}(j)$. If $j \in \mathcal{L}_k$, then $j \in \mathcal{P}(k)$, so $j$ is an ancestor of $k$, and $\mathcal{P}(j) \subseteq \mathcal{P}(k)$. Since we have already shown that $\mathcal{L}_j \subseteq \mathcal{P}(j)$, the proof is complete.
As we will see, the symbolic factorization of $\overline{\mathbf{A}}\,\overline{\mathbf{A}}^{\sf T}$ can be obtained by updating the symbolic factorization of $\mathbf{AA}^{\sf T}$ using an algorithm that has the same structure as that of Algorithm 1. The new pattern $\overline{\mathcal{L}}_j$ is equal to the old pattern $\mathcal{L}_j$ union entries that arise from new children and from the pattern of the new column $\mathbf{w}$. (We put a bar over a matrix or a set or a multiset to denote its value after the update or downdate is complete.)
Downdating is not as easy. Once a set union has been computed, it cannot be undone without knowledge of how entries entered the set. We can keep track of this information by storing the elements of $\mathcal{L}$ as multisets rather than as sets. The multiset associated with column $j$ has the form
$$\mathcal{L}_j^{\sharp} = \{(i,\ m(i, j)) : i \in \mathcal{L}_j\},$$
where the multiplicity $m(i, j)$ is the number of children of $j$ that contain row index $i$ in their pattern plus the number of columns of $\mathbf{A}$ whose smallest entry is $j$ and that contain row index $i$. Equivalently,
$$m(i, j) = \big|\{c \in \pi^{-1}(j) : i \in \mathcal{L}_c\}\big| + \big|\{k : \min \mathcal{A}_k = j \text{ and } i \in \mathcal{A}_k\}\big|.$$
With this definition, we can undo a set union by subtracting multiplicities.
We now define some operations involving multisets. First, if $\mathcal{X}^{\sharp}$ is a multiset consisting of pairs $(i, m(i))$, where $m(i)$ is the multiplicity associated with $i$, then $\mathcal{X}$ is the set obtained by removing the multiplicities. In other words, the multiset $\mathcal{X}^{\sharp}$ and the associated base set $\mathcal{X}$ satisfy the relation
$$\mathcal{X} = \{i : (i, m(i)) \in \mathcal{X}^{\sharp} \text{ for some } m(i) \ge 1\}.$$
We define the addition of a multiset $\mathcal{X}^{\sharp}$ and a set $\mathcal{Y}$ in the following way:
$$\mathcal{X}^{\sharp} + \mathcal{Y} = \{(i, m'(i)) : i \in \mathcal{X} \cup \mathcal{Y}\}, \quad\text{where}\quad m'(i) = \begin{cases} m(i) & \text{if } i \in \mathcal{X} \setminus \mathcal{Y},\\ 1 & \text{if } i \in \mathcal{Y} \setminus \mathcal{X},\\ m(i) + 1 & \text{if } i \in \mathcal{X} \cap \mathcal{Y}.\end{cases}$$
Similarly, the subtraction of a set $\mathcal{Y}$ from a multiset $\mathcal{X}^{\sharp}$ is defined by
$$\mathcal{X}^{\sharp} - \mathcal{Y} = \{(i, m'(i)) : i \in \mathcal{X},\ m'(i) \ge 1\}, \quad\text{where}\quad m'(i) = \begin{cases} m(i) & \text{if } i \in \mathcal{X} \setminus \mathcal{Y},\\ m(i) - 1 & \text{if } i \in \mathcal{X} \cap \mathcal{Y}.\end{cases}$$
The multiset subtraction of $\mathcal{Y}$ from $\mathcal{X}^{\sharp}$ undoes a prior addition. That is, for any multiset $\mathcal{X}^{\sharp}$ and any set $\mathcal{Y}$, we have
$$\big(\mathcal{X}^{\sharp} + \mathcal{Y}\big) - \mathcal{Y} = \mathcal{X}^{\sharp}.$$
In contrast, $(\mathcal{X} \cup \mathcal{Y}) \setminus \mathcal{Y}$ is equal to $\mathcal{X}$ if and only if $\mathcal{X}$ and $\mathcal{Y}$ are disjoint sets.
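These operations map directly onto a dictionary of multiplicities; the following sketch (ours) uses Python's `collections.Counter`:

from collections import Counter

def madd(Xs, Y):
    """Multiset addition X# + Y: bump the count of every i in the set Y."""
    Z = Counter(Xs)
    for i in Y:
        Z[i] += 1
    return Z

def msub(Xs, Y):
    """Multiset subtraction X# - Y: undo a prior addition of Y."""
    Z = Counter(Xs)
    for i in Y:
        Z[i] -= 1
        if Z[i] == 0:
            del Z[i]
    return Z

Xs = Counter({1: 2, 3: 1})
Y = {1, 2}
assert msub(madd(Xs, Y), Y) == Xs   # (X# + Y) - Y = X#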
Algorithm 2 below performs a symbolic factorization of $\mathbf{AA}^{\sf T}$, with each set union operation replaced by a multiset addition. This algorithm is identical to Algorithm 1 except for the bookkeeping associated with multiplicities.
Algorithm 2 (Symbolic factorization of $\mathbf{AA}^{\sf T}$, using multisets)
for $j = 1$ to $m$ do
    $\mathcal{L}_j^{\sharp} = \{(j, 1)\}$
    for each $c \in \pi^{-1}(j)$ do
        $\mathcal{L}_j^{\sharp} = \mathcal{L}_j^{\sharp} + \big(\mathcal{L}_c \setminus \{c\}\big)$
    end for
    for each $k$ where $\min \mathcal{A}_k = j$ do
        $\mathcal{L}_j^{\sharp} = \mathcal{L}_j^{\sharp} + \mathcal{A}_k$
    end for
end for
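Extending the earlier sketch with multiplicity counts (again ours, reusing `madd` from above; the seeding of the diagonal with count 1 is our convention) makes a later downdate exactly reversible:

def symbolic_factorization_multiset(A_patterns, m):
    """Algorithm 2: like symbolic_factorization, but Lsharp[j] counts how
    many children and A-columns contributed each row index, so those
    contributions can later be subtracted exactly."""
    from collections import Counter
    Lsharp = {j: Counter({j: 1}) for j in range(1, m + 1)}
    parent = {j: 0 for j in range(1, m + 1)}
    children = {j: [] for j in range(1, m + 1)}
    first = {j: [] for j in range(1, m + 1)}
    for k, Ak in A_patterns.items():
        if Ak:
            first[min(Ak)].append(k)
    for j in range(1, m + 1):
        for k in first[j]:
            Lsharp[j] = madd(Lsharp[j], A_patterns[k])
        for c in children[j]:
            Lsharp[j] = madd(Lsharp[j], set(Lsharp[c]) - {c})
        rest = set(Lsharp[j]) - {j}
        if rest:
            parent[j] = min(rest)
            children[parent[j]].append(j)
    return Lsharp, parent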
We conclude this section with a result concerning the relation between the patterns of $\mathbf{AA}^{\sf T}$ and the patterns of $\mathbf{AA}^{\sf T} + \mathbf{ww}^{\sf T}$.
Proposition 3.2. Let $\mathcal{C}$ and $\mathcal{D}$ be the patterns associated with the symmetric positive definite matrices $\mathbf{C}$ and $\mathbf{D}$ respectively. Neglecting numerical cancellation, $\mathcal{C}_j \subseteq \mathcal{D}_j$ for each $j$ implies that $(\mathcal{L}_{\mathbf{C}})_j \subseteq (\mathcal{L}_{\mathbf{D}})_j$ for each $j$, where $\mathcal{L}_{\mathbf{C}}$ and $\mathcal{L}_{\mathbf{D}}$ are the patterns associated with the Cholesky factors of $\mathbf{C}$ and $\mathbf{D}$ respectively.
Proof. In [8, 21] it is shown that an edge $(i, j)$ is contained in the graph of the Cholesky factor of a symmetric positive definite matrix $\mathbf{C}$ if and only if there is a path from $i$ to $j$ in the graph of $\mathbf{C}$ with each intermediate vertex of the path between 1 and $\min\{i, j\}$. If $\mathcal{C}_j \subseteq \mathcal{D}_j$ for each $j$, then the paths associated with the graph of $\mathbf{C}$ are a subset of the paths associated with the graph of $\mathbf{D}$. It follows that $(\mathcal{L}_{\mathbf{C}})_j \subseteq (\mathcal{L}_{\mathbf{D}})_j$ for each $j$.
Ignoring numerical cancellation, the edges in the graph of $\mathbf{AA}^{\sf T}$ are a subset of the edges in the graph of $\mathbf{AA}^{\sf T} + \mathbf{ww}^{\sf T}$. By Proposition 3.2, we conclude that the edges in the graphs of the associated Cholesky factors satisfy the same inclusion. As a result, if the columns of $\mathbf{A}$ and the vectors $\mathbf{w}$ used in the update $\mathbf{AA}^{\sf T} + \mathbf{ww}^{\sf T}$ are all chosen from the columns of some fixed matrix $\mathbf{B}$, and if a fill-reducing permutation $\mathbf{P}$ can be found for which the Cholesky factors of $\mathbf{PBB}^{\sf T}\mathbf{P}^{\sf T}$ are sparse ([1] for example), then by Proposition 3.2, the Cholesky factors of $\mathbf{PAA}^{\sf T}\mathbf{P}^{\sf T}$ and of $\mathbf{P}(\mathbf{AA}^{\sf T} + \mathbf{ww}^{\sf T})\mathbf{P}^{\sf T}$ will be at least as sparse as those of $\mathbf{PBB}^{\sf T}\mathbf{P}^{\sf T}$.
4 Modifying the symbolic factors
Let $\overline{\mathbf{A}}$ be the modified version of $\mathbf{A}$. Again, we put a bar over a matrix or a set or a multiset to denote its value after the update or downdate is complete. In an update, $\overline{\mathbf{A}}$ is obtained from $\mathbf{A}$ by appending the column $\mathbf{w}$ on the right, while in a downdate, $\overline{\mathbf{A}}$ is obtained from $\mathbf{A}$ by deleting the column $\mathbf{w}$ from $\mathbf{A}$. Hence, we have
$$\overline{\mathbf{A}}\,\overline{\mathbf{A}}^{\sf T} = \mathbf{AA}^{\sf T} + \sigma\,\mathbf{ww}^{\sf T},$$
where $\sigma$ is either $+1$ and $\mathbf{w}$ is the last column of $\overline{\mathbf{A}}$ (update), or $\sigma$ is $-1$ and $\mathbf{w}$ is a column of $\mathbf{A}$ (downdate). Since $\mathbf{A}$ and $\overline{\mathbf{A}}$ differ by at most a single column, it follows from Proposition 3.2 that $\mathcal{L}_j \subseteq \overline{\mathcal{L}}_j$ for each $j$ during an update, while $\overline{\mathcal{L}}_j \subseteq \mathcal{L}_j$ during a downdate. Moreover, the multisets associated with the Cholesky factor of either the updated or downdated matrix have the structure described in the following theorem:
Theorem 4.1. Let $k$ be the index associated with the first nonzero component of $\mathbf{w}$. For an update, $\mathcal{P}(k) \subseteq \overline{\mathcal{P}}(k)$ and $\mathcal{L}_i^{\sharp} = \overline{\mathcal{L}}_i^{\sharp}$ for all $i \in \overline{\mathcal{P}}(k)^c$, the complement of $\overline{\mathcal{P}}(k)$. That is, $\mathcal{L}_i^{\sharp} = \overline{\mathcal{L}}_i^{\sharp}$ for all $i$ except when $i$ is $k$ or one of the new ancestors of $k$. For a downdate, $\overline{\mathcal{P}}(k) \subseteq \mathcal{P}(k)$ and $\mathcal{L}_i^{\sharp} = \overline{\mathcal{L}}_i^{\sharp}$ for all $i \in \mathcal{P}(k)^c$. That is, $\mathcal{L}_i^{\sharp} = \overline{\mathcal{L}}_i^{\sharp}$ for all $i$ except when $i$ is $k$ or one of the old ancestors of $k$.
Proof. To begin, let us consider an update. We will show that each element of $\mathcal{P}(k)$ is a member of $\overline{\mathcal{P}}(k)$ as well. Clearly, $k$ lies in both $\mathcal{P}(k)$ and $\overline{\mathcal{P}}(k)$. Proceeding by induction, suppose that $\pi^0(k), \pi^1(k), \ldots, \pi^j(k)$ all lie in $\overline{\mathcal{P}}(k)$, and let $l$ denote $\pi^j(k)$. We need to show that $\pi^{j+1}(k) = \pi(l) \in \overline{\mathcal{P}}(k)$ to complete the induction. If $\bar\pi(l) = \pi(l)$ then, since $l \in \overline{\mathcal{P}}(k)$, the induction step is complete, and $\pi^{j+1}(k) \in \overline{\mathcal{P}}(k)$.
If $\bar\pi(l) \ne \pi(l)$, then by Proposition 3.2, $\bar\pi(l) < \pi(l)$, and the following relation holds for $p = 1$:
$$(3)\qquad \bar\pi^p(l) < \pi(l).$$
Now suppose that (3) holds for some integer $p \ge 1$, and let $q$ denote $\bar\pi^p(l)$. By Proposition 3.2, $\pi(l) \in \mathcal{L}_l \subseteq \overline{\mathcal{L}}_l$, and combining this with (3), it follows from (1) that $\pi(l) \in \overline{\mathcal{L}}_q$ for $q = \bar\pi^p(l)$. By the definition of the parent, $\pi(l) \ge \bar\pi(q) = \bar\pi^{p+1}(l)$. Hence, either $\pi(l) = \bar\pi^{p+1}(l)$, or (3) holds with $p$ replaced by $p+1$. Since (3) is violated for $p$ sufficiently large, we conclude that there exists an integer $p$ such that $\pi(l) = \bar\pi^p(l)$, from which it follows that $\pi(l) \in \overline{\mathcal{P}}(l) \subseteq \overline{\mathcal{P}}(k)$. Since $l \in \overline{\mathcal{P}}(k)$, the induction step is complete and $\mathcal{P}(k) \subseteq \overline{\mathcal{P}}(k)$.
Suppose that $l \in \overline{\mathcal{P}}(k)^c$. It is now important to recall that $k$ is the index of the first nonzero component of $\mathbf{w}$, the vector appearing in the update. Observe that $l$ cannot equal $k$ since $l \in \overline{\mathcal{P}}(k)^c$ and $k \in \overline{\mathcal{P}}(k)$. The proof that $\mathcal{L}_l^{\sharp} = \overline{\mathcal{L}}_l^{\sharp}$ is by induction on the depth $d$ defined by
$$d(l) = \begin{cases} 0 & \text{if } \pi^{-1}(l) = \emptyset,\\ 1 + \max\{d(c) : c \in \pi^{-1}(l)\} & \text{otherwise.}\end{cases}$$
If $d(l) = 0$, then $l$ has no children, and the child loop of Algorithm 2 will be skipped when either $\mathcal{L}_l^{\sharp}$ or $\overline{\mathcal{L}}_l^{\sharp}$ is evaluated. And since $l \ne k$, the pattern associated with $\mathbf{w}$ cannot be added into $\overline{\mathcal{L}}_l^{\sharp}$. Hence, when $d(l) = 0$, $\mathcal{L}_l^{\sharp} = \overline{\mathcal{L}}_l^{\sharp}$ and the result is trivial. Now, assuming that for some $p \ge 0$ we have $\mathcal{L}_l^{\sharp} = \overline{\mathcal{L}}_l^{\sharp}$ whenever $l \in \overline{\mathcal{P}}(k)^c$ and $d(l) \le p$, let us suppose that $d(l) = p+1$. Each child $c$ of $l$ lies in $\overline{\mathcal{P}}(k)^c$ and satisfies $d(c) \le p$, so $\mathcal{L}_c^{\sharp} = \overline{\mathcal{L}}_c^{\sharp}$ by the induction assumption. And since $l \ne k$, the pattern of $\mathbf{w}$ is not added to $\overline{\mathcal{L}}_l^{\sharp}$. Consequently, when Algorithm 2 is executed, we have $\mathcal{L}_l^{\sharp} = \overline{\mathcal{L}}_l^{\sharp}$, which completes the induction step.
Now consider the downdate part of the theorem. Rearranging the downdate relation $\overline{\mathbf{A}}\,\overline{\mathbf{A}}^{\sf T} = \mathbf{AA}^{\sf T} - \mathbf{ww}^{\sf T}$, we have $\mathbf{AA}^{\sf T} = \overline{\mathbf{A}}\,\overline{\mathbf{A}}^{\sf T} + \mathbf{ww}^{\sf T}$. Hence, in a downdate, we can think of $\mathbf{A}$ as the updated version of $\overline{\mathbf{A}}$. Consequently, the second part of the theorem follows directly from the first part.
4.1 Symbolic update algorithm
We now present an algorithm for evaluating the new pattern $\overline{\mathcal{L}}$ associated with an update. Based on Theorem 4.1, the only sets $\mathcal{L}_j$ that change are those associated with $\overline{\mathcal{P}}(k)$, where $k$ is the index of the first nonzero component of $\mathbf{w}$. Referring to Algorithm 2, we can set $j = k$ and march up the path $\overline{\mathcal{P}}(k)$ from $j$, and evaluate all the changes induced by the additional column in $\overline{\mathbf{A}}$. In order to do the bookkeeping, there are at most four cases to consider:
Case 1. $j = k$. At the start of the new path, we need to add the pattern $\mathcal{W}$ of $\mathbf{w}$ to $\mathcal{L}_k^{\sharp}$.
Case 2. $c \in \overline{\mathcal{P}}(k)$, $\pi(c) = \bar\pi(c) = j$. In this case, $c$ is a child of $j$ in both the new and the old elimination tree. Since the pattern $\overline{\mathcal{L}}_c$ may differ from $\mathcal{L}_c$, we need to add the difference to $\mathcal{L}_j^{\sharp}$. Since $j$ has a unique child on the path $\overline{\mathcal{P}}(k)$, there is at most one node $c$ that satisfies these conditions. Also note that if $c \notin \overline{\mathcal{P}}(k)$, then by Theorem 4.1, $\mathcal{L}_c^{\sharp} = \overline{\mathcal{L}}_c^{\sharp}$, and hence such a node does not lead to an adjustment to $\mathcal{L}_j^{\sharp}$ in Algorithm 2.
Case 3. $\bar\pi(c) = j \ne \pi(c)$. In this case, $c$ is a child of $j$ in the new elimination tree, but not in the old tree, and the entire set $\overline{\mathcal{L}}_c \setminus \{c\}$ should be added to $\mathcal{L}_j^{\sharp}$, since it was not included in $\mathcal{L}_j^{\sharp}$. By Theorem 4.1, $\pi(c) = \bar\pi(c)$ for all $c \in \overline{\mathcal{P}}(k)^c$, and hence $c \notin \overline{\mathcal{P}}(k)^c$, or equivalently, $c \in \overline{\mathcal{P}}(k)$. Again, since each node on the path $\overline{\mathcal{P}}(k)$ from $k$ has only one child on the path, there is at most one node $c$ satisfying these conditions, and it lies on the path $\overline{\mathcal{P}}(k)$.
Case 4. $\pi(c) = j \ne \bar\pi(c)$. In this case, $c$ is a child of $j$ in the old elimination tree, but not in the new tree, and the set $\mathcal{L}_c \setminus \{c\}$ should be subtracted from $\mathcal{L}_j^{\sharp}$, since it was previously added to $\mathcal{L}_j^{\sharp}$. For each such $c$, the fact that $\pi(c) \ne \bar\pi(c)$ implies that $c \in \overline{\mathcal{P}}(k)$. In the algorithm that follows, we refer to nodes $c$ that satisfy these conditions as lost children.
In each of the cases above, every node $c$ that led to adjustments in the pattern was located on the path $\overline{\mathcal{P}}(k)$. In the detailed algorithm that appears below, we simply march up the path from $k$ to the root making the adjustments enumerated above.
Algorithm 3 (Symbolic update, add new column $\mathbf{w}$ with pattern $\mathcal{W}$)
$k = \min \mathcal{W}$
$\mathcal{L}_k^{\sharp} = \mathcal{L}_k^{\sharp} + \mathcal{W}$   {Case 1: first node in the path}
$c = 0$, $j = k$
while $j \ne 0$ do
    $\bar\pi(j) = \min\,\big(\overline{\mathcal{L}}_j \setminus \{j\}\big)$
    if $c \ne 0$ then
        if $\pi(c) = j$ then
            $\mathcal{L}_j^{\sharp} = \mathcal{L}_j^{\sharp} + \big(\overline{\mathcal{L}}_c \setminus \mathcal{L}_c\big)$   {Case 2: $c$ is an old child of $j$, possibly changed}
        else
            $\mathcal{L}_j^{\sharp} = \mathcal{L}_j^{\sharp} + \big(\overline{\mathcal{L}}_c \setminus \{c\}\big)$   {Case 3: $c$ is a new child of $j$ and a lost child of $\pi(c)$}
            place $c$ in lost-child-queue of $\pi(c)$
        end if
    end if
    for each $c'$ in lost-child-queue of $j$ do   {Case 4: consider each lost child of $j$}
        $\mathcal{L}_j^{\sharp} = \mathcal{L}_j^{\sharp} - \big(\mathcal{L}_{c'} \setminus \{c'\}\big)$
    end for
    $c = j$, $j = \bar\pi(j)$
end while
The time taken by this algorithm is given by the following lemma.
Lemma 4.2. The time to execute Algorithm 3 is bounded above by a constant times the number of entries associated with patterns for nodes on the new path $\overline{\mathcal{P}}(k)$. That is, the time is
$$O\Bigg(\sum_{j \in \overline{\mathcal{P}}(k)} \big|\overline{\mathcal{L}}_j\big|\Bigg).$$
Proof. In Algorithm 3, we simply march up the path $\overline{\mathcal{P}}(k)$ making adjustments to $\mathcal{L}_j^{\sharp}$ as we proceed. At each node $j$, we take time proportional to $|\overline{\mathcal{L}}_j|$, plus the time taken to process the children of $j$. Each node is visited as a child $c$ at most twice, since it falls into one or two of the four cases enumerated above (a node $c$ can be a new child of one node, and a lost child of another). If this work (proportional to $|\mathcal{L}_c|$ or $|\overline{\mathcal{L}}_c|$) is accounted to step $c$ instead of $j$, the time to make the adjustment to the pattern is bounded above by a constant times either $|\mathcal{L}_j|$ or $|\overline{\mathcal{L}}_j|$. Since $|\mathcal{L}_j| \le |\overline{\mathcal{L}}_j|$ by Theorem 4.1, the proof is complete.
In practice, we can reduce the execution time for Algorithm 3 by skipping over
any nodes for which L♯_j does not change. That is, if the current node j has no lost children, and
if its child c falls under Case 2 with L̄_c = L_c, then L̄♯_j = L♯_j and the update of node j
can be skipped. The execution time for this modified algorithm, which is the one we
implement, is

    O( Σ_{j ∈ P̄(k) : L̄_j ≠ L_j} |L̄_j| ).
4.2 Symbolic downdate algorithm
Let us consider the removal of a column w from A and let k be the index of the first
nonzero entry in w. The symbolic downdate algorithm is analogous to the symbolic
update algorithm, but the roles of P(k) and P̄(k) are interchanged in accordance
with Theorem 4.1. Instead of adding entries to L♯_j, we subtract entries; instead of
lost-child queues, we have new-child queues; and instead of walking up the path P̄(k), we
walk up the path P(k) ⊇ P̄(k).
Algorithm 4 (Symbolic downdate, remove column w)
k = index of the first nonzero entry of w
Case 1: k is the first node in the path
subtract the pattern of w (minus k itself) from L♯_k
c = k, j = π(c)
while j ≠ 0 do
    if π̄(c) = j then
        Case 2: c is an old child of j, possibly changed
        subtract L_c \ L̄_c from L♯_j
    else
        Case 3: c is a lost child of j and a new child of π̄(c)
        subtract L_c from L♯_j
        place c in new-child-queue of π̄(c)
    end if
    Case 4: consider each new child of j
    for each c′ in new-child-queue of j do
        add L̄_{c′} to L♯_j
    end for
    c = j, j = π(c)
end while
Algorithm 4
Similar to Algorithm 3, the execution time obeys the following estimate.
Lemma 4.3 The time to execute Algorithm 4 is bounded above by a constant times
the number of entries associated with patterns for nodes on the old path P(k). That
is, the time is

    O( Σ_{j ∈ P(k)} |L_j| ).

Again, we achieve in practice some speedup when we check whether or not L♯_j
changes for any given node j ∈ P(k). The time taken by this modified Algorithm 4
is

    O( Σ_{j ∈ P(k) : L̄_j ≠ L_j} |L_j| ).
5 The numerical factors
When we add or delete a column in A, we update or downdate the symbolic
factorization in order to determine the location in the Cholesky factor of either new
nonzero entries or nonzero entries that are now zero. Knowing the location of the
nonzero entries, we can update the numerical value of these entries. We first consider
the case when A and L are dense and draw on the ideas of [15]. Then we show how
the method extends to the sparse case.
5.1 Dense matrices
Our algorithm to implement the numerical update and downdate is based on
Method C5 in [15] for dense matrices. To summarize their approach for the downdate, we start by
writing

    ĀĀ^T = AA^T − ww^T = L(I − vv^T)L^T,  where Lv = w.     (4)

The quantity 1 − v^T v is positive on account
of our assumption that both AA^T and ĀĀ^T are positive definite. That is, since
the matrices ĀĀ^T and I − vv^T are congruent, by
Sylvester's law of inertia (see [24]) they have the same number of positive, negative,
and zero eigenvalues. The eigenvalues of I − vv^T are one, with multiplicity m − 1
(corresponding to the eigenvectors orthogonal to v), and 1 − v^T v (corresponding
to the eigenvector v).     (5)
Since ĀĀ^T is positive definite, its eigenvalues are all positive,
which implies that 1 − v^T v > 0.
Combining (4) and (5), we have ĀĀ^T = (LG)(LG)^T for any matrix G with GG^T = I − vv^T.
A sequence of Givens rotations, the product being denoted G_1, is chosen so that
G_1 v = ‖v‖_2 e_1, where e_1 is the vector with every entry zero except for the first entry.
The matrix obtained after applying G_1 has a lower Hessenberg structure. A second sequence of
Givens rotations, the product being denoted G_2, is chosen so that the result
is a lower triangular matrix, denoted G,
where δ and γ are vectors computed in Algorithm 5 below. The diagonal elements of
G are given by g_{ii} = δ_i, while the elements below the diagonal are g_{ij} = γ_j v_i for i > j.
Algorithm 5 (Compute G, dense case)
else
find G 1 , to zero out all but first entry of v:
to 1 do
end for
find G 2 , and combine to obtain G:
end for
Algorithm 5
In Algorithm 6 below, we evaluate the new Cholesky factor L̄ = LG without
forming G explicitly [15], by taking advantage of the special structure of G. The
product is computed column by column, moving from right to left. In practice, L can
be overwritten with L̄.
Algorithm 6 (Compute L̄ = LG)
to 1 do
to m do
end for
end for
Algorithm 6
Observe that about 1/3 of the multiplies can be eliminated if L is stored as a
product L̃D, where D is a diagonal matrix. The components of δ can be absorbed
into D, with γ adjusted accordingly, and δ in Algorithm 6 can be eliminated. We did
not exploit this simplification for the numerical results reported in §8.
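To make the dense computation concrete, the following is a minimal sketch of the
classical Givens-style rank-one update (one of the methods catalogued in [15]); it is
offered as a reference implementation under our own naming, not as the exact
Method C5 variant used here, and the downdate is analogous with hyperbolic rotations.

    import numpy as np

    def dense_rank1_update(L, w):
        # Given lower triangular L with A = L L^T, return Lbar with
        # Lbar Lbar^T = A + w w^T, using one rotation per column.
        L, w = L.copy(), w.copy()
        m = L.shape[0]
        for j in range(m):
            r = np.hypot(L[j, j], w[j])          # new diagonal entry
            c, s = r / L[j, j], w[j] / L[j, j]   # rotation coefficients
            L[j, j] = r
            if j + 1 < m:
                L[j+1:, j] = (L[j+1:, j] + s * w[j+1:]) / c
                w[j+1:] = c * w[j+1:] - s * L[j+1:, j]
        return L

    # quick self-check on a small positive definite example
    A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
    w = np.array([1.0, -1.0, 2.0])
    Lb = dense_rank1_update(np.linalg.cholesky(A), w)
    assert np.allclose(Lb @ Lb.T, A + np.outer(w, w))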
5.2 The sparsity pattern of v
In the sparse case, v is sparse, and its nonzero pattern is crucial. Since the
elements of G satisfy g_{ij} = γ_j v_i for each i > j, we conclude that only the columns
of L associated with the nonzero components of v enter into the computation of L̄.
The nonzero pattern of v can be found using the following lemma.
Lemma 5.1 The nodes reachable from any given node k by path(s) in the directed
graph G(L^T) coincide with the path P(k).
Proof. If P(k) has a single element, the lemma holds. Proceeding by induction,
suppose that the lemma holds for all k for which |P(k)| ≤ j. Now, if P(k) has
j + 1 elements, then by the induction hypothesis, the nodes reachable from π(k) by path(s)
in the directed graph G(L^T) coincide with the path P(π(k)). The nodes reachable
in one step from k consist of the elements of L_k. By Proposition 3.1, each of the
elements of L_k is contained in the path P(k). If i ∈ L_k, then |P(i)| ≤ j.
By the induction hypothesis, the nodes reachable from i coincide with P(i) ⊆ P(k).
The nodes reachable from k consist of {k} union the nodes reachable from L_k. Since
L_k ⊆ P(k), it follows that the nodes reachable from k are contained in P(k). On
the other hand, for each p, the element of L^T in row π^p(k) and column π^{p+1}(k) is
nonzero. Hence, all the elements of P(k) are reachable from k. Since the nodes
reachable from k by path(s) in the directed
graph G(L^T) therefore coincide with P(k), the induction step is complete.
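Lemma 5.1 is easy to experiment with. The sketch below (our own notation) derives
the parent function π from a column-pattern representation of L and enumerates the
path P(k); by the lemma, this path is exactly the set of nodes reachable from k in G(L^T).

    def etree(Lpat):
        # Lpat[j] = set of row indices i > j with l_ij != 0 (diagonal excluded);
        # the parent of j in the elimination tree is pi(j) = min(Lpat[j]).
        return {j: (min(Lpat[j]) if Lpat[j] else None) for j in Lpat}

    def path(k, parent):
        # P(k) = {k, pi(k), pi(pi(k)), ...} up to the root of k's tree.
        P = [k]
        while parent[P[-1]] is not None:
            P.append(parent[P[-1]])
        return P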
Theorem 5.2 During the symbolic downdate ĀĀ^T = AA^T − ww^T (where w is a column
of A), the nonzero pattern of the solution v of Lv = w is equal to the path P(k) in the (old)
elimination tree of L, where

    k = min{ i : w_i ≠ 0 }.     (6)

Proof. Let W = { i : w_i ≠ 0 }. Theorem 5.1 of Gilbert [10, 11, 14] states that
the nonzero pattern of v is the set of nodes reachable from the nodes in W by paths
in the directed graph G(L^T). By Algorithm 1, W ⊆ L_k. Hence, each element of W
is reachable from k by a path of length one, and the nodes reachable from W are a
subset of the nodes reachable from k. Conversely, since k ∈ W, the nodes reachable
from k are a subset of the nodes reachable from W. Combining these inclusions, the
nodes reachable from k and from W are the same, and by Lemma 5.1, the nodes
reachable from k coincide with the path P(k).
Corollary 5.3 During the symbolic update ĀĀ^T = AA^T + ww^T, the nonzero pattern
of the solution v of L̄v = w is equal to the path P̄(k) in the (new) elimination tree of L̄, where k is
defined in (6).
Proof. Since AA^T = ĀĀ^T − ww^T, we can view L as the Cholesky factor for the
downdate of ĀĀ^T. Hence, we can apply Theorem 5.2, in effect replacing P by
P̄.
5.3 Sparse matrices
In the dense algorithm presented at the start of this section, we write

    L̄ = LG.     (7)

To be specific, let us consider the case where (7) corresponds to
an update. The nonzero elements of I + vv^T lie along the diagonal or at
the intersection of rows i ∈ P̄(k) and columns j ∈ P̄(k), where k is defined in (6).
In essence, we can extract the submatrix of I + vv^T corresponding to these rows
and columns, we can apply Algorithm 5 to this dense submatrix, we can modify the
submatrix of L consisting of rows and columns associated with P̄(k), and then we
can reinsert this dense submatrix in the m by m sparse matrix L. At the algebraic
level, we can think of this process in the following way: If P is the permutation matrix
with the property that the columns associated with P̄(k) are moved to the first |P̄(k)|
columns by multiplication on the right, then we have

    P^T L P = ( L_11  L_12 ; L_21  L_22 ).     (8)

The matrix P^T (I + vv^T) P is block diagonal with the identity in the lower
right corner and a dense upper left corner V:

    P^T (I + vv^T) P = ( V  0 ; 0  I ).

The elements of L_21 correspond to the elements l_ij of L for which i ∈ P̄(k)^c and
j ∈ P̄(k); for each such element, i ∈ L_j.
Note that in general L_21 ≠ 0,
which is why Algorithm 6 is column-oriented. When (8) is multiplied by the
Givens rotations computed by Algorithm 5, we obtain

    P^T L̄ P = ( L̄_11  L_12 ; L̄_21  L_22 ).     (9)

We apply P to the left and P^T to the right in (9) to obtain L̄. Neither L_12 nor L_22
is modified.
When (7) corresponds to a downdate, the discussion is the same as for an
update, except that P(k) replaces P̄(k), and the roles of L_j and L̄_j are interchanged.
In summary, the sparse update or downdate of a Cholesky factorization can be
accomplished by first evaluating the vector v, whose nonzero entries are contained
in the path P(k) or P̄(k), where k is defined in (6). We apply the dense Algorithm 5
to the vector of (symbolically) nonzero entries of v. Then we update the entries of L
in the rows and columns associated with indices in P(k) or P̄(k) using Algorithm 6.
As the following result indicates, Algorithm 6 does not have to be applied to all
of L_11 because of its specific structure.
Proposition 5.4 During symbolic downdate, L_11 is a lower triangular matrix with
a dense profile. That is, for each row i > 1, there is an integer p_i ≤ i such that
[L_11]_{ij} ≠ 0 if and only if p_i ≤ j ≤ i.
Proof. The rows and columns of L_11 correspond to the nodes in the path
P(k). We assume that the nodes have been relabeled so that P(k) = {1, 2, …, |P(k)|}.
It follows that [L_11]_{ij} ≠ 0 implies,
by (1), that [L_11]_{i,j+1} ≠ 0 whenever j + 1 < i. Consequently, [L_11]_{ij} ≠ 0 implies
[L_11]_{ij′} ≠ 0 for all j ≤ j′ ≤ i.
Letting p_i be the smallest such index j, the proof is complete.
Corollary 5.5 During symbolic update, L̄_11 is a lower triangular matrix with a dense
profile. That is, for each row i > 1, there is an integer p_i ≤ i such that
[L̄_11]_{ij} ≠ 0 if and only if p_i ≤ j ≤ i.
Proof. This follows immediately from Proposition 5.4, replacing P(k) with P̄(k).
As a consequence of Proposition 5.4, Corollary 5.5, and Proposition 3.2, we can
skip over any index i in Algorithm 6 for which l_{ij} = 0. When Algorithm 6 is applied
to the sparse L, as opposed to the dense submatrix, the indices j and i take values
in P(k) and L_j, respectively, for a downdate, while they take values in P̄(k) and L̄_j,
respectively, for an update.
6 Arbitrary symbolic and numerical factors
The methods we have developed for computing the modification to the Cholesky
factors of AA^T corresponding to the addition or deletion of columns in A can be
used to determine the effect on the Cholesky factors of a general symmetric positive
definite matrix M of any symmetric change of the form M̄ = M ± ww^T that preserves
positive definiteness. We briefly describe how Algorithms 1 through 6 are modified
for the general case.
Let M_j denote the nonzero pattern of the lower triangular part of M:

    M_j = { i ≥ j : m_{ij} ≠ 0 }.

The symbolic factorization of M [5, 6, 7, 8, 23] is obtained by replacing the union of
A_k terms in Algorithm 1 with the set M_j. With this change, the L_j of Algorithm 1 is
given by

    L_j = M_j ∪ ( ⋃_{c : π(c) = j} ( L_c \ {c} ) ).
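A minimal sketch of this symbolic factorization (names ours): each column pattern is
M_j augmented with the patterns its children pass up the elimination tree.

    def symbolic(M):
        # M[j] = set of row indices i >= j with m_ij != 0, for j = 0, ..., n-1.
        n = len(M)
        L = [set() for _ in range(n)]
        parent = [None] * n
        children = [[] for _ in range(n)]
        for j in range(n):
            L[j] = set(M[j])
            for c in children[j]:
                L[j] |= (L[c] - {c})          # inherit the child's pattern
            above = [i for i in L[j] if i > j]
            if above:
                parent[j] = min(above)        # pi(j) = min(L_j \ {j})
                children[parent[j]].append(j)
        return L, parent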
This leads to a change in Algorithm 2 for computing the multiplicities. The
multiplicity of an index i in L♯_j now also counts a contribution of one when i ∈ M_j.
The for loop involving the A_k terms in Algorithm 2 is replaced by the single statement
that adds the pattern M_j to L♯_j.
Entries are removed or added symbolically from AA T by the deletion or addition
of columns of A, and numerical cancellation is ignored. Numerical cancellation of
entries in M should not be ignored, however, because this is the only way that entries
can be dropped from M. When numerical cancellation is taken into account, neither
of the inclusions M_j ⊆ M̄_j nor M̄_j ⊆ M_j may hold. We resolve this problem by using
a symbolic modification scheme with two steps: a symbolic update phase in which
new nonzero entries in M̄ are taken into account, followed by a separate symbolic
downdate phase to handle entries that become numerically zero. Since each
modification step now involves an update phase followed by a downdate phase, we
attach (in this section) an overbar to quantities associated with the update and an
underbar to quantities associated with the downdate.
Let W be the nonzero pattern of w, namely W = { i : w_i ≠ 0 }. In the first
symbolic phase, entries from W are symbolically added to M_j for each j ∈ W; that is,
M̄_j = M_j ∪ { i ∈ W : i ≥ j }.
In the second symbolic phase, entries from W are symbolically deleted for each j ∈ W:

    M_j = { i ∈ M̄_j : m_{ij} ≠ 0 }.     (10)

In practice, we need to introduce a drop tolerance t and replace the equality
m_{ij} = 0 implicit in (10) by the inequality |m_{ij}| ≤ t. For a general matrix, the analogue of
Theorem 4.1 is the following:
Theorem 6.1 If α is the first index for which M̄_α ≠ M_α, then P(α) ⊆ P̄(α)
and L̄♯_i = L♯_i for all i ∈ P̄(α)^c. If β is the first index for which M_β ≠ M̄_β, then
the new path at β is contained in the old path P(β), and L♯_i is unchanged for all i ∈ P(β)^c.
In evaluating the modification in the symbolic factorization associated with
M̄ = M ± ww^T, we start at the first index α where M̄_α ≠ M_α and we march up
the path P̄(α) making changes to L♯_j for j ∈ P̄(α). In the second phase, we start at
the first index β where M_β ≠ M̄_β, and we march up the path P(β) making changes to
L♯_j for j ∈ P(β). The analogue of Algorithm 3 in the general case only differs in the
starting index (now α) and in the addition of the sets M̄_j \ M_j in each pass through
the j-loop:
Algorithm 7a (Symbolic update phase, general matrix)
α = first index for which M̄_α ≠ M_α
Case 1: α is the first node in the path
add M̄_α \ M_α to L♯_α
c = α, j = π̄(c)
while j ≠ 0 do
    add M̄_j \ M_j to L♯_j
    if π(c) = j then
        Case 2: c is an old child of j, possibly changed
        add L̄_c \ L_c to L♯_j
    else
        Case 3: c is a new child of j and a lost child of π(c)
        add L̄_c to L♯_j
        place c in lost-child-queue of π(c)
    end if
    Case 4: consider each lost child of j
    for each c′ in lost-child-queue of j do
        subtract L_{c′} from L♯_j
    end for
    c = j, j = π̄(c)
end while
Algorithm 7a
Similarly, the analogue of Algorithm 4 in the general case only differs in the
starting index (now β) and in the subtraction of the sets M_j \ M̄_j in each pass
through the j-loop.
Algorithm 7b (Symbolic downdate phase, general matrix)
β = first index for which M_β ≠ M̄_β
Case 1: β is the first node in the path
subtract M_β \ M̄_β from L♯_β
c = β, j = π(c)
while j ≠ 0 do
    subtract M_j \ M̄_j from L♯_j
    if π̄(c) = j then
        Case 2: c is an old child of j, possibly changed
        subtract L_c \ L̄_c from L♯_j
    else
        Case 3: c is a lost child of j and a new child of π̄(c)
        subtract L_c from L♯_j
        place c in new-child-queue of π̄(c)
    end if
    Case 4: consider each new child of j
    for each c′ in new-child-queue of j do
        add L̄_{c′} to L♯_j
    end for
    c = j, j = π(c)
end while
Algorithm 7b
Algorithms 5 and 6 are completely unchanged in the general case. They can be
applied after the completion of Algorithm 7b so that we know the location of new
nonzero entries in the Cholesky factor. They process the submatrix associated with
rows and columns in P(k) where k is the index of the first nonzero element of w.
When M has the form AA^T and when M̄ is obtained by either adding or deleting a
column in A, then, assuming no numerical cancellation, Algorithm 7b can be skipped
when we add a column to A since M_j ⊆ M̄_j for each j. Similarly, when a column
is removed from A, Algorithm 7a can be skipped since M̄_j ⊆ M_j for each j. Hence,
when Algorithm 7a followed by Algorithm 7b is applied to a matrix of the form
AA^T, only Algorithm 7a takes effect during an update, while only Algorithm 7b takes
effect during a downdate. Thus the approach we have presented in this section for an
arbitrary symmetric positive definite matrix generalizes the earlier approach where
we focused on matrices of the form AA^T.
7 Implementation issues
In this section, we discuss implementation issues in the context of updates and
downdates to a matrix of the form AA T . A similar discussion applies to a general
symmetric positive definite matrix. We assume that the columns of the matrix A
in the product AA T are all chosen from among the columns of some fixed matrix
B. The update and downdate algorithms can be used to compute the modification
to a Cholesky factor corresponding to additions or deletions to the columns of A.
The Cholesky factorization of the initial AA T (before any columns are added or
deleted) is often preceded by a fill reducing permutation P of the rows and columns.
For example, we could compute a permutation to reduce the fill for BB T since the
Cholesky factors of AA T will be at least as sparse as those of BB T by Proposition 3.2,
regardless of how the columns of A are chosen from the columns of B. Based on the
number of nonzeros in each column of the Cholesky factors of BB T , we could allocate
a static storage structure that will always contain the Cholesky factors of each AA T .
On the other hand, this could lead to wasted space if the number of nonzeros in the
Cholesky factors of AA T are far less than the number of nonzeros in the Cholesky
factors of BB T . Alternatively, we could store the Cholesky factor of the current AA T
in a smaller space, and reallocate storage during the updates and downdates, based
on the changes in the nonzero patterns.
The time for the initial Cholesky factorization of AA^T is given by (see [8])

    O( Σ_j |L_j|² ),

which is O(m³) if L is dense. Each update and downdate proceeds by first finding
the new symbolic factors and the required path (P̄(k) for updating, and P(k) for
downdating), using Algorithm 3 or 4 (modified to skip in constant time any column
j that does not change). Algorithms 5 and 6 are then applied to the columns in the
path. The lower triangular solve of the system Lv = w in Algorithm 5 can be done
in time

    O( Σ_{j ∈ P(k)} |L_j| )

during downdating, and

    O( Σ_{j ∈ P̄(k)} |L̄_j| )

during updating [14]. The remainder of Algorithm 5 takes time O(|P(k)|) during
downdating, and O(|P̄(k)|) during updating. Our sparse form of Algorithm 6,
discussed in §5.3, which computes the product L̄ = LG, takes time

    O( Σ_{j ∈ P(k)} |L_j| )

during downdating, and

    O( Σ_{j ∈ P̄(k)} |L̄_j| )

during updating. Since L_j ⊆ L̄_j during an update, it follows that the asymptotic
time for the entire downdate or update, both symbolic and numeric, is equal to the
asymptotic time of Algorithm 6. This can be much less than the O(m²) time required
by Algorithms 5 and 6 in the dense case.
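For reference, these operation counts are easy to tabulate from the column counts
alone; the helpers below (names ours) evaluate the two bounds just given.

    def initial_flops(col_counts):
        # O(sum_j |L_j|^2): cost bound for the initial factorization,
        # which becomes O(m^3) when every column of L is full.
        return sum(c * c for c in col_counts)

    def modification_work(path_nodes, col_counts):
        # O(sum over the path of |L_j|): bound for one sparse update/downdate.
        return sum(col_counts[j] for j in path_nodes)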
8 Experimental results
We have developed Matlab codes to experiment with all the algorithms presented in
this paper, including the algorithms of x6 for a general symmetric, positive definite
matrix. In this section, we present the results of a numerical experiment with a large
sparse optimization problem from Netlib [3]. The computer used for this experiment
was a Model 170 UltraSparc, equipped with 256MB of memory, and with Matlab
Version 4.2c.
8.1 Experimental design
We selected an optimization problem from airline scheduling (DFL001). Its constraint
matrix B is 6071-by-12,230 with 35,632 nonzeros. The matrix BB^T has 37,923
nonzeros in its strictly lower triangular part. Its Cholesky factor L_B has 1.49 million
nonzeros (with a fill-minimizing permutation P_B of the rows of B, described below)
and requires 1.12 billion floating-point operations and 115 seconds to compute. (The
LP Dual Active Set Algorithm does not require this matrix; however, as noted earlier,
this is an upper bound on the number of nonzeros that can occur during the execution
of the LP DASA.) This high level of fill-in in L_B is the result of the highly irregular
nonzero pattern of B. The basis matrix A_0 corresponding to an optimal solution of
the linear programming problem has 5,446 columns.
We wrote a set of Matlab scripts that implements our complete Cholesky
update/downdate algorithm, discussed in x7. We first found PB , using 101 trials
of Matlab's column multiple minimum degree ordering algorithm (colmmd [12]), 100
of them with a different random permutation of the rows of B. We then took the
best permutation found. With this permutation (P_B), the factor L of A_0 A_0^T has 831
thousand nonzeros, and took 481 million floating-point operations and 51 seconds to
compute (using Matlab's chol). Following the method used in LP DASA, we added
10^{−12} to the diagonal to ensure positive definiteness. We used the same permutation
P_B for the entire experiment. The initial symbolic factorization took 15 seconds
(Algorithm 2). It is this matrix and its factor that are required by the LP DASA.
We did not use Matlab's sparse matrix data structure, since Matlab removes
explicit zeros. Changing the nonzero pattern by a single entry can cause Matlab to
make a new copy of the entire matrix. This would defeat the asymptotic performance
of our algorithms. Instead, the column-oriented data structure we use for L, L♯, and
L̄ consists of three arrays of length |L_B|, an array of length m that contains indices
to the first entry in each column, and an array of length m holding the number of
nonzeros in each column. The columns are allocated so that each column can hold as
many nonzeros as the corresponding column of L_B, without reallocation.
Starting with the optimal basis of 5,446 columns, we added one column at a time
until the basis included all 12,230 columns, and then removed them one at a time
to obtain the optimal basis again. The total time and work required to modify the
factors was 76,598 seconds and 61.5 billion floating point operations. This divides
into
(a) 441 seconds for bookkeeping to keep track of the basis set,
(b) 6,051 seconds for the symbolic updates and downdates (Algorithms 3 and 4),
(c) 21,167 seconds to solve Lv = w,
(d) 4,977 seconds for the remainder of Algorithm 5, and
(e) 43,962 seconds for Algorithm 6.
Algorithm 6 clearly dominates the modification algorithm. The symbolic updates and
downdates took very little time, even though Algorithms 3 and 4 are not well suited
to implementation in Matlab. By comparison, using the Cholesky factorization to
solve a linear system LL^T x = b with a dense right-hand side b (using our column-
oriented data structure for L) at each step took a total of 93,418 seconds and 67.7
billion floating point operations (each solve takes O(|L|) time). Note that this is
much higher than the time taken to solve Lv = w in Algorithm 5, because v and w
are sparse. The time taken for the entire update/downdate computation would be
much smaller if our code were written in a compiled language. Solving one system
LL^T x = b with a dense right hand side (using the factorization of the optimal basis)
takes 5.53 seconds using our column-oriented data structure, 1.26 seconds using a
Matlab sparse matrix for L, and 0.215 seconds in Fortran 77.
8.2 Numerical accuracy
In order to measure the error in the computed Cholesky factorization, we evaluated
the difference ‖ĀĀ^T − L̄L̄^T‖_1, where L̄ is the computed Cholesky factor.
For the airline scheduling matrix of §8, L̄ has up to 1.49 million nonzeros and it is
impractical to compute the product L̄L̄^T after each update. To obtain a quick and
accurate estimate for ‖E‖_1, where E = ĀĀ^T − L̄L̄^T, we applied the strategy presented
in [16] (see [17, p. 139] for a symbolic statement of the algorithm) to estimate the
1-norm of the inverse of a matrix. That is, we used a gradient ascent approach to
compute a local maximum for the following problem:

    max ‖Ex‖_1 subject to ‖x‖_1 ≤ 1.

Since L̄ is used multiple times in the following algorithm, we copied our data structure
for L̄ into a Matlab sparse matrix.
Algorithm 8 (Estimate 1-norm of an m by m matrix E)
(ae is the current estimate for kEk)
while do
do
end for
while
Algorithm 8
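The following sketch shows the kind of matrix-free 1-norm estimation used here, in
the style of [16]; it relies only on products Ex and E^T x, and the names and the
termination safeguard are ours. For the symmetric E arising in this section, matvec
and rmatvec coincide.

    import numpy as np

    def est_norm1(matvec, rmatvec, m):
        # Gradient-ascent estimate of ||E||_1 over {x : ||x||_1 <= 1},
        # using only E @ x (matvec) and E.T @ x (rmatvec) products.
        x, est = np.full(m, 1.0 / m), 0.0
        while True:
            y = matvec(x)
            new_est = np.abs(y).sum()
            xi = np.where(y >= 0.0, 1.0, -1.0)   # subgradient of ||.||_1 at y
            z = rmatvec(xi)
            j = int(np.argmax(np.abs(z)))
            if new_est <= est or np.abs(z[j]) <= z @ x:
                return max(new_est, est)         # local maximum reached
            est = new_est
            x = np.zeros(m)
            x[j] = 1.0                           # restart at the best unit vector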
To improve the accuracy of the 1-norm estimate, we used Algorithm 8 three times.
In the second and third trials, a different starting vector x was used, as described in
[16]. Observe that Algorithm 8 only makes use of the product between the matrix
E and a vector. This feature is important in the context of sparse matrices since E
contains the term L̄L̄^T, and it is impractical to compute the product L̄L̄^T, but it is
practical to multiply L̄L̄^T by a vector. For the airline scheduling matrix of §8 the
values for ‖E‖_1 initially and at the end were 2.49×10^{−13}
and 1.54×10^{−10}, respectively. The estimates obtained using Algorithm 8 were identical
at the same steps. On the other hand, the times to compute ‖E‖_1 exactly at the initial
step and at step 6,784 were 137.9 and 279.5 seconds, while the times for three trials of
Algorithm 8 were 8.7 and 14.7 seconds, respectively (excluding the time to construct
the Matlab sparse matrix for L̄). Our methods were quite accurate for this problem.
After 6,784 updates and 6,784 downdates, or 13,568 changes in A, the 1-norm of E
increased by only a factor of 618! Figure 1 shows the estimated value of ‖E‖_1 computed
at regular intervals using Algorithm 8. The 1-norm of the matrix AA^T increases from
458.0 initially to 1107.0 at iteration 6,784, then returns to 458.0 at iteration 13,568.
Hence, the product of the computed Cholesky factors agrees with the product AA^T
to about 15 significant digits initially, while the products agree to about 12 significant
digits after 13,568 modifications of A.
Figure 1: Estimated 1-norm of the error E in the Cholesky factorization, as estimated by Algorithm 8 (step number on the horizontal axis, norm on the vertical axis).
8.3 Alternative permutations
Our methods are optimal in the sense that they take time proportional to the number
of nonzero entries in L that change at each step. However, they are not optimal with
respect to fill-in, since we assume a single initial permutation, and no subsequent
permutations. A fill-reducing ordering of BB T might not be the best ordering to use
for all basis matrices. A simple pathological example is the m-by-n matrix B, where
and the nonzero pattern of each of the column of B is a unique
pair of integers from the set f1; 2; mg. In this case, every element of BB T is
nonzero, while the nonzero pattern of AA T is arbitrary. As the matrix A changes, it
might be advantageous to compute a fill-reducing ordering of AA T if the size of its
factors grow "too large." A refactorization with the new permutation would then be
required.
We found a fill-reducing permutation P_A of the optimal basis A_0 A_0^T (again, the
best of 101 trials of colmmd). This results in a factor L with 381 thousand nonzeros,
requiring only 169 million floating point operations to compute. This is significantly
less than the number of nonzeros (831 thousand) and floating point operations (481
million) associated with the fill-reducing permutation for BB^T. We also computed
an ordering of AA^T at each step s, using colmmd just once, and then computed the
number of nonzeros in the factor if we were to factorize AA^T using this permutation P_s.
Although it only takes about 1 second to compute the ordering [12] and symbolic
factorization [13], it is not practical to use 101 random trials at each step.

Figure 2: Nonzeros in L using three different permutations (step number on the horizontal axis, nonzeros in L on the vertical axis).

Figure 2 depicts the nonzero counts of L for these three different permutations,
at each of the 13,568 steps. The fixed permutation PB results in the smooth curve
starting at 831 thousand and peaking at 1.49 million. The fixed permutation PA
results in a number of nonzeros in L that starts at 381 thousand and rises quickly,
leaving the figure at step 1,206 and peaking at 7.4 million in the middle. It surpasses
PB at step 267. Using a permutation, P s , computed at each step s, gives the
erratic line in the figure, starting at 390 thousand and peaking at 1.9 million in the
middle. These results indicate that it might be advantageous to start with the fixed
permutation PA , use it for 267 steps, and then refactorize with the permutation P s
computed at step 267. This results in a new factor with only 463 thousand nonzeros.
Near the center of the figure, however, the basis A includes most of the columns in
B, and in this case the PB permutation should be used.
9 Summary
We have presented a new method for updating and downdating the factorization LL T
of a sparse symmetric positive definite matrix AA T . Our experimental results show
that the method should be fast and accurate in practice. Extensions to an arbitrary
sparse symmetric positive definite matrix, M, have been discussed. We mention here
an additional extension to our work that would be useful.
We do not make use of the supernodal form of the factorization, nor do we use
the related compressed pattern of L [8]. Any method can be used for the numerical
factorization of the first basis matrix, of course, but the factor would then be copied
into a simple column-oriented data structure. Keeping the supernodal form has the
potential of reducing time taken by the symbolic factorization (Algorithm 2), the
symbolic update and downdate (Algorithms 3 and 4), and the numerical update and
downdate (dense matrix kernels could be used in Algorithms 5 and 6). However,
supernodes would merge during update, and split during downdate, complicating the
supernodal form of the factorization.
References
An approximate minimum degree ordering algorithm
Introduction to Algorithms
Distribution of mathematical software via electronic mail
Direct Methods for Sparse Matrices
Yale sparse matrix package
The design of a user interface for a sparse matrix package
A data structure for sparse QR and LU factorizations
Predicting structure in sparse matrix computations
Sparse matrices in MATLAB: design and implementation
An efficient algorithm to compute row and column counts for sparse Cholesky factorization
Sparse partial pivoting in time proportional to arithmetic operations
Methods for modifying matrix factorizations
Active set strategies in the LP dual active set algorithm
The role of elimination trees in sparse factorization
Algorithmic aspects of vertex elimination on graphs
A new implementation of sparse Gaussian elimination
On the efficient solution of sparse systems of linear and nonlinear equations
| numerical linear algebra;matrix updates;cholesky factorization;sparse matrices;mathematical software;direct methods
316530 | Decay Rates of the Inverse of Nonsymmetric Tridiagonal and Band Matrices. | It is well known that the inverse of an irreducible nonsingular symmetric tridiagonal matrix is given by two sequences of real numbers, {u_i} and {v_i}, such that (A^{-1})_{i,j} = u_i v_j for i ≤ j. A similar result holds for nonsymmetric matrices A. There the inverse can be described by four sequences {u_i}, {v_i}, {x_i}, and {y_i} with u_i v_i = x_i y_i. Here we characterize certain properties of A, i.e., being an M-matrix or positive definite, in terms of the u_i, v_i, x_i, and y_i. We also establish a relation between zero row sums and zero column sums of A and pairwise constant u_i, v_i, x_i, and y_i. Moreover, we consider decay rates for the entries of the inverse of tridiagonal and block tridiagonal (banded) matrices. For diagonally dominant matrices we show that the entries of the inverse strictly decay along a row or column. We give a sharp decay result for tridiagonal irreducible M-matrices and tridiagonal positive definite matrices. We also give a decay rate for arbitrary banded M-matrices. | Introduction
In many mathematical problems which give rise to a linear system of equations, the
system matrix is tridiagonal or block tridiagonal. For example, the numerical solution
of partial differential equations mostly leads to tridiagonal (one-dimensional
problems) and block tridiagonal (higher-dimensional problems) matrices. Therefore these
classes of matrices have been extensively studied. A review of this topic is given by
Meurant [Meu] for symmetric matrices. One of the most important results is established
by Gantmacher and Krein [GK], who proved that the inverse of a symmetric
irreducible tridiagonal matrix is a so-called Green matrix which is given by two
sequences {u_i} and {v_i} of real numbers. A similar result was established by Ikebe [I].
He proved that the inverse of a nonsymmetric irreducible tridiagonal matrix has a
similar structure as a symmetric one and is given by four sequences {u_i}, {v_i}, {x_i},
and {y_i}. We call such a matrix a generalized Green matrix.
Considering (block) tridiagonal matrices and their inverses, there are different kinds
of problems. On one hand one wants to find explicit formulas for the inverses of
tridiagonal matrices, and on the other hand one wants to characterize matrices
whose inverses are (block) tridiagonal. Here we continue considering tridiagonal
matrices and their inverses. We consider generalized Green matrices and show that
the inverse of a generalized Green matrix is tridiagonal. Thus, with Ikebe's result we
obtain a characterization of irreducible nonsingular tridiagonal matrices. Moreover,
we characterize tridiagonal M-matrices and tridiagonal symmetric positive definite
matrices in terms of the {u_i}, {v_i}, {x_i}, {y_i}. We explain the connection between zero
row sums and zero column sums of a tridiagonal matrix and pairwise constant u_i, v_i, x_i,
and y_i.
It has been observed that for large classes of banded matrices the entries of the
inverse tend to zero as |i − j| becomes larger. The rate of decay is important to construct
sparse approximations of the inverse as preconditioners. In [D] and [DMS] it is
shown that the entries of the inverse of a symmetric positive definite band matrix are bounded in
an exponentially decaying manner along a row or column. For nonsymmetric tridiagonal
M-matrices which are strictly diagonally dominant by rows and by columns,
we establish that the entries of the inverse indeed decay along each row and column
away from the diagonal. We also give a bound for this decay. This result generalizes
a result by Concus, Golub, and Meurant [CGM] for symmetric matrices.
For arbitrary tridiagonal M-matrices or positive definite matrices the entries of the
inverse do not decay along a row in general. However, here we establish a sharp
decay rate related to two diagonal entries for the inverses of tridiagonal matrices.
This result can be proved easily and continues the list of elegant results on
tridiagonal matrices. For symmetric matrices the result is related to a result of Vassilevski
[Vas] in which Cauchy-Bunyakowski-Schwarz constants are used. Therefore
we obtain a new proof of Vassilevski's result and a new way to compute
or estimate Cauchy-Bunyakowski-Schwarz constants for tridiagonal matrices. Moreover,
we establish a decay rate for the inverses of banded M-matrices.
2 Inverses of tridiagonal matrices
One of the most important results on symmetric tridiagonal matrices is the result
by Gantmacher and Krein [GK] which describes the structure of the inverse of these
matrices.
Theorem 2.1 Let A ∈ IR^{n,n} be symmetric, irreducible and nonsingular. Then A is
tridiagonal if and only if A^{-1} is a Green matrix.
A Green matrix C = (c_{i,j}) is given by two sequences {u_i}, {v_i} of real
numbers such that

    c_{i,j} = u_i v_j for i ≤ j,

or, equivalently by symmetry,

    c_{i,j} = u_j v_i for i ≥ j.

Green matrices can be described more elegantly as the Hadamard product (elemen-
twise product) of a so-called type D and a flipped type D matrix (see [Mar] and
the references therein).
For nonsymmetric tridiagonal matrices A ∈ IR^{n,n} with

    A = ( a_1   b_1
          c_1   a_2   b_2
                ...   ...    ...
                      c_{n−2}  a_{n−1}  b_{n−1}
                               c_{n−1}  a_n ),     (2.2)

Ikebe [I] proved that the inverse of A has a similar structure as in the symmetric
case.
Theorem 2.2 Let A ∈ IR^{n,n} be irreducible and tridiagonal. If A is nonsingular then
there exist four sequences {u_i}, {v_i}, {x_i}, {y_i} such that A^{-1} =: C = (c_{i,j}) is given
by

    c_{i,j} = u_i v_j for i ≤ j,  and  c_{i,j} = x_j y_i for i ≥ j.
In the following we discuss inverses of nonsymmetric tridiagonal matrices. Therefore
we define a generalized Green matrix related to Ikebe's result.
Definition 2.3 A matrix C = (c_{i,j}) ∈ IR^{n,n} is called a generalized Green matrix if
there exist four sequences {u_i}, {v_i}, {x_i}, {y_i} such that

    c_{i,j} = u_i v_j for i ≤ j,  and  c_{i,j} = x_j y_i for i ≥ j

(so that, in particular, u_i v_i = x_i y_i for all i).
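Theorem 2.2 and Definition 2.3 are easy to verify numerically: both triangles of the
inverse of an irreducible nonsingular tridiagonal matrix carry a rank-one structure.
In the sketch below (our own normalizations), the four sequences are read off directly
from the first and last rows and columns of C.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    A = (np.diag(rng.uniform(2.0, 3.0, n))
         + np.diag(rng.uniform(0.5, 1.0, n - 1), 1)
         + np.diag(rng.uniform(0.5, 1.0, n - 1), -1))
    C = np.linalg.inv(A)

    u, v = C[:, -1], C[0, :] / C[0, -1]    # upper triangle: c_ij = u_i v_j
    x, y = C[-1, :], C[:, 0] / C[-1, 0]    # lower triangle: c_ij = x_j y_i
    for i in range(n):
        for j in range(n):
            if i <= j:
                assert np.isclose(C[i, j], u[i] * v[j])
            if i >= j:
                assert np.isclose(C[i, j], x[j] * y[i])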
Next we give some properties of generalized Green matrices. For A ∈ IR^{n,n} let A_{ij}
denote the (n − 1) × (n − 1) matrix obtained from A by deleting the i-th row and
the j-th column. Moreover, for a generalized Green matrix define

    d_{i,j} = u_i v_j − x_j y_i.

We then have
Theorem 2.4 Let A ∈ IR^{n,n} be a generalized Green matrix. Then det A and the
neighboring minors det A_{i,i+1} and det A_{i+1,i} can be expressed in product form in
terms of the sequences {u_i}, {v_i}, {x_i}, {y_i} and the quantities d_{i+1,i} ((2.4) and (2.5)),
and

    det A_{i,j} = 0 for |i − j| ≥ 2.     (2.6)
Proof. We first consider det A. The case obviously true. For
we do Gaussian elimination from the right, i.e. we multiply A from the right with
a b
with
Using the inductive hypothesis we obtain the first equality of (2.4). Gaussian elimination
from the left gives the second equality.
To obtain (2.5) we multiply A i;i+1 from the left with
l s;s
l
for
l
l s;t
we get
det A
det A
d i+1;i
Similarly we obtain det A i+1;i .
Next we first consider A i;j with multiplying A i;j from the right with
l s;s
l
for
l s;t
we get
A i;j
where ~
A 1 is a diagonal matrix whose last diagonal entry is zero. Thus, det A i;j
Similarly we obtain det A i;j
With Ikebe's theorem (Theorem 2.2) and Theorem 2.4 we immediately obtain the
following theorem.
Theorem 2.5 Let A ∈ IR^{n,n} be irreducible and nonsingular. Then A is tridiagonal
if and only if A^{-1} is a generalized Green matrix.
Proof. The sufficient part of Theorem 2.5 is proved by Ikebe [I]. Now assume that
A^{-1} is a generalized Green matrix. By Cramer's rule,

    a_{i,j} = (−1)^{i+j} det (A^{-1})_{j,i} / det A^{-1}.

With Theorem 2.4 we get that A is tridiagonal. 2
Here we should mention that Barrett gives in [B] another characterization of tridiagonal
matrices.
Theorem 2.6 Let C = (c_{i,j}) ∈ IR^{n,n} with c_{k,k} ≠ 0 for all k. Then
C^{-1} is tridiagonal if and only if C has the triangle property, i.e.,

    c_{i,j} = c_{i,k} c_{k,j} / c_{k,k}

for all i < k < j and all j < k < i.
Note that neither of these theorems includes the other. In Theorem 2.5 there is a restriction
to irreducible matrices, while in Theorem 2.6 there is a restriction on the diagonal
entries of the inverse. In [A] Asplund gives a characterization in terms of the vanishing
of certain minors of the inverse. However, the beauty of Ikebe's result and Theorem 2.5
is the explicit form of the inverse, which can be used to establish more properties of
the inverse of a tridiagonal matrix.
Next we will characterize nonsingular tridiagonal M-matrices. A matrix A = (a_{i,j})
is called a nonsingular M-matrix if a_{i,j} ≤ 0 for i ≠ j and A^{-1} is a nonnegative ma-
trix. We characterize nonsingular tridiagonal M-matrices in terms of the sequences
{u_i}, {v_i}, {x_i}, {y_i} which give A^{-1}. To do so we define, for i = 1, …, n − 1,

    α_i = (u_i v_{i+1}) / (u_{i+1} v_i)  and  β_i = (x_i y_{i+1}) / (x_{i+1} y_i).     (2.7)
Theorem 2.7 Let A ∈ IR^{n,n} be irreducible and nonsingular. Then the following are
equivalent:
1. A is a tridiagonal M-matrix;
2. A^{-1} is a generalized Green matrix where all u_i, v_i, x_i, and y_i have the same sign
and α_i ≤ 1, β_i ≤ 1 for all i.
Proof.
is given by
a
Thus Theorem 2.4 yields to
a i;i+1
a i+1;i
If A is an M-matrix we have
Therefore
Hence ff i
If ff i
Since
for all i, i.e.
we obtain
Therefore
A \Gamma1 is a nonnegative matrix we then obtain that A is an M-matrix. 2
Corollary 2.8 Let A ∈ IR^{n,n} be symmetric, irreducible and nonsingular. Then the
following are equivalent:
1. A is a tridiagonal M-matrix;
2. A^{-1} is a Green matrix given by the sequences {u_i} and {v_i}, where all u_i
and v_i have the same sign and α_i ≤ 1 for all i.
In [MNNST] another characterization of tridiagonal M-matrices is given. There it is
used that for each nonsymmetric irreducible tridiagonal M-matrix A there exists a
diagonal matrix D such that DA is symmetric. Here we can avoid this symmetrization.
For symmetric positive definite matrices we immediately obtain the following
characterization.
Corollary 2.9 Let A ∈ IR^{n,n} be a nonsingular symmetric tridiagonal matrix. A is
positive definite if and only if there exists a diagonal matrix D = diag(d_{11}, …, d_{nn})
with |d_{ii}| = 1
such that
(DAD)^{-1} is a Green matrix of the form (2.8),
where all u_i and v_i in (2.8) have the same sign and α_i < 1 for all i.
These characterizations of tridiagonal M-matrices and tridiagonal positive definite
matrices will be useful in the next section.
3 Decay rates
In this section we consider the decay of the elements of the inverse of tridiagonal and
banded matrices. Several papers have already established results on this topic. In [D] and
[DMS] it is shown that the entries of the inverse of a symmetric positive definite band matrix are
bounded in an exponentially decaying manner along a row or column. Here we will
give some sharp decay rates and we will show that the entries of the inverse of a nonsymmetric
tridiagonal diagonally dominant matrix indeed decay along a row and column.
A matrix B = (b_{i,j}) ∈ IR^{n,n} is diagonally dominant by columns if

    |b_{i,i}| ≥ Σ_{j ≠ i} |b_{j,i}| for all i.

If

    |b_{i,i}| ≥ Σ_{j ≠ i} |b_{i,j}| for all i,

then B is called diagonally dominant by rows. If the above inequalities are strict,
then B is called strictly diagonally dominant by columns or by rows, respectively.
For A as in (2.2) being a symmetric tridiagonal M-matrix which is diagonally dominant
by columns (with a_1 > b_1 and a_n > b_{n−1}), Concus, Golub, and Meurant proved in
[CGM] that the sequence {u_i} is strictly increasing while {v_i} is strictly decreasing.
Thus the entries of A^{-1} indeed strictly decay along each row or column. Moreover,
they proved a geometric bound for this decay with a ratio ρ; if
A is diagonally dominant one has ρ < 1.
We will generalize this result to the nonsymmetric case. In the following we also
assume for simplicity that A is a tridiagonal Z-matrix, i.e.,

    A = (  a_1  −b_1
          −c_1   a_2  −b_2
                 ...    ...    ...
                      −c_{n−2}   a_{n−1}  −b_{n−1}
                                −c_{n−1}   a_n ),     (3.2)

with b_i, c_i ≥ 0.
The entries of the sequences {u_i}, {v_i}, {x_i}, and {y_i}, which give A^{-1}, can be computed
as follows.
Lemma 3.1 Let A ∈ IR^{n,n} be an irreducible tridiagonal Z-matrix as in (3.2). Then
the sequences of Theorem 2.2 can be chosen so that

    u_{i+1} = (a_i u_i − c_{i−1} u_{i−1}) / b_i  and  x_{i+1} = (a_i x_i − b_{i−1} x_{i−1}) / c_i

(with b_0 = c_0 = 0), and the v_i and y_i follow from the diagonal equations of A A^{-1} = I
together with u_i v_i = x_i y_i.
Proof. Multiplying the i-th row of A with the (i + 1)-th column of A^{-1} gives the
recurrence for the u_i.
Multiplying the i-th row of A^{-1} with the (i − 1)-th column of A gives the recurrence for the x_i.
Multiplying the i-th row of A with the i-th column of A^{-1} gives the v_i. Moreover,
we use that u_i v_i = x_i y_i.
For the next theorem we define two ratios ρ_1 and ρ_2 in terms of the entries a_i, b_i,
and c_i of A; under the diagonal dominance assumptions below they satisfy ρ_1 < 1
and ρ_2 < 1.
Theorem 3.2 Let A ∈ IR^{n,n} be an irreducible tridiagonal M-matrix.
If A is diagonally dominant by rows and if a_1 > b_1 and a_n > c_{n−1},
then {u_i} is
strictly increasing while {y_i} is strictly decreasing. Moreover, the entries of A^{-1}
decay at least at the geometric rate
ρ_1^{(j−i)}.
If A is diagonally dominant by columns and if a_1 > c_1 and a_n > b_{n−1},
then {x_i} is
strictly increasing while {v_i} is strictly decreasing. Moreover, the entries decay at least
at the geometric rate
ρ_2^{(j−i)}.
Proof. First assume that A is diagonally dominant by rows with a_1 > b_1 and a_n > c_{n−1}.
It is clear that u_2 > u_1. By the induction hypothesis and Lemma 3.1 we get that
the sequence {u_i} is strictly increasing.
To obtain the desired result for the y_i
we have to establish a recursive formula.
Multiplying the i-th row of A^{-1} with the i-th column of A,
multiplying the (i + 1)-th row of A^{-1} with the i-th column of A, and
multiplying the (i + 1)-th row of A with the (i + 1)-th column of A^{-1} gives three
equations involving y_i, y_{i+1}, and y_{i+2}.
Hence we obtain a recursive formula for the y_i.
We then have y_1 > y_2, and by induction y_i > y_{i+1}. The decay rates follow immediately
from the recursive formulas for the u_i and y_i and the special structure of A^{-1}.
Now assume that A is diagonally dominant by columns with a_1 > c_1 and a_n > b_{n−1}.
Obviously x_2 > x_1. By the induction hypothesis and Lemma 3.1 we get that
the sequence {x_i} is strictly increasing.
Again, for the v_i
we have to establish a recursive formula. Multiplying the i-th row
of A with the (i + 1)-th column of A^{-1} and
multiplying the (i + 1)-th row of A^{-1} with the (i + 1)-th column of A gives,
together with Lemma 3.1, a recursion for the v_i.
We then have v_1 > v_2, and by induction v_i > v_{i+1}. Again, the decay results follow
immediately. 2
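A quick numerical illustration of Theorem 3.2 (the matrix below is our own choice):
a tridiagonal M-matrix that is strictly diagonally dominant by rows and by columns,
whose inverse decays strictly along every row. The columns behave in the same way,
by the statement for column dominance.

    import numpy as np

    n = 8
    A = (3.0 * np.eye(n)
         - np.diag(np.ones(n - 1), -1)          # c_i = 1
         - 1.5 * np.diag(np.ones(n - 1), 1))    # b_i = 1.5
    C = np.linalg.inv(A)                        # entrywise positive
    for i in range(n):
        assert all(C[i, j] > C[i, j + 1] for j in range(i, n - 1))  # rightwards
        assert all(C[i, j] > C[i, j - 1] for j in range(i, 0, -1))  # leftwards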
For matrices which are not M-matrices but satisfy the other assumptions of Theorem
3.2 we obtain that the sequences of the absolute values strictly increase or
decrease, respectively.
Theorem 3.2 generalizes the result for symmetric tridiagonal matrices by Concus,
Golub, and Meurant [CGM]. In the following we specify the above result. We will
explain the connection between zero row and column sums of A and some pairwise constant
u_i, v_i, x_i, and y_i. Let

    r_i = Σ_j a_{i,j}  and  c_i = Σ_j a_{j,i}

denote the row and column sums of A. Then we have
Theorem 3.3 Let A be an irreducible nonsingular tridiagonal matrix. Then there
exist indices s and t such that the row sums satisfy r_i = 0 for the first s and the last
t + 1 indices i
if and only if
the u_i are pairwise constant at the beginning and the y_i at the end of the respective sequences.
Moreover, there exist indices s̃ and t̃ such that the analogous statement holds for the
column sums c_i and the sequences x_i and v_i.
Note that in Theorem 3.3 we do not assume A to be diagonally dominant or positive
definite or to be a Z-matrix. For symmetric matrices Theorem 3.3 says that the first
and last pairwise constant u i and v i gives zero row sums for the first and last rows in
A and vice versa. Moreover, if r i
then the inverse of A can be partitioned as
where C 11 2 IR s;s and C 33 2 IR t+1;t+1 . Here C 11 is a flipped type D matrix while C 33
is a type D matrix. C 31 and C 13 are flat matrices, i.e. all entries of these blocks are
equal. Moreover the rows of C 12 and the columns of C 23 are flat. This structure is
illustrated in the next example.
Example 3.4
Combining Theorem 3.3 with Theorem 3.2 we get
Corollary 3.5 Let A be an irreducible tridiagonal M-matrix. If the row sums r_i
vanish for the leading and trailing indices as in Theorem 3.3, then the sequences
{u_i} and {y_i} are constant there and strictly monotone elsewhere.
If the column sums c_i vanish for the leading and trailing indices,
then the sequences {x_i} and {v_i} are constant there and strictly monotone elsewhere.
For M-matrices or symmetric positive definite tridiagonal or banded matrices which
are not diagonally dominant, the elements of A^{-1} do not decrease in general along
a row or column. This can be seen in Example 3.6.
Example 3.6
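An instance of the phenomenon (the matrix is our own choice, offered as a stand-in
for Example 3.6): the following tridiagonal M-matrix is not diagonally dominant, and
the first row of its inverse grows away from the diagonal.

    import numpy as np

    A = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -3.0],   # moduli in this row exceed the diagonal
                  [ 0.0, -0.1,  1.0]])
    C = np.linalg.inv(A)
    assert C.min() >= 0.0        # A is an M-matrix: nonnegative inverse
    assert C[0, 2] > C[0, 1]     # no decay along row 1: c_13 = 1.25 > c_12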
However, we will establish a decay result for tridiagonal M-matrices and symmetric
positive definite matrices in the following. These results are based on the characterizations
of these matrices given in Theorem 2.7 and Corollary 2.9.
Theorem 3.7 Let A ∈ IR^{n,n} be a nonsingular irreducible tridiagonal matrix. Let
C = A^{-1} = (c_{i,j}). Then

    c_{i,i+k}² = ( ∏_{j=0}^{k−1} α_{i+j} ) c_{i,i} c_{i+k,i+k},     (3.5)

    c_{i+k,i}² = ( ∏_{j=0}^{k−1} β_{i+j} ) c_{i,i} c_{i+k,i+k},     (3.6)

where the α_i and β_i are as in (2.7).
Proof. Obviously, c_{i,i}, c_{i+k,i+k},
and c_{i,i+k}
are given by
c_{i,i} = u_i v_i, c_{i+k,i+k} = u_{i+k} v_{i+k}, and c_{i,i+k} = u_i v_{i+k}.
Moreover,

    u_{i+k} v_i = ( ∏_{j=0}^{k−1} α_{i+j} )^{-1} u_i v_{i+k}.

Hence

    c_{i,i+k}² = (u_i v_{i+k})² = ( ∏_{j=0}^{k−1} α_{i+j} ) u_i v_i u_{i+k} v_{i+k}
               = ( ∏_{j=0}^{k−1} α_{i+j} ) c_{i,i} c_{i+k,i+k}.

Similarly, we obtain (3.6). 2
In general one cannot say anything about the α_i
and β_i in (3.5) and (3.6). But
for M-matrices and for symmetric positive definite matrices we have the following
decay results.
Corollary 3.8 Let A ∈ IR^{n,n} be an irreducible tridiagonal M-matrix. Then for
C = A^{-1} we have

    c_{i,i+k} c_{i+k,i} = ( ∏_{j=0}^{k−1} (α_{i+j} β_{i+j})^{1/2} ) (c_{i,i} c_{i+k,i+k}).     (3.7)

Proof. The proof follows immediately from Theorem 2.7 and Theorem 3.7. 2
Corollary 3.9 Let A ∈ IR^{n,n} be an irreducible tridiagonal matrix which is positive
definite. Then for C = A^{-1}
we have

    c_{i,i+k} = ( ∏_{j=0}^{k−1} α_{i+j}^{1/2} ) (c_{i,i} c_{i+k,i+k})^{1/2}.     (3.8)

Proof. With Corollary 2.9 we have for a symmetric positive definite tridiagonal
matrix A that there exists a diagonal matrix D such that DA^{-1}D is given by (2.8)
with u_i v_i = x_i y_i.
Thus we have α_i = β_i
for all i. Since (3.5) of Theorem 3.7 is independent of multiplying
A from the left and the right with the same diagonal matrix, we obtain (3.8). 2
Note that Corollary 3.9 also includes the symmetric M-matrices. Since for M-matrices
and for positive definite matrices the α_i and β_i are at most one,
(3.7) and (3.8) give a sharp decay result for the entries of the inverse of tridiagonal
matrices. Moreover, these results can be proved easily and elegantly. The decay rates
are given in terms of A^{-1}. However, the next lemma and the next theorem will show
the relation of the α_i and β_i to some values determined directly from A.
For 1 ≤ k < n, partition A as

    A = ( A_11  A_12
          A_21  A_22 ),     (3.9)

where A_11 ∈ IR^{k,k} and A_22 ∈ IR^{n−k,n−k}, and split A into
A = D_k − N_k, where D_k
is
the block diagonal of A. If D_k
is nonsingular we define the block Jacobi matrix

    J_k = D_k^{-1} N_k.

We then have
Lemma 3.10 Let A ∈ IR^{n,n} be tridiagonal and nonsingular. For 1 ≤ k < n let D_k, N_k
be as in (3.9). Moreover, for C = A^{-1} consider the overlapping 2 × 2 submatrix

    C̃_k = ( c_{k,k}    c_{k,k+1}
            c_{k+1,k}  c_{k+1,k+1} )

with its splitting C̃_k = D̃_k − Ñ_k into diagonal and off-diagonal parts.
If D_k and D̃_k are nonsingular, then the overlapping 2 × 2 submatrix of the block Jacobi
matrix D_k^{-1} N_k (in rows and columns k and k + 1) coincides with the negative
transpose of D̃_k^{-1} Ñ_k; in particular, the two matrices have the same spectrum.
Here σ(T) denotes the spectrum of the matrix T.
Thus Lemma 3.10 says the following: If we partition A as a 2 × 2 block matrix, then
the overlapping 2 × 2 matrix of the block Jacobi matrix is equal to the negative
transposed Jacobi matrix of the related overlapping 2 × 2 submatrix of A^{-1}.
Proof. We immediately get for -
22 ) 11 a k+1;k 0
Now consider A partioned as in (3.9). A \Gamma1 is given by
\Gamma(A=A
here (A=A 11 ) denotes the Schur complement of A, i.e. (A=A 11
11 A 21 .
We have
where
; is the last row of A \Gamma1
11 . Hence
\Gamma(A=A
a k;k+1
Thus
c k+1;k+1
Similarly we show that
c k;k
Thus
Equation (3.13) follows from (3.12) and the special structure of J . 2
We then obtain
Theorem 3.11 Let A ∈ IR^{n,n} be an irreducible tridiagonal M-matrix, let C = A^{-1},
and let ρ_s be as in (3.10). Then

    c_{i,i+k} c_{i+k,i} = ( ∏_{l=0}^{k−1} ρ_{i+l}² ) c_{i,i} c_{i+k,i+k}.

Proof. For all s, equation (3.11) together with Lemma 3.10 gives

    ρ_s² = (c_{s,s+1} c_{s+1,s}) / (c_{s,s} c_{s+1,s+1}).

Hence the product over s = i, …, i + k − 1 telescopes, and the claim follows from
Theorem 3.7 and Corollary 3.8.
Corollary 3.12 Let A ∈ IR^{n,n} be tridiagonal symmetric positive definite and let
ρ_s be as in (3.10). Then c_{i,i+k} = ( ∏_{l=0}^{k−1} ρ_{i+l} ) (c_{i,i} c_{i+k,i+k})^{1/2}.
A similar result is proved by Vassilevski [Vas] (see also [A], p. 368). Vassilevski proved
for tridiagonal symmetric positive definite matrices A and their inverses C = A^{-1}
that

    c_{i,i+k} = ( ∏_{s=i}^{i+k−1} γ_s ) (c_{i,i} c_{i+k,i+k})^{1/2},     (3.14)

where

    γ_s = sup_{v,w} |v^T A_12 w| / (v^T A_11 v · w^T A_22 w)^{1/2},     (3.15)

and A is partitioned as in (3.9) with k = s. The constants γ_s are known as the Cauchy-
Bunyakowski-Schwarz constants. Since A is positive definite, γ_s < 1. Originally in
[Vas] the result (3.14) is obtained from a more general result for symmetric positive
definite block tridiagonal matrices. There norms of the blocks are compared and
the equality in (3.14) becomes an inequality (≤).
The proof given in [Vas] is very long and technical, at least for the point case. But
it is easy to prove that the ρ_s in Corollary 3.12 and the γ_s in (3.15) are the same.
Thus, our approach for the nonsymmetric case gives, even for symmetric matrices,
a much simpler proof of Vassilevski's result.
Moreover, Corollary 3.12 and Lemma 3.10 give another way to compute or estimate
the Cauchy-Bunyakowski-Schwarz constants.
Using comparison theorems for regular splittings we obtain from Corollary 3.12 and
Theorem 3.11:
Corollary 3.13 Let A ∈ IR^{n,n} be tridiagonal and irreducible. Let A = D − N be the
point Jacobi splitting and ρ = ρ(D^{-1}N). If A is an M-matrix,
then

    c_{i,i+k} c_{i+k,i} ≤ ρ^{2k} c_{i,i} c_{i+k,i+k}.

If A is symmetric positive definite then

    c_{i,i+k} ≤ ρ^k (c_{i,i} c_{i+k,i+k})^{1/2}.

Proof. The splittings A = D_k − N_k
of (3.9) are regular splittings, i.e., D_k^{-1}
and N_k
are nonnegative matrices. The same holds for the splitting A = D − N. Moreover,
we have

    N_k ≤ N for all k.

With Varga's comparison theorem for regular splittings ([Var], p. 90) we obtain
ρ(D_k^{-1} N_k) ≤ ρ(D^{-1} N) = ρ for all k.
Since (3.16) is independent of scaling, we obtain the result easily for symmetric
positive definite matrices.
Note that in both cases ρ < 1. Thus the spectral radius of the Jacobi iteration
matrix is an upper bound for the decay of the entries of the inverse of a tridiagonal
M-matrix or symmetric positive definite matrix.
In the following we will see that this is also true for banded M-matrices. A matrix
A is called a 2p + 1 banded matrix if a_{i,j} = 0 whenever |i − j| > p.
For M-matrices A the eigenvalue λ of smallest absolute value is real and the corresponding
eigenvector u is positive. Thus we can find a nonsingular diagonal matrix
X with Xe = u, where e is the vector of all ones. We then have
Theorem 3.14 Let A be a 2p + 1 banded M-matrix and C = A^{-1} = (c_{s,t}). Then
for any s, t with s ≤ t, the entries c_{s,t} and c_{t,s} are bounded by a constant (depending
on the scaling X) times

    ρ^{⌈(t−s)/p⌉},

where ρ = ρ(D^{-1}N) is the spectral radius of the Jacobi matrix of A.
Proof. Since
We first assume that s ? t. To find the t-th column of C we have to solve
is the zero vector except the t-th entry which is 1. To do so we consider
with J := D \Gamma1 N
If we define "
Thus
The last equality follows from (3.18) and jjxjj
(see [FP]). With (3.19) we
obtain for
l
and for
l
Now let
~
where ~
. Similar we partition x (k) and x. Then
x
Thus
s
Similarly we obtain by solving x T
s
Similarly we prove the case s ! t. 2
Theorem 3.14 can be easily extended to H-matrices A, i.e., matrices for which the
comparison matrix M(A), with

    M(A)_{i,i} = |a_{i,i}|  and  M(A)_{i,j} = −|a_{i,j}| for i ≠ j,

is an M-matrix. The decay rate is then the spectral radius of the Jacobi matrix of M(A).
Moreover, Theorem 3.14 can also be formulated for sparse matrices, not only banded
matrices, using the notation used by Meurant in [Meu].
We immediately get from Theorem 3.14
Corollary 3.15 Let A be a 2p + 1 banded M-matrix and C = A^{-1} = (c_{s,t}). Then
for any s, t with s ≤ t,

    c_{s,t} c_{t,s} ≤ ρ^{2⌈(t−s)/p⌉} c_{s,s} c_{t,t}.

Corollary 3.16 Let A be a 2p + 1 banded symmetric positive definite matrix and
C = A^{-1} = [c_{s,t}]. Then for any s, t with s ≤ t,

    c_{s,t} ≤ ρ^{⌈(t−s)/p⌉} (c_{s,s} c_{t,t})^{1/2}.
The advantage of Corollaries 3.15 and 3.16 compared with a theorem of Meurant
([Meu], Theorem 4.13) is that we just need ρ and diagonal entries of A^{-1} to
estimate the decay.
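Under our reading of Corollary 3.13, the product bound is straightforward to check
numerically; the script below (ours) does so for a small tridiagonal M-matrix.

    import numpy as np

    n = 6
    A = (3.0 * np.eye(n)
         - np.diag(np.ones(n - 1), -1)
         - 1.5 * np.diag(np.ones(n - 1), 1))
    D = np.diag(np.diag(A))
    rho = max(abs(np.linalg.eigvals(np.linalg.solve(D, D - A))))  # Jacobi radius
    C = np.linalg.inv(A)
    for i in range(n):
        for k in range(1, n - i):
            assert (C[i, i+k] * C[i+k, i]
                    <= rho**(2*k) * C[i, i] * C[i+k, i+k] + 1e-12)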
Acknowledgement
The author thanks Ludwig Elsner and Shmuel Friedland for helpful
comments. An earlier version of this paper, in which symmetric matrices are
considered, appeared as Preprint 96-025 of the Sonderforschungsbereich 343,
Universität Bielefeld.
References
Cambridge University Press
Block preconditioning for the conjugate gradient method
Inverses of band matrices and local convergence of spline projections
Decay rates for inverses of band matrices
Nonnegative matrices whose inverses are M-matrices
A Review on the inverse of symmetric tridiagonal and block tridiagonal matrices
Matrix Iterative Analysis
On Some Ways of Approximating Inverses of Banded Matrices in Connection with Deriving Preconditioners Based on Incomplete Block Factorizations
| inverses of tridiagonal matrices;tridiagonal matrices;decay rates
316687 | Identifying loops in almost linear time. | Loop identification is an essential step in performing various loop optimizations and transformations. The classical algorithm for identifying loops is Tarjan's interval-finding algorithm, which is restricted to reducible graphs. More recently, several people have proposed extensions to Tarjan's algorithm to deal with irreducible graphs. Havlak presents one such extension, which constructs a loop-nesting forest for an arbitrary flow graph. We show that the running time of this algorithm is quadratic in the worst-case, and not almost linear as claimed. We then show how to modify the algorithm to make it run in almost linear time. We next consider the quadratic algorithm presented by Sreedhar et al., which constructs a loop-nesting forest different from the one constructed by the Havlak algorithm. We show that this algorithm too can be adapted to run in almost linear time. We finally consider an algorithm due to Steensgaard, which constructs yet another loop-nesting forest. We show how this algorithm can be made more efficient by borrowing ideas from the other algorithms discussed earlier. | INTRODUCTION
Loop identification is an interesting control flow analysis problem that has several
applications. The classical algorithm for identifying loops is Tarjan's interval
finding algorithm [Tarjan 1974], which is restricted to reducible graphs. More re-
cently, several people have proposed extensions to Tarjan's algorithm to deal with
irreducible graphs. In this paper, we study and improve three recently proposed
algorithms for identifying loops in an irreducible graph.
The first algorithm we study is due to Havlak [1997]. We show that the running
time of this algorithm is quadratic in the worst-case, and not almost-linear as
claimed. We then show how to modify the algorithm to make it run in almost
linear time.
Author's address: G. Ramalingam, IBM T.J. Watson Research Center, P.O. Box 704, Yorktown
Heights, NY, 10598, USA. E-mail: rama@watson.ibm.com
Permission to make digital/hard copy of all or part of this material without fee is granted
provided that the copies are not made or distributed for profit or commercial advantage, the
ACM copyright/server notice, the title of the publication, and its date appear, and notice is given
that copying is by permission of the Association for Computing Machinery, Inc. (ACM). To copy
otherwise, to republish, to post on servers, or to redistribute to lists requires prior specific
permission and/or a fee.
© G. Ramalingam
We next consider the quadratic algorithm presented by Sreedhar et al. [1996]
which constructs a loop nesting forest different from the one constructed by the
Havlak algorithm. We show that this algorithm too can be adapted to run in
almost linear time.
In the final section, we present yet another loop nesting forest defined by Steensgaard
[1993], and discuss how aspects of the Sreedhar et al. algorithm can be
combined with Steensgaard's algorithm to improve the efficiency of Steensgaard's
algorithm.
2. TERMINOLOGY AND NOTATION
A flowgraph is a connected directed graph (V, E, START, END) consisting of a
set of vertices V and a set of edges E, with distinguished start and end vertices
START, END ∈ V. We will assume, without loss of generality, that START
has no predecessors. We will denote the number of vertices in the given graph by
n and the number of edges in the given graph by m.
We assume that the reader is familiar with depth first search [Hopcroft and
Tarjan 1973] (abbreviated DFS) and depth first search trees. (See [Cormen et al.
1990], for example.) An edge x ! y in the graph is said to be a DFS tree edge if
x is the parent of y in the DFS tree, a DFS forward edge if x is an ancestor other
than parent of y in the DFS tree, a DFS backedge if x is a descendant of y in the
DFS tree, and a DFS cross edge otherwise. (We will omit the prefix "DFS" if no
confusion is likely.) It is straightforward to augment DFS to compute information
that will help answer ancestor relation queries (of the form "is u an ancestor of v
in the DFS tree") in constant time. (See [Havlak 1997], for example.) We will refer
to the order in which vertices are visited during DFS as the "DFS order".
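One standard way to support the constant-time ancestor queries mentioned above is
to record, for every vertex, its preorder number and the largest preorder number in
its subtree; a minimal sketch (names ours) follows.

    def dfs_number(succ, root):
        # After the traversal, u is a DFS-tree ancestor of v exactly when
        # pre[u] <= pre[v] <= last[u], so each query takes O(1) time.
        pre, last, clock = {}, {}, [0]
        def visit(u):
            pre[u] = clock[0]
            clock[0] += 1
            for v in succ.get(u, ()):
                if v not in pre:
                    visit(v)
            last[u] = clock[0] - 1
        visit(root)
        return pre, last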
We also assume that the reader is familiar with the concepts of reducible and
irreducible flowgraphs. See [Aho et al. 1986] for a discussion of these concepts.
We denote the inverse Ackermann function by ff(i; j). The inverse Ackermann
function is a very slow growing function and may be considered to be a constant
for practical purposes. See [Cormen et al. 1990] for a discussion of this function.
There does not appear to be any single well-accepted definition of what a loop
is. For certain irreducible flowgraphs, each of the three algorithms considered in
this paper will identify a different set of loops. The suitability of each of these
algorithms depends on the intended application. However, the following few facts
hold true for all these three algorithms. A loop corresponds to a set of vertices in
the flowgraph. If L_x and L_y are two loops identified by one of these algorithms,
then either L_x and L_y will be mutually disjoint or one will be completely contained
in the other. Hence, the nesting (or containment) relation between all the loops can
be represented by a forest, which we refer to as the loop nesting forest (identified
by the corresponding algorithm).
A vertex belonging to a loop is said to be an entry vertex for that loop if it has
a predecessor outside the loop.
Given a flowgraph, the algorithms described in this paper (conceptually) modify
the flowgraph as the execution proceeds. Thus, when we refer to the flowgraph, it
is worth remembering that we don't mean a fixed input flowgraph, but a flowgraph
that constantly changes during the course of the execution. The changes to the
flowgraph, however, are not explicitly represented. Instead, a UNION-FIND data
structure is used to implicitly represent the changes in the flowgraph.
3. TARJAN'S ALGORITHM FOR REDUCIBLE GRAPHS
We begin with a brief description of Tarjan's loop nesting forest and his algorithm
to construct it. Consider a reducible graph. Every vertex w that is the
target of a backedge identifies a loop Lw with w as its header. Let Bw be the set
{z | z → w is a backedge}. The loop Lw consists of w and all other vertices in the
graph that can reach some vertex in Bw without going through w. For any two
loops L_x and L_y, either L_x and L_y must be disjoint or one must be completely
contained in the other. Hence, the nesting (or containment) relation between all
the loops can be represented by a forest, which yields the loop nesting forest. This
provides the definition of Tarjan's loop nesting forest (for reducible graphs), and
let us now see how this forest can be constructed efficiently.
Tarjan's algorithm performs a bottom up traversal of the depth-first search tree,
identifying inner (nested) loops first. When it identifies a loop, the algorithm "col-
lapses" it into a single vertex. (If X is a set of vertices, then by collapsing X we mean
replacing the set of vertices X by a single representative vertex r X in the graph.
Any other vertex y is a successor or predecessor of r X in the collapsed graph iff y is
a successor or predecessor of some vertex in X in the original graph.) A vertex w
visited during this traversal is determined to be a loop header if it has any incoming
backedges. As explained above, let Bw be the set {z | z → w is a backedge}. The
children of w (1) in the loop nesting forest are identified by performing a backward
traversal of the collapsed graph, identifying vertices that can reach some vertex in
Bw without going through w. Once the children of w have been identified, w and
all its children are merged (collapsed) together into a single vertex that identifies
the newly constructed loop Lw . The traversal then continues on to the next vertex.
In the implementation, the collapsing of vertices is achieved using the classical
UNION-FIND data structure. (See [Tarjan 1983; Cormen et al. 1990].) Thus, the
outermost loops identified so far are each maintained as a set. The FIND operation
on any vertex x returns the header of the outermost loop containing x (or the
vertex x itself if it is not in any loop). A set of vertices is collapsed by performing
a UNION operation on all the vertices in the set. A complete description of the
algorithm in pseudo-code appears in Figure 1.
Let us analyze the complexity of this algorithm. Procedure findloop is invoked
exactly once for every vertex. Hence, line [8] is executed once for every vertex.
Lines [10]-[16] are executed at most once for every vertex y, when the innermost
loop containing y is identified. As a result, the total cost of executing lines [8] and
[10]-[16] is to perform at most one FIND operation per edge in the original graph.
Similarly, lines [3]-[4] are executed at most once for every vertex z, which costs 1
UNION operation. The whole algorithm performs at most n UNION operations and
at most m FIND operations, where n denotes the number of vertices in the graph
and m denotes the number of edges in the graph.
(1) Strictly speaking, we mean the children of the node representing the loop Lw. However, we
simplify matters somewhat by using the header vertex w to represent the loop Lw in the loop
nesting forest. We can do this here since a vertex is the header of at most one loop.
[1] procedure collapse(loopBody, loopHeader)
[2]   for every z ∈ loopBody do
[3]     loop-parent(z) := loopHeader;
[4]     LP.union(z, loopHeader); // Use loopHeader as representative of merged set
[5]   end for

[6] procedure findloop(potentialHeader)
[7]   loopBody := {};
[8]   worklist := { LP.find(y) | y → potentialHeader is a backedge and y ≠ potentialHeader };
[9]   while (worklist is not empty) do
[10]    remove an arbitrary element y from worklist;
[11]    add y to loopBody;
[12]    for every predecessor z of y such that z → y is not a backedge do
[13]      if (LP.find(z) ∉ (loopBody ∪ {potentialHeader} ∪ worklist)) then
[14]        add LP.find(z) to worklist;
[15]      end if
[16]    end for
[17]  end while
[18]  if (loopBody is not empty) then
[19]    collapse(loopBody, potentialHeader);
[20]  end if

[21] procedure TarjansAlgorithm (G)
[22]   for every vertex x of G do loop-parent(x) := NULL; LP.add(x); end for
[23]   for every vertex x of G in reverse-DFS-order do findloop(x); end for
Fig. 1. Tarjan's algorithm for constructing the loop nesting forest of a reducible graph. LP is a
partition of the vertices of the graph. The function LP.add(z) initially places z in an equivalence
class by itself. The function LP.union(u, v) merges u's and v's classes into one, using v as the
representative element for the merged class. The function LP.find(z) returns the representative
element of z's equivalence class.
Hence, the whole algorithm runs in time O((m + n)·α(m + n, n)), if UNION-FIND is implemented using the standard
path compression and union-by-rank techniques [Tarjan 1983; Cormen et al. 1990].
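For concreteness, here is a minimal Python sketch (ours, purely illustrative) of such a
UNION-FIND structure; the add/union/find interface mirrors the LP operations of Figure 1,
and the extra "name" field lets union(u, v) report v as the representative element while
still linking by rank internally:

class UnionFind:
    # Disjoint sets with union by rank and path compression. Each class
    # carries a 'name' so that find() reports the designated representative
    # (the convention of Figure 1) regardless of which root the rank rule picks.
    def __init__(self):
        self.parent, self.rank, self.name = {}, {}, {}

    def add(self, x):
        self.parent[x] = x; self.rank[x] = 0; self.name[x] = x

    def _root(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:            # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def find(self, x):
        return self.name[self._root(x)]

    def union(self, x, y):
        rx, ry = self._root(x), self._root(y)
        if rx != ry:
            if self.rank[rx] > self.rank[ry]:
                rx, ry = ry, rx
            elif self.rank[rx] == self.rank[ry]:
                self.rank[ry] += 1
            self.parent[rx] = ry
        self.name[self._root(y)] = y             # y represents the merged class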
4. HAVLAK'S ALGORITHM
Havlak [1997] recently presented an extension of Tarjan's algorithm that handles irreducible
graphs as well. We show here that this algorithm is potentially quadratic,
even though Havlak describes the algorithm as being almost-linear. More precisely,
we show that the algorithm, in the worst case, may take Θ(n²) time, even for graphs
in which the number of edges is O(n).
Havlak's extension of Tarjan's algorithm modifies the loop body identification
step as follows. Given a vertex potentialHeader, the children of potentialHeader (in
the loop nesting forest) are identified by performing a backward traversal of the
collapsed graph, as before, but the traversal is restricted to the set of descendants of
potentialHeader in the DFS tree. In particular, lines [13]-[15] of Tarjan's algorithm
(Figure 1) are modified so that these lines are executed only if z is a descendant
of potentialHeader in the DFS tree; if z is not a descendant of potentialHeader
in the DFS tree, then the edge z → y is ignored and replaced by the edge z →
potentialHeader (in the collapsed flowgraph). (Note that in a reducible graph, z is
Fig. 2. A counterexample illustrating that Havlak's algorithm may perform a quadratic number
of UNION-FIND operations. Solid lines indicate DFS tree edges, while dashed lines indicate the
remaining graph edges.
guaranteed to be a descendant of potentialHeader in the DFS tree.)
The last step described is precisely the source of the problem. It is possible for
a single edge z → y to be processed multiple times, each time as an edge of the
form z → w, where w is the header of a loop containing y. The example shown in
Figure 2 illustrates this. Note that the vertices h_k down to h_1 are targets of
backedges and, hence, identified as loop headers in that order. Their loop bodies
are also constructed in that order. The edge a_k → h_k will be processed k times,
as it is replaced successively by edges a_k → h_i for every i < k. Similarly, every
edge a_i → h_i will be processed i times. Thus, the algorithm will end up performing
a quadratic number of UNION-FIND operations in this example.
The above example presents a lower bound on the complexity of Havlak's al-
gorithm. This is also an upper bound for Havlak's algorithm. In particular, the
modified loop in lines [12]-[15] will perform at most n FIND operations. Since lines
[10]-[16] may be performed once for every vertex, the whole algorithm performs O(n)
UNION operations and O(n²) FIND operations, which implies an upper bound of
O(n² · α(n², n)) on the running time of the algorithm. Since α(n², n) is O(1) (see
[Tarjan 1983]), the upper bound simplifies to O(n²).
Fig. 3. Modifying Havlak's algorithm to run in almost linear time. y → a_0 is an edge in the
flowgraph. a_1, …, a_k (shown as bold vertices) are the ancestors of a_0 in the DFS tree that are
identified as loop headers by Havlak's algorithm. z is the least common ancestor of a_0 and y
(in the DFS tree). a_j is a proper descendant of z, which is a descendant of a_{j+1}. In Havlak's
algorithm, the edge y → a_0 will not be "used" while constructing the bodies of loops with headers
a_1 through a_j. It will be used only in the construction of the loop body of a_{j+1}.
5. AN ALMOST LINEAR TIME VERSION OF HAVLAK'S ALGORITHM
We now describe a modification of Havlak's algorithm that does run in almost linear
time.
Given a vertex a_0 in a control flow graph, consider a_0's ancestors in Havlak's loop
nesting forest. (See Figure 3.) In particular, for every i ≥ 0, let a_{i+1} denote the
header of the innermost loop containing a_i (in Havlak's loop nesting forest). Thus,
a_1, a_2, …, is the sequence of loops containing a_0, from innermost to outermost, as
identified by their headers. Note that each a_{i+1} must be an ancestor of a_i in the
DFS tree. Now, consider any edge y → a_0. Consider the largest j such that y is not
a descendant of a_j in the DFS tree. In other words, y is a descendant of a_{j+1} but
not of a_j in the DFS tree.
Consider how Havlak's algorithm processes the edge y → a_0. For every i ≤ j,
when the body of the loop with header a_i is constructed, the edge y → a_{i−1} will
be considered. Since the source y of the edge is not a descendant of a_i, the edge
will be replaced by the edge y → a_i. Finally, when the body of the loop with header
a_{j+1} is constructed, the edge y → a_j will appear to be a "proper edge", and vertex
y will be added to the loop body.
What would be desirable is to replace the edge y → a_0 by the edge y → a_j in one
step instead of in j steps. It turns out that we can do this. Let z denote the least
common ancestor of a_0 and y in the DFS tree. Note that z must lie between a_j
and a_{j+1} (in the DFS tree). Consider the moment when Havlak's algorithm visits
z in the bottom up traversal of the DFS tree. At this point, loops with headers
a_1 through a_j have been identified, and the loop with header a_{j+1} is yet to be
[1] procedure markIrreducibleLoops(z)
[2]   t := loop-parent(z);
[3]   while (t ≠ NULL) do
[4]     u := RLH.find(t);
[5]     mark u as irreducible-loop-header;
[6]     t := loop-parent(u);
[7]     if (t ≠ NULL) then RLH.union(u, t); end if
[8]   end while

[9] procedure processCrossFwdEdges(x)
[10]   for every edge y → z in crossFwdEdges[x] do
[11]     add edge find(y) → find(z) to the graph;
[12]     markIrreducibleLoops(z);
[13]   end for

[14] procedure ModifiedHavlakAlgorithm (G)
[15]   for every vertex x of G do
[16]     loop-parent(x) := NULL; crossFwdEdges[x] := {};
[17]     LP.add(x); RLH.add(x);
[18]   end for
[19]   for every forward edge and cross edge y → x of G do
[20]     remove y → x from G and add it to crossFwdEdges[least-common-ancestor(y,x)];
[21]   end for
[22]   for every vertex x of G in reverse-DFS-order do
[23]     processCrossFwdEdges(x);
[24]     findloop(x); // Procedure findloop is the same as in Figure 1
[25]   end for
Fig. 4. The modified version of Havlak's algorithm. RLH is a second UNION-FIND data structure
used to map loop headers to the header of the innermost reducible loop containing them.
constructed. A FIND operation on a_0 will return a_j at this stage. This suggests
the following algorithm.
In an initial pass, we remove every cross edge and forward edge y ! x in the
graph and attach it to a list associated with the least common ancestor of y and
x. We can do this in almost-linear time (see [Tarjan 1979] or [Cormen et al. 1990,
problem 22-3]). We then run Havlak's algorithm, modified as follows. Whenever
the main bottom-up traversal visits a vertex w, it processes the list of cross/forward
edges y → x associated with w (by the first pass) and adds the edge FIND(y) →
FIND(x) to the graph. (It is immaterial whether we add the edge y → FIND(x)
or the edge FIND(y) → FIND(x) to the graph.) The modified algorithm appears
in Figure 4. Note that this modification implies that we can use procedure findloop
of Tarjan's algorithm unchanged.
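The least-common-ancestor preprocessing can itself be carried out with the same
UNION-FIND machinery, using Tarjan's offline algorithm [Tarjan 1979]. The Python
sketch below is our own illustration (the tree and query representations are our
assumptions, not the paper's); it answers all LCA queries in one traversal of the tree:

def offline_lca(children, root, queries):
    # Tarjan's offline LCA. children maps a vertex to its tree children;
    # queries is a list of pairs (u, v). Returns {(u, v): lca}.
    parent, rank, anc = {}, {}, {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x
    def union(x, y):
        rx, ry = find(x), find(y)
        if rx == ry: return rx
        if rank[rx] > rank[ry]: rx, ry = ry, rx
        elif rank[rx] == rank[ry]: rank[ry] += 1
        parent[rx] = ry
        return ry
    pending = {}
    for u, v in queries:
        pending.setdefault(u, []).append(v)
        pending.setdefault(v, []).append(u)
    answer, visited = {}, set()
    def dfs(u):                                  # recursion depth = tree height
        parent[u] = u; rank[u] = 0; anc[u] = u
        for c in children.get(u, ()):
            dfs(c)
            r = union(u, c)
            anc[r] = u                           # the merged set's ancestor is u
        visited.add(u)
        for v in pending.get(u, ()):
            if v in visited:
                answer[(u, v)] = answer[(v, u)] = anc[find(v)]
    dfs(root)
    return answer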
The modified algorithm runs in almost linear time and constructs the same loops
and loop nesting forest as Havlak's algorithm. However, it is not quite complete yet.
In addition to constructing the loop nesting forest, Havlak's algorithm also marks
loops as being reducible or irreducible. It is not as straightforward to distinguish
reducible loops from irreducible loops in the modified algorithm described above.
We now show how this extra piece of information can be computed, if desired.
Consider the example in Figure 3. The presence of the edge y → a_0 means that
the loops with headers a_1 through a_j are irreducible. Hence, when our algorithm
replaces the edge y → a_0 by the edge FIND(y) → a_j, as explained above, we need
to mark the loop headers a_1 through a_j as being irreducible. Procedure
markIrreducibleLoops of Figure 4 does this by walking up the loop-nesting tree containing
a_0. If we do this naively, as explained, the algorithm will end up being quadratic
again. We avoid such a quadratic behavior using the standard path compression
technique. In particular, consider lines [5]-[6], which mark a vertex u as an irreducible
loop header and traverse up to its parent t. Let us say that this step
scans the loop-tree edge u → t. We utilize a second UNION-FIND data structure,
RLH, so that we scan every loop-tree edge at most once. In particular, the UNION
operation in line [7] ensures that the tree edge u → t will never be scanned again, since
the FIND operation in line [4] skips past all previously scanned edges. This is safe
since there is no reason to mark a vertex (as irreducible) again if it has already
been marked. The resulting algorithm runs in almost linear time.
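The marking idea can be isolated in a small self-contained Python sketch (ours, not
the paper's code): given a forest as a parent map, the function below marks all
unmarked proper ancestors of a node, and compressed "skip" links guarantee that each
tree edge is scanned at most once over all calls, so the total work stays nearly linear:

def make_marker(tree_parent):
    # tree_parent maps each node to its parent (None at a root).
    marked = set()
    skip = {}            # for a marked node: a link toward the first unmarked ancestor

    def find(x):         # first unmarked node at or above x (None above a root)
        path = []
        while x is not None and x in marked:
            path.append(x)
            x = skip[x]
        for p in path:   # path compression
            skip[p] = x
        return x

    def mark_ancestors(z):
        t = find(tree_parent.get(z))
        while t is not None:
            marked.add(t)
            skip[t] = tree_parent.get(t)   # never scan the edge t -> parent again
            t = find(skip[t])
        return marked

    return mark_ancestors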
6. THE SREEDHAR-GAO-LEE ALGORITHM
Sreedhar et al. [1996] present a different algorithm for constructing a loop nesting
forest. This algorithm utilizes the DJ graph, which essentially combines the control
flow graph and its dominator tree into one structure. We will, however, simplify our
discussion of this algorithm by using just the control flow graph and the dominator
tree instead of the DJ graph. Let level(u) denote the depth of node u from the
root of the dominator tree, with the root being at level 0. Let V_i denote the set of
vertices at level i (i.e., the set of vertices u such that level(u) = i), and let p denote
the maximum level in the dominator tree.
The Sreedhar et al. algorithm processes the vertices in the dominator tree bottom
up. In particular, each level l from p down to 1 is processed as follows. The first
step identifies all reducible loops at level l. All vertices at level l are scanned, and
any vertex n that has one or more incoming backedges whose source is dominated
by n is identified as the header of a reducible loop. The body of such a reducible
loop is identified just as in Tarjan's algorithm, traversing the graph backwards from
the sources of the backedges, identifying vertices that can reach these backedges
without going through n. The reducible loop is then collapsed into a single vertex,
just as in Tarjan's algorithm.
If any vertex n at level l has one or more incoming backedges whose source is
not dominated by n, then n is one of the entries to an irreducible loop. Once all
vertices at level l have been processed to identify reducible loops, we construct the
irreducible loops of level l. (We do this only if some vertex n at level l has one
or more incoming backedges whose source is not dominated by n.) This requires
processing the subgraph of the (collapsed) flowgraph consisting of all vertices at
level greater than or equal to the current level l (that is, the set of vertices
V_l ∪ V_{l+1} ∪ … ∪ V_p) to identify its strongly connected components (SCCs).
Each non-trivial strongly
connected component of this graph is an irreducible loop of level l, and is collapsed
to a single vertex. By a "non-trivial SCC", we mean a SCC consisting of more than
one vertex.
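For reference, one standard way to compute the non-trivial SCCs of such a subgraph
is Kosaraju's two-pass algorithm; the following Python sketch (ours, illustrative)
restricts the computation to a given vertex set:

def nontrivial_sccs(vertices, succ):
    # Kosaraju's algorithm on the subgraph induced by 'vertices';
    # succ maps a vertex to its successors. Returns SCCs with >= 2 vertices.
    vs = set(vertices)
    pred = {v: [] for v in vs}
    for v in vs:
        for w in succ.get(v, ()):
            if w in vs:
                pred[w].append(v)
    order, seen = [], set()
    for s in vs:                                  # pass 1: finishing order
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter([w for w in succ.get(s, ()) if w in vs]))]
        while stack:
            v, it = stack[-1]
            w = next((u for u in it if u not in seen), None)
            if w is None:
                order.append(v)
                stack.pop()
            else:
                seen.add(w)
                stack.append((w, iter([u for u in succ.get(w, ()) if u in vs])))
    sccs, assigned = [], set()
    for s in reversed(order):                     # pass 2: reversed graph
        if s in assigned:
            continue
        comp, work = [], [s]
        assigned.add(s)
        while work:
            v = work.pop()
            comp.append(v)
            for w in pred[v]:
                if w not in assigned:
                    assigned.add(w)
                    work.append(w)
        if len(comp) > 1:
            sccs.append(comp)
    return sccs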
We now establish a property of the loops identified by this algorithm, which will
be useful subsequently.
Lemma 1. A vertex can be an entry vertex of at most one irreducible loop.
Proof. Note that any two irreducible loops (identified) at the same level l are
mutually disjoint. Hence any two such loops cannot have a common entry vertex.
Let L be an irreducible loop identified at level l. We show below that any entry
vertex of L must also be a vertex at level l. This immediately implies that irreducible
loops belonging to different levels cannot share a common entry vertex either, and
the lemma follows.
Let F denote the subgraph of the dominator tree consisting only of vertices at
a level greater than or equal to l. Thus, F is a forest consisting of subtrees of the
dominator tree. First note that the loop L must consist of vertices from at least
two different trees in F - a loop consisting of vertices from only one tree must be
a reducible loop of level l or a loop of level greater than l.
Let u and v be vertices belonging to different trees in F , and let w be the root
of the tree containing v. Then, any path (in the flowgraph) from u to v must pass
through w. (Otherwise, w would not be a dominator of v.) Hence, if the loop L
contains a vertex v, it must also contain the root of the tree in F that contains v.
Now, let v be a vertex in loop L. Let w be the root of the tree containing v, and
assume that v 6= w. Then, any predecessor x of v (in the flowgraph) must also be
in the tree rooted at w. This is a straightforward property of the dominator tree.
It follows that x must also be in the loop L, since there is a path from x to a vertex
in the loop (namely v), and there is a path from a vertex in the loop (namely w)
to x.
This establishes that any vertex in loop L with a predecessor outside L must be
the root of a tree in F . But the roots of the trees in F are precisely the vertices at
level l. The result follows.
Sreedhar et al. show that the algorithm described above runs in time O(m·α(m, n) +
km), where k is the number of levels at which the strongly connected component
algorithm had to be invoked. In the worst case, k can be O(n), resulting in a
quadratic algorithm.
The example in Figure 5 illustrates the source of the quadratic behavior in this
algorithm, which is the repeated application of the SCC algorithm. Consider the
processing done at level i for this example. This level contains an irreducible loop
consisting of the vertices b, c, d, and e. Constructing this irreducible loop requires
identifying the SCCs of the graph consisting of vertices a_1 through a_k, b, c, d, e, and
vertices f_1 through f_k. Notice that vertices a_1 through a_k and f_1 through f_k are
visited, but they do not belong to any non-trivial SCC. When we similarly apply
the SCC algorithm at level i−1 (or at any lower level), we may end up visiting
vertices a_1 through a_k and f_1 through f_k again. In the worst case we may end up
visiting these vertices i times, resulting in the quadratic complexity.
We now show that a careful implementation of the SCC identification phase can
ensure that the algorithm runs in almost linear time. Observe that once the vertices
b, c, d, and e are collapsed into a single vertex, say L, representing the irreducible
loop, these vertices will never be visited again. (It is true that edges such as a_k → b
may be visited later on; however, these edges actually represent edges incident on L
in the collapsed graph, and the cost of visiting these edges can be attributed to the
cost of visiting vertex L.)
Fig. 5. An example illustrating the source of the quadratic behavior in the Sreedhar et al. algo-
rithm. Solid edges belong to both the control flow graph and the dominator tree, while the dashed
edges are control flow graph edges that are not in the dominator tree.
Our goal is to perform the irreducible loop construction at level i such that a
vertex x at a level j > i is visited only if x belongs to some irreducible loop at level
i. We do this as follows.
Consider any strongly connected component. Consider the vertex u of the component
that is visited first during depth first search. Clearly, all other vertices in
the component will be descendants of this vertex in the DFS tree. Thus, if we
start with the set of incoming backedges of u, and traverse the graph backwards,
restricting the traversal to vertices that are descendants of u in the DFS tree, we
can identify all the vertices belonging to u's strongly connected component without
visiting vertices not in u's SCC.
The above process is very similar to the one used by Havlak's algorithm (and
Tarjan's algorithm) to identify the loop body corresponding to a potential header
vertex. However, note that if we apply the same process, but start from a vertex that
was not the first one in its SCC to be visited during DFS, then we will not identify
the complete SCC. Thus, while Havlak's and Tarjan's algorithms visit potential
header vertices in reverse DFS order, we do not want to visit vertices in that order.
Instead, we perform the irreducible loop construction at a level l by visiting the
set of vertices at level l in DFS order. If the visited vertex u belongs to an irreducible
loop (of level l) that has already been constructed, we skip the vertex and continue
on to the next vertex. Otherwise, if it has an incoming backedge, then it belongs
to an irreducible loop. The body of this loop is identified by traversing backwards
from the sources of all such backedges, restricting the traversal to descendants of u
in the DFS tree.
The modified algorithm appears in Figure 6. A few words of explanation: Both
Tarjan's algorithm and Havlak's algorithm identify at most one loop per header
vertex. This allowed us to represent a loop by its header vertex in the loop nesting
forest. However, the Sreedhar-Gao-Lee algorithm may identify up to two loops per
header vertex, a reducible loop and an irreducible loop. Consequently, we do not
use the header vertex itself to represent the loop in the loop nesting forest; instead,
[1] procedure findloop(header, worklist)
[2]   if (worklist is not empty) then
[3]     create a new vertex loopRep with same predecessors as header;
[4]     loopBody := {header}; processed[header] := true;
[5]     while (worklist is not empty) do
[6]       remove an arbitrary element y from worklist;
[7]       add y to loopBody;
[8]       processed[y] := true;
[9]       for every predecessor z of y do
[10]        if (LP.find(z) is not a descendant of header in the DFS tree) then
[11]          add edge LP.find(z) → loopRep to the graph;
[12]        elsif (LP.find(z) ∉ (loopBody ∪ worklist)) then
[13]          add LP.find(z) to worklist;
[14]        end if
[15]      end for
[16]    end while
[17]    collapse (loopBody, loopRep);
[18]  end if

[19] procedure ModifiedSreedharGaoLeeAlgorithm (G)
[20]   for every vertex x of G do
[21]     processed[x] := false; LP.add(x);
[22]   end for
[23]   for i := p downto 1 do
[24]     for every x ∈ level[i] do
[25]       worklist := { LP.find(y) | (y → x is a backedge) and (x dominates y) };
[26]       findloop(x, worklist);
[27]     end for
[28]     for every x ∈ level[i] in DFS-order do
[29]       if (not processed[x]) then
[30]         worklist := { LP.find(y) | (y → x is a backedge) and not (x dominates y) };
[31]         findloop(x, worklist);
[32]       end if
[33]     end for
[34]   end for
Fig. 6. The modified version of the Sreedhar-Gao-Lee algorithm.
we use a new representative vertex to do this. Further, the algorithm does not
explicitly identify loops consisting of a single vertex, but can be modified to do so
if desired.
Note that we can construct, for each level, a list of all the vertices in that level in
DFS order easily enough: we initialize all such lists to be empty, visit all vertices
in DFS order, and append each visited vertex to the end of the list corresponding
to its level, as in the following sketch.
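A minimal Python sketch of this bookkeeping (ours, purely illustrative):

def level_lists(dfs_order, level):
    # dfs_order: all vertices in DFS order; level: vertex -> dominator-tree depth.
    # Returns a map from each level to its vertices, each list in DFS order.
    lists = {}
    for v in dfs_order:
        lists.setdefault(level[v], []).append(v)
    return lists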
Let us now analyze the complexity of the algorithm. Observe that lines [6]-[15]
get executed at most once for every vertex y, and that these lines perform at most
indegree(y) FIND operations. However, these lines are executed not only for the
vertices that exist in the original graph, but also for the vertices that are created in
line [3]. The vertices created in line [3] are representatives of loops in the collapsed
graph. Hence, the complexity of the algorithm depends on the number of such
Fig. 7. An example illustrating that the Sreedhar et al. algorithm does not identify all loops
identified by the Steensgaard algorithm.
representatives created and their indegrees.
The created vertices fall into two categories: reducible loop representatives and
irreducible loop representatives. Every vertex h in the original graph is the header
of at most one reducible loop, which results in the creation of at most one reducible
loop representative r h , whose indegree is bounded by the indegree of h.
Every irreducible loop has two or more entry vertices (which are vertices of the
original graph), and the indegree of the representative of an irreducible loop is
bounded by the sum of the indegrees of its entry vertices. A vertex (in the original
graph) can be the entry vertex of at most one irreducible loop (Lemma 1). Hence, the sum of
the indegrees of all irreducible loop representatives is bounded by m, the number
of edges in the original graph.
As a result, the whole algorithm performs O(n) UNION operations and O(m)
FIND operations, resulting in a complexity of O(m·α(m, n)).
7. STEENSGAARD'S LOOP NESTING FOREST
In this section, we consider yet another loop nesting forest, defined by Steensgaard
[1993]. We outline Steensgaard's algorithm for constructing this forest, as it also
serves as a constructive definition of this structure. Steensgaard identifies the loops
in a graph in a top down fashion, identifying outer loops first. The nontrivial
strongly connected components of the given graph constitute its outermost loops.
A vertex of a loop is said to be a generalized entry node of that loop if it has a predecessor
outside the strongly connected component. Any edge from a vertex inside
a loop to one of its generalized entry nodes is said to be a generalized backedge. The
"inner loops"contained in a given loop are determined by identifying the strongly
connected components of the subgraph induced by the given loop, after all its generalized
backedges are eliminated. This iterative process yields the loop nesting
forest.
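The construction is easy to express recursively. The Python sketch below is our own
illustration, not Steensgaard's code; it leans on networkx for the SCC computation,
and it computes generalized entries with respect to the full graph (an assumption on
our part):

import networkx as nx   # any SCC routine would serve equally well

def steensgaard_forest(G):
    # Top-down construction: the loops of a graph are its non-trivial SCCs;
    # inner loops are found by deleting generalized backedges and recursing.
    def build(H):
        loops = []
        for comp in nx.strongly_connected_components(H):
            if len(comp) < 2:
                continue
            # generalized entry nodes: vertices with a predecessor outside
            # the component (taken here in the full graph G)
            entries = {v for v in comp
                       if any(p not in comp for p in G.predecessors(v))}
            sub = H.subgraph(comp).copy()
            # generalized backedges: edges from inside the loop to an entry
            sub.remove_edges_from([(u, v) for u, v in list(sub.edges())
                                   if v in entries])
            loops.append((frozenset(comp), build(sub)))
        return loops
    return build(G)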
Let us briefly consider the differences between the forest created by Steensgaard's
algorithm and the forest created by the Sreedhar-Gao-Lee algorithm. One differ-
ence, explained in [Sreedhar et al. 1996], is that the Sreedhar-Gao-Lee algorithm
may identify some more reducible loops than Steensgaard's algorithm. If the extra
step in the Sreedhar-Gao-Lee algorithm to construct reducible loops is eliminated,
this difference disappears. However, it is also possible for the Sreedhar-Gao-Lee
algorithm to identify fewer loops than Steensgaard's algorithm does.
The problem is that it is not possible, in the Sreedhar-Gao-Lee forest, for one
irreducible loop to be nested inside another irreducible loop, if their entry vertices
are at the same level in the dominator tree. The example shown in Figure 7
illustrates this. In this example, the Steensgaard algorithm identifies an outer loop
consisting of vertices u, v, w and x, and an inner loop consisting of vertices w and x.
In contrast, the Sreedhar-Gao-Lee algorithm identifies only the one loop consisting
of u, v, w and x.
We now show that Steensgaard's loop nesting forest can be constructed more
efficiently by borrowing the ideas described in Section 6 and [Sreedhar et al. 1996].
We simply modify the irreducible loop construction phase of the algorithm described
in Section 6 as follows: instead of stopping after identifying the strongly connected
components, we use a Steensgaard-like algorithm to iteratively find other loops
nested inside the irreducible loop. In other words, instead of applying the strongly
connected components algorithm to the subgraph (of vertices at a level greater than
or equal to the current level), we apply Steensgaard's algorithm to the subgraph.
(Symmetrically, it is also possible to modify Steensgaard's algorithm by replacing
the use of a strongly connected components algorithm by the algorithm presented
in Section 6.)
The resulting algorithm has the same asymptotic worst-case complexity as Steens-
gaard's original algorithm, which is quadratic in the size of the graph. However, in
practice, it can potentially be more efficient than Steensgaard's original algorithm,
since the number of iterations Steensgaard's algorithm performs within a single
irreducible loop (identified by the Sreedhar et al. algorithm) is likely to be much
smaller than the number of iterations it would perform for the whole graph.
8. CONCLUSION
In this paper, we have examined three algorithms for identifying loops in irreducible
flowgraphs and shown how these algorithms can be made more efficient. As these
three algorithms construct potentially different loop nesting forests, the question
arises as to what the relative advantages of these different algorithms are.
Havlak's approach has the disadvantage that the set of loops found and the loop
nesting forest constructed are dependent on the depth-first spanning tree, which is
itself dependent on the ordering of the outgoing edges of every vertex. In particular,
what is represented as a single irreducible loop with k entry vertices in the Sreedhar-
Gao-Lee forest may be represented as k irreducible loops nested within each other
in some arbitrary order in Havlak's forest. Hence, we believe that the Sreedhar-
Gao-Lee loop nesting forest is more natural than Havlak's loop nesting forest.
However, (the modified version of) Havlak's algorithm is simpler to implement
than the (modified version of the) Sreedhar-Gao-Lee algorithm, since it does not
require the construction of the dominator tree. It would be a worthwhile exercise to
adapt Havlak's algorithm to directly construct the Sreedhar-Gao-Lee loop nesting
forest. We think that this can be done, but have not formally established this yet.
On the other hand, the Sreedhar-Gao-Lee loop nesting forest and Steensgaard's
loop nesting forest are somewhat incomparable. As explained in Section 7, the ideas
behind both these approaches can be combined to construct a loop nesting forest
more refined than either one, but the resulting algorithm is more expensive than
the almost linear time variation we have presented for constructing the Sreedhar-
Gao-Lee forest. Whether this more refined forest is worth the increased algorithm
complexity depends on the intended application.
ACKNOWLEDGEMENTS
We thank John Field, V. Sreedhar and the anonymous referees for their helpful
comments.
--R
A. V. Aho, R. Sethi, and J. D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley, 1986.
T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, 1990.
P. Havlak. Nesting of reducible and irreducible loops. ACM Transactions on Programming Languages and Systems, 1997.
J. Hopcroft and R. E. Tarjan. Efficient algorithms for graph manipulation. Communications of the ACM, 1973.
V. C. Sreedhar, G. R. Gao, and Y.-F. Lee. Identifying loops using DJ graphs. ACM Transactions on Programming Languages and Systems, 1996.
B. Steensgaard. Sequentializing program dependence graphs for irreducible programs. Technical Report MSR-TR-93-14, Microsoft Research, 1993.
R. E. Tarjan. Testing flow graph reducibility. Journal of Computer and System Sciences, 1974.
R. E. Tarjan. Applications of path compression on balanced trees. Journal of the ACM, 1979.
R. E. Tarjan. Data Structures and Network Algorithms. SIAM, 1983.
irreducible flowgraphs;loops
317048 | Similarity Measures. | With complex multimedia data, we see the emergence of database systems in which the fundamental operation is similarity assessment. Before database issues can be addressed, it is necessary to give a definition of similarity as an operation. In this paper, we develop a similarity measure, based on fuzzy logic, that exhibits several features that match experimental findings in humans. The model is dubbed Fuzzy Feature Contrast (FFC) and is an extension to a more general domain of the Feature Contrast model due to Tversky. We show how the FFC model can be used to model similarity assessment from fuzzy judgment of properties, and we address the use of fuzzy measures to deal with dependencies among the properties. | 1 Introduction
Comparing two images, or an image and a model, is the fundamental operation for many Visual
Information Retrieval systems. In most systems of interest, a simple pixel-by-pixel comparison
won't do: the difference that we determine must bear some correlation with the perceptual difference
of the two images, or with the difference between two adequate semantics associated to the two
images.
Measuring meaningful image similarity rests on two elements: finding a set
of features which adequately encodes the characteristics that we intend to measure, and endowing
the feature space with a suitable metric. Since the same feature space can be endowed with an
infinity of metrics, the two problems are by no means equivalent, nor does the first subsume the
second.
In this paper we consider the problem of measuring dissimilarities in feature spaces. In a number
of cases, after having selected the right set of features, and having characterized an image as a point
in a suitable vector space, researchers make uncritical and unwarranted assumptions about
the metric of the space. Typically, the feature space is assumed to be Euclidean.
We set out to analyze alternatives to this assumption. In particular, we will analyze some similarity
measures proposed in the psychological literature to model human similarity perception, and will
show that all of them challenge the Euclidean distance assumption in non trivial ways.
We will consider the problem of (dis)similarity measurement, as opposed to matching. Matching
and dissimilarity measurement are not seldom based on the same techniques, but they differ in
emphasis and applications. Matching techniques are developed mostly for recognition of objects
under several conditions of distortion [22]. Similarity measures, on the other hand, are used in
applications like image databases, in which the query image is just a very partial model of the
user's desires and the user looks for images similar, according to some defined criterion, to it [1].
In query by example, the user selects an image, or draws a sketch, that reminds her in some way of
the image she wants to retrieve. Images similar to the example according to the given criteria are
retrieved and presented.
In a typical matching application, we expect a comparison to be successful for images very
close to the model, and unsuccessful for images different from the query. The degree of similarity
of images different from the model is of no interest to us, as long as it remains below a suitable
acceptance threshold. On the other hand, database applications require a similarity measure that
will accurately predict perceptual similarity for all images "reasonably" similar to the query.
This paper presents and analyzes various definitions of similarity measures for feature spaces.
We will specifically consider the determination of similarity between images, but the measures that
we present apply in more general situations. It is obviously impossible to decouple the choice
of the similarity measure from the choice of features. In this paper, however, we will leave the
features in the background. There is an extensive literature that deals with the choice of features
for most problems of interest, and to which we refer the reader [8, 9]. We are interested in finding
characteristics of the distance measure that are relatively independent of the choice of the feature
space.
This paper is organized as follows. Section 2 is an overview of psychological models of similarity.
Section 3 introduces our Fuzzy Feature Contrast model, which is the extension of one of the
psychological models from Section 2. Section 4 presents some evaluation of the model. Conclusions
are drawn in Section 5.
2 Similarity Theories
In this section, we present some results on human similarity judgment introduced by psycholo-
gists, and discuss merits and flaws of the various approaches. We try to put all these theories in
perspective, and collect them in a unified framework.
The most important concept to do so is that of geometric distance and the related distance
axioms. Theories differ in the way they deal with the properties of geometric distance, and by the
number and nature of distance axioms they accept or refuse. The next subsection discusses the
distance axioms from the perspective of similarity measurements.
2.1 The metric axioms
A number of similarity measures proposed in the literature explain similarity (or, more properly,
dissimilarity) as a distance in some suitable feature space, which is assumed to be a metric space.
(The space in which the stimuli are represented, metric or otherwise, is referred to using a number
of different names, not necessarily equivalent, from perceptual space to psychological space. We
will adhere to the generic name feature space.) A distinction is made between perceived similarity,
d, and judged similarity, δ [2]. If A and B are the representations of the stimuli a and b in the
feature space, then d(A, B) is the perceptual distance between the two, while the judged distance is

    δ(A, B) = g(d(A, B)),    (1)

g being a suitable monotonically non-decreasing function of its argument. Note that only the judged
distance δ is accessible to experimentation.
Stimuli are represented as points in a metric space, and d(A, B) is the distance function of this
space [20, 24]. This model postulates that the perceptual distance d satisfies the metric axioms,
the empirical validity of which has been experimentally challenged by several researchers.
The first requirement for a distance function is that

    d(A, A) = d(B, B)    (2)

for all stimuli (constancy of self-similarity). This hypothesis can be tested using the judged
similarity, since it implies δ(A, A) = δ(B, B). The constancy of self-similarity has been refuted by
Krumhansl [11].
A second axiom of the distance model is minimality:

    d(A, B) ≥ d(A, A);    (3)

again, this hypothesis is open to experimental investigation since, due to the monotonicity of the
relation between d and δ, it implies δ(A, B) ≥ δ(A, A). Tversky [25] argued that this assumption
is violated in some recognition experiments.
A third axiom states that the distance between stimuli is symmetrical:

    d(A, B) = d(B, A).    (4)

Just as in the previous cases, this axiom is subject to experimental investigation, since it implies
δ(A, B) = δ(B, A). A number of investigators have attacked this assumption with direct similarity
experiments [16] and by observing asymmetries in confusion matrices [17]. This phenomenon has been
often attributed to the different "saliency" or "goodness of form" of the stimuli. In general, the
less salient stimulus is more similar to the more salient (more prototypical) than the more salient
stimulus is similar to the less salient [25].
The final metric axiom is the triangle inequality:

    d(A, B) + d(B, C) ≥ d(A, C).    (5)

Epistemologically, this is the weakest axiom. The functional relation between d and δ does not
guarantee that satisfaction or violation of the triangular inequality for d will translate into a similar
property for δ.
The ordinal relation between distances is invariant with respect to all the transformations of
the type (1) if g is monotonically increasing. A consequence of this is that the triangular inequality
cannot be tested based on ordinal measurements only. It is however generally acknowledged that
at least for some types of stimuli the triangular inequality does not hold [2, 26].
Tversky and Krantz [27] proved that, if the distance axioms are verified, and the distance is
additive along straight lines in the feature space, then d is a Minkowski distance, that is, a distance
of the form:

    d(A, B) = ( Σ_i |A_i − B_i|^p )^{1/p},    (6)

where A = {A_1, …, A_N}, B = {B_1, …, B_N}, and p > 0 is a constant which characterizes the
distance function.
From these notes, it seems like the situation for geometric models is quite desperate: of the four
basic axioms of the distance function, two are questionable, one is untenable, and the fourth is not
ascertainable.
In spite of these problems, metric models are widely used in psychology, with some adjustments
to account for the failure of the distance axioms.
2.1.1 The debatable Euclidean nature of Perception
In a very influential 1950 paper [3], Fred Attneave investigated the perception of similarity among
a group of rectangles that were allowed to change along two dimensions: area and tilt. The results
were inconsistent with the Euclidean model of distance, but partial agreement was found with a
city-block distance model of the type

    d(A, B) = |A_1 − B_1| + |A_2 − B_2|,    (7)

where the two dimensions of the feature space represent area and tilt angle. Attneave found some
discrepancy in the predictions of the model, which he attributed to nonlinearities in the feature
space.
An important class of metric models was introduced by Thurstone [23] and Shepard [21].
Shepard's model is based on generalization data (2): given a series of stimuli S_i and a corresponding
series of learned responses R_i, the similarity between S_i and S_j (in the absence of any bias) is
related to the probability that the stimulus S_i elicits the response associated with stimulus S_j.
Shepard does not work directly with these quantities, but uses normalized and symmetric
generalization data, defined as:

    g_{ij} = ( (p_{ij} p_{ji}) / (p_{ii} p_{jj}) )^{1/2},    (8)

where p_{ij} is the probability that stimulus S_i elicits the response associated with S_j. The model
assumes that the generalization data are generated as:

    g_{ij} = g(d(S_i, S_j)),    (9)
where g is the generalization function, and d is a suitable perceptual distance between the two
stimuli.
Shepard assumed that there exists, for each type of stimulus, a suitable underlying feature space
such that (1) the function g is universal (it has the same form for all the types of stimuli), and
(2) the function d is a metric. Note that, without the second requirement, the condition can be
trivially satisfied for any monotonically decreasing function g.
(2) The term generalization is used here in a slightly different way than in most Machine Learning papers. In ML,
generalization means usually a correct inference whereby the response appropriate in a given situation is extended to
cover other situations for which that response is suitable. In Shepard's papers, generalization refers to the incorrect
extension of a response from the stimulus for which it was intended to other similar stimuli.
If we assume that the function g is monotonic, then from the generalization data g ij it is possible
to derive the ordering of the stimuli in the perceptual space with respect to any arbitrary reference.
Shepard uses ordering data, and nonmetric multidimensional scaling [24] to determine the lowest
dimensional metric space that can explain the data. He assumes this space as the feature space
for the model. There is good agreement with the experimental data if the feature space has a
Minkowski metric (defined in (6)), and the generalization function is exponential:
One important observation, at the core of Shepard's 1987 paper [21] is that, given the right
feature space, the function g is universal that is, the same exponential behavior (with different
values of the parameter - in (10)) can be found in the most diverse situations, ranging from visual
stimuli to similarity of pitch in sounds.
A relevant qualitative characteristic of the model is that, as two stimuli grow apart in the feature
space, the dissimilarity 1 − g(d) does not increase indefinitely, but flattens out to a finite limit.
A detailed discussion of the properties of the Thurstone-Shepard model can be found in [7].
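A few lines of Python make this saturation concrete (our illustration; the parameter
values below are arbitrary, not from the experimental literature):

import math

def minkowski(a, b, p=2.0):
    # Minkowski distance (6) between two feature vectors.
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def dissimilarity(a, b, p=2.0, lam=1.0):
    # 1 - g(d) with the exponential generalization function (10).
    return 1.0 - math.exp(-lam * minkowski(a, b, p))

# the dissimilarity flattens out: about 0.632, 0.982, 1.000 for d = 1, 4, 10
print([round(dissimilarity([0.0], [float(d)]), 3) for d in (1, 4, 10)])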
2.2 Abandoning the distance axioms
The distance axioms seem to provide an unnecessarily rigid system of properties for similarity
measures. In particular, it seems epistemologically futile to impose on the perceptual distance d
some properties, like the triangle inequality, that may fail to translate into similar properties of
the judged similarity δ and are therefore beyond experimental validation. We propose the following
definition regarding the epistemologically valid properties for perceptual distance functions:
Definition: Let D be the class of monotonically increasing functions from ℝ to ℝ. A logic
predicate P over the distance functions d is an ordinal property if, for all g ∈ D, P(d) holds if
and only if P(g ∘ d) holds.

Tversky and Gati [26] identified three ordinal properties, and used them to replace the metric
axioms in what they call a monotone proximity structure. Suppose, for the sake of simplicity,
that the feature space has two dimensions x and y, and let d(x_1 y_1, x_2 y_2) be the perceived
distance between the stimuli (x_1, y_1) and (x_2, y_2). A monotone proximity structure is characterized
by three properties:
Dominance: d(x_1 y_1, x_2 y_2) > d(x_1 y_1, x_2 y_1) and d(x_1 y_1, x_2 y_2) > d(x_1 y_1, x_1 y_2);
i.e. the two dimensional dissimilarity exceeds both one dimensional projections of that distance.
Consistency: for all x_1, x_2, x_3, x_4 and y_1, y_2, y_3, y_4,

    d(x_1 y_1, x_2 y_1) > d(x_3 y_1, x_4 y_1) ⟺ d(x_1 y_2, x_2 y_2) > d(x_3 y_2, x_4 y_2)

and

    d(x_1 y_1, x_1 y_2) > d(x_1 y_3, x_1 y_4) ⟺ d(x_2 y_1, x_2 y_2) > d(x_2 y_3, x_2 y_4);

that is, the ordinal relation between dissimilarities along one dimension is independent of the
other coordinate.
To introduce the third property, we give the following definition:
x_2 is said to be between x_1 and x_3 (written x_1|x_2|x_3) if d(x_1 y, x_3 y) > d(x_1 y, x_2 y) and d(x_1 y, x_3 y) > d(x_2 y, x_3 y).
Note that, in view of consistency, "betweenness" is well defined since it is independent of the
coordinate y that appears in the definition.
The third property of a monotone proximity structure is the following:
Transitivity: If x_1|x_2|x_3 and x_2|x_3|x_4, then x_1|x_2|x_4 and x_1|x_3|x_4.
This framework is more general than the geometric distance: while all distance measures have
dominance, consistency, and transitivity, not all the proximity structures satisfy the distance ax-
ioms. Dominance is a weak form of the triangle inequality that applies along the coordinate axes.
Consistency ensures that certain ordinal properties related to the ordering of the features x do not
change when y is changed (see [18] for details.) Transitivity ensures that the "in between" relation
behaves as in the metric model, at least when moving along the axes of the feature space.
Note that in the Euclidean model, which is isotropic, every property holds (or does not hold)
for a series of collinear points irrespective of the direction of the line that joins them. In measuring
the perceptual distance, the directions of the feature axes have a special status.
Most of the distance measures proposed in the literature, as well as the feature contrast model,
predict that dominance, consistency, and transitivity hold.
To help discriminate among the different models, Tversky and Gati proposed a fourth ordinal
axiom, that they call the corner inequality. If x_1|x_2|x_3 and y_1|y_2|y_3, the corner inequality holds if

    d(x_1 y_1, x_3 y_1) ≥ d(x_1 y_1, x_2 y_2)  and  d(x_3 y_1, x_3 y_3) > d(x_2 y_2, x_3 y_3)    (13)

or

    d(x_1 y_1, x_3 y_1) > d(x_1 y_1, x_2 y_2)  and  d(x_3 y_1, x_3 y_3) ≥ d(x_2 y_2, x_3 y_3).    (14)
From Fig. 1 it is easy to see that the corner inequality holds if the "corner" path from x_1 y_1 to x_3 y_3
is longer than the diagonal path. Minkowski metrics satisfy the corner inequality, so observed
violations of the corner inequality would falsify models based on Minkowski metrics. Tversky and
Gati present evidence that, under certain conditions, experiments show violations of the corner
inequality, thus seemingly invalidating most geometric models of similarity.
2.2.1 Set-Theoretic Similarity
In a 1977 paper [25], Amos Tversky proposed his famous feature contrast model. Instead of considering
stimuli as points in a metric space, Tversky characterized them as sets of binary features.
In other words, a stimulus a is characterized by the set A of features that the stimulus possesses.
Equivalently, a feature set is the set of logic predicates which are true for the stimulus in question.
Let a, b be two stimuli, A, B the respective sets of features, and s(a, b) a measure of the similarity
between a and b. Tversky's theory is based on the following assumptions:
Figure 1: The corner inequality; the "corner" path from x_1 y_1 through x_3 y_1 to x_3 y_3 is longer
than the path through x_2 y_2, which is inside the rectangle.
Matching: s(a, b) = F(A ∩ B, A − B, B − A), for some function F of three set arguments.

Monotonicity: s(a, b) ≥ s(a, c) whenever A ∩ C ⊆ A ∩ B, A − B ⊆ A − C, and B − A ⊆ C − A.
A function that satisfies matching and monotonicity is called a matching function. Let the
expression F(X, Y, Z) be defined whenever there are A, B such that X = A ∩ B, Y = A − B,
and Z = B − A. The pairs of stimuli (a, b) and (c, d) are said to agree on one (two, three)
components whenever one (resp. two, three) of the following hold:

    A ∩ B = C ∩ D,    A − B = C − D,    B − A = D − C.
Based on these definitions, Tversky postulates a third property of the similarity measure:
Independence: Suppose the pairs (a, b) and (c, d), as well as the pairs (a', b') and (c', d'), agree on
the same two components, while the pairs (a, b) and (a', b'), as well as the pairs (c, d) and (c', d'),
agree on the remaining (third) component. Then:

    s(a, b) ≥ s(c, d) ⟺ s(a', b') ≥ s(c', d').
We refer to [25] for details. An example of independence is in Fig. 2. In this case, the independence
property states that if (a, b) are "closer" than (c, d), then (a', b') are closer than (c', d'). This
hypothesis, with some caveats about the selection of features, can be checked experimentally.
The main result of Tversky's paper is the following representation theorem:
Figure 2: An example of independence. If a and b are considered more similar than a' and b', then
c and d will appear more similar than c' and d'.
Theorem 1: Let s be a similarity function for which matching, monotonicity and independence
hold. Then there are a similarity function S, a non-negative function f, and two constants
α, β ≥ 0 such that

    S(a, b) = f(A ∩ B) − α f(A − B) − β f(B − A)

and S(a, b) ≥ S(c, d) if and only if s(a, b) ≥ s(c, d).

This result implies that any similarity ordering that satisfies matching, monotonicity and independence
can be obtained using a linear combination (contrast) of a function of the common
features f(A ∩ B) and of the distinctive features f(A − B) and f(B − A). This representation is called
the contrast model.
This model can account for violation of all the geometric distance axioms. In particular, S(a, b)
is asymmetric if α ≠ β. If S(a, b) is the answer to the question "how is a similar to b?" then, when
making the comparison, subjects focus more on the features of a (the subject) than on those of
b (the referent). This corresponds to the use of Tversky's measure with α > β: in this case the
model predicts S(a, b) > S(b, a) whenever f(B − A) > f(A − B);
this implies that the direction of the asymmetry is determined by the relative "salience" of the
stimuli: if b is more salient than a, then a is more similar to b than vice versa. In other words,
the variant is more similar to the prototype than the prototype to the variant, a phenomenon that
has been confirmed experimentally. In addition, the feature contrast model accounts for violation
of the corner inequality.
3 Fuzzy Set-theoretic Measures
Tversky's experiments showed that the feature-contrast model has a number of desirable properties;
most noticeably, it explains violations of symmetry and of the corner inequality.
One serious problem for the adoption of the feature-contrast model in visual information systems
is its characterization of features. In Tversky's theory, each stimulus is characterized by the presence
or absence of features. This convention forces Tversky to adopt complex mechanisms for the
representation of numerical quantities. For instance, positive quantities-such as a length-are
discretized into a sequence l i and represented as a collection of feature sets such that if l 1
. Quantities that can be either positive or negative are represented by
even more complex constructions.
In computer vision, the assumption of binary features would leave us with the problem of
evaluating logic predicates based on some continuous and noisy measurements, yielding brittle and
unreliable features.
In the next subsection we introduce the use of fuzzy predicates in the feature contrast model.
The use of fuzzy logic will allow us to extend Tversky's results to situations in which modeling by
enumeration of features is impossible or problematic.
Not all the stimuli influence similarity perception according to the same mechanism [2]. Tver-
sky's feature contrast model applies to a particular type of features: those that can be expressed as
predicates over the stimuli domain. In this section we will consider only this type of features. A
unification of all types of stimuli in a geometric framework can be found in [19].
3.1 Fuzzy feature contrast model
Consider a typical task in computer vision: assessing the similarity between faces. A face is
characterized by a number of features of different types but, for the following discussion, we will
only consider geometric features like the size of the mouth, the shape of the chin, and so on.
A predicate like "the mouth of this person is wide" can be modeled as a fuzzy predicate whose
truth is based on the measurement of the width of the mouth. For instance, we can measure the
width of the mouth x in Fig. 3.a and use two truth functions (see below) like those in Fig. 3.b to
determine the truth value of the predicates "the mouth is wide" and "the mouth is narrow."
"x is small"01
(a) (b)
"x is big"
x/a
x/a
a
x
Figure
3: Determination of the truth value of the predicates "the mouth is wide" and "the mouth
is narrow." The width of the mouth x is measured and normalized with respect to the distance
between the eyes a. Then, two membership functions are used to determine the truth value of the
two predicates.
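A Python sketch of such membership functions (ours; the ramp shape and the thresholds
are illustrative assumptions, not values taken from the paper):

def mu_wide(x, a, lo=0.3, hi=0.6):
    # Truth of "the mouth is wide" from the mouth width x normalized by
    # the eye distance a; lo and hi are arbitrary example thresholds.
    r = x / a
    if r <= lo:
        return 0.0
    if r >= hi:
        return 1.0
    return (r - lo) / (hi - lo)

def mu_narrow(x, a, lo=0.3, hi=0.6):
    # Truth of "the mouth is narrow" as the complementary ramp.
    return 1.0 - mu_wide(x, a, lo, hi)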
In general, we have an image I and a number of measurements φ_i on the image. We want to use
these measurements to assess the truth of p fuzzy predicates. Some care must be taken to define
the truth value of a fuzzy predicate. We use the following definition:
Definition: Let Ω be a set, and φ = (φ_1, …, φ_m) a set of measurements on the elements of Ω.
Let P_ω be a predicate about the element ω ∈ Ω. The truth of the predicate P_ω is a function
μ(φ(ω)) ∈ [0, 1] of the measurements φ(ω).
In the example above, for instance, we say that the truth value of the predicate "The mouth of
X is wide" depends on measurements of the face (viz. the measurement of the mouth width.)
From the measurements φ we derive the truth values of p fuzzy predicates, and collect them
into a vector:

    μ(φ) = (μ_1(φ), …, μ_p(φ)).

We call μ(φ) the (fuzzy) set of true predicates on the measurements φ. The set is fuzzy in that a
predicate belongs to μ(φ) to the extent μ_j(φ).
In order to apply the feature contrast model to the fuzzy sets μ(φ) and μ(ψ) of the predicates
true for the measurements φ and ψ, we need to choose a suitable salience function f, and compute
the fuzzy sets μ(φ) ∩ μ(ψ), μ(φ) − μ(ψ), and μ(ψ) − μ(φ).
We assume that the saliency of a fuzzy set is given by its cardinality:

    f(μ(φ)) = Σ_j μ_j(φ).

The intersection of the sets μ(φ) and μ(ψ) is defined in the traditional way:

    (μ(φ) ∩ μ(ψ))_j = min(μ_j(φ), μ_j(ψ)).

The difference between two sets A and B is traditionally defined as μ_{A−B} = min{μ_A, 1 − μ_B}. This
definition, however, leads to some undesired effects [18] that can be avoided by requiring that the
relation f(A − B) = f(A) − f(A ∩ B) continue to hold in our fuzzy domain. A possible definition that makes the
relation true is:

    μ_{A−B} = max{μ_A − μ_B, 0}.
With these definitions, we can write Tversky's similarity function between two fuzzy sets
μ(φ) and μ(ψ) corresponding to measurements made on two images as:

    S(φ, ψ) = f(μ(φ) ∩ μ(ψ)) − α f(μ(φ) − μ(ψ)) − β f(μ(ψ) − μ(φ)).    (23)

The Tversky dissimilarity is defined as

    d(φ, ψ) = 1 − S(φ, ψ).    (24)
We refer to the model defined by eq. (23) and (24) as the Fuzzy Feature Contrast (FFC) model.
It is easy to see that the fuzzy feature contrast model can be asymmetric (if α ≠ β). It is also
easy to find an example of violation of the corner inequality: with a suitable choice of the stimuli
of Fig. 1 and of the membership functions in the FFC model, a direct computation of the
dissimilarities (24) shows that both condition (13) and condition (14) fail. Thus, the corner
inequality is violated for β < 1.
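A minimal Python sketch of the FFC computation (ours), assuming the min-intersection,
the bounded difference, and the cardinality salience defined above; the vectors and
weights below are arbitrary examples:

def ffc_similarity(mu_a, mu_b, alpha=0.5, beta=0.5):
    # Fuzzy Feature Contrast similarity (23) between two vectors of
    # predicate truth values; cardinality (a sum) is the salience f.
    inter = sum(min(p, q) for p, q in zip(mu_a, mu_b))            # f(A ∩ B)
    a_minus_b = sum(max(p - q, 0.0) for p, q in zip(mu_a, mu_b))  # f(A − B)
    b_minus_a = sum(max(q - p, 0.0) for p, q in zip(mu_a, mu_b))  # f(B − A)
    return inter - alpha * a_minus_b - beta * b_minus_a

# asymmetry: with alpha != beta, S(a, b) differs from S(b, a)
a, b = [1.0, 0.2, 0.7], [0.4, 0.9, 0.7]
print(ffc_similarity(a, b, alpha=0.8, beta=0.2),
      ffc_similarity(b, a, alpha=0.8, beta=0.2))

With α ≠ β the two calls return different values, exhibiting the asymmetry discussed above.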
A property similar to the representation theorem can be proven for the fuzzy case. Let y ∈ ℝ^q
and u ∈ ℝ, and let y^{iu} denote the vector obtained from y by replacing its i-th component with u.
Then the following theorem holds:

Theorem 2: Let F : ℝ^q → ℝ be an analytic function such that the following properties hold:

1. F(x, y, z) is monotonically nondecreasing in x and monotonically nonincreasing in y and
z. The partial derivatives of F are nonzero almost everywhere.

2. For all components i, the ordering that F induces on the i-th component is independent of the
values of the remaining components.

3. For all s, the sets {t | F(t) ≥ s} are closed in the product topology of ℝ^q.

4. F(y^{iu}) = F(y^{ju}) for all y and u, whenever the components i and j belong to the same
argument block.

Then there are a monotonically increasing function G, a function f, and constants α, β ≥ 0 such that

    F(x, y, z) = G( Σ_i f(x_i) − α Σ_i f(y_i) − β Σ_i f(z_i) ).
Proof (sketch)
By theorem 3 in [5], continuity and conditions 1-3 guarantee that F can be written as

    F(x, y, z) = V(Σ_i f_i(x_i) + Σ_i g_i(y_i) + Σ_i h_i(z_i))    (29)

with V monotonically increasing. Because of monotonicity, V is irrelevant for ordinal properties, and F can be replaced by the sum

    F̃(x, y, z) = Σ_i f_i(x_i) + Σ_i g_i(y_i) + Σ_i h_i(z_i).

Property 4 (which is analogous to Tversky's [25]) implies that, for i ≠ j, the derivatives of f_i and f_j satisfy a relation (30) at suitable pairs of points. By the monotonicity properties of F, the derivatives of f_i and f_j have either the same sign or opposite signs for all the values for which they are nonzero. Assume, without loss of generality, that they have the same sign. Also, since the derivatives are nonzero almost everywhere, for almost all x_1 and almost all y_1 it is possible to find x_2 and y_2 for which (30) holds. Considering two sequences y_1^{(n)} and y_2^{(n)}, (30) implies, in the limit n → ∞, f_i' = f_j' almost everywhere. By fixing y_1, this implies that f_i' = f_j' almost everywhere, so, by continuity, f_i = f_j + c_i (note that the form (29) implies that, if condition 4 holds for one u, it holds for all u). The constants c_i can be collected together and eliminated, since they are irrelevant for ordering. ∎
The complete proof of this theorem can be found in [18].
3.2 Feature Dependencies
Our translation of Tversky's measure suffers from a serious drawback: it considers all the features
as independent. For instance, in our model the truth of the statement "the mouth is wide" depends
only on the width of the mouth, and not on the other measurements. This independence property
is easily proved to be false for human perception. For instance, in the famous visual illusion of
Figure 4: A proof that the truth of a fuzzy predicate can depend on measures of quantities different from the subject of the predicate: in this case, the truth of the predicate "the line is long" must be different in the two cases, since the predicate "line A is longer than line B" has a truth value different from zero. Yet, the length of the two lines is the same. Therefore, the truth of the predicate depends on other measures.
Fig. 4, the line (a) appears longer than the line (b), although measurement reveals that the two
have the same length. This has important consequences for our fuzzy definition.
Let us assume that the truth of the predicate "line A is longer than line B" is given by a
fuzzy inference rule like
If line A is long and line B is short, then line A is longer than line B.
We will use the following fuzzy implication rule: if we have a predicate "X is A," with a truth value μ_A(X), and a predicate "Y is B," with a truth value μ_B(Y), then the truth value of the implication "if X is A then Y is B" is a function of μ_A(X) and μ_B(Y), nonincreasing in the former and nondecreasing in the latter.
Let μ_→ be the truth value of the predicate "line A is longer than line B," and μ_A, μ_B be the truth values of the predicates "line A is long" and "line B is long," respectively. Since the predicate "line A is longer than line B" is perceived as true, we have μ_→ > 1/2; the implication is valid, and together these yield an inequality relating μ_A and μ_B. This relation must be true for all the values of μ_A. In particular, the effect is strong when line A is judged neither "long" nor "short," that is, when μ_A = 1/2. In this case, for the inequality to be true, we must have μ_B < 1/2; that is, line B is perceived as shorter than line A.
This fact cannot be explained if the arguments of μ_A and μ_B are simply the lengths of the respective lines. But since the length of a line can be judged when the line is presented in isolation, the values μ_A and μ_B must be completely determined by the lengths of the respective lines.
We assume that the truth of each predicate is not affected by the truth of other predicates, but
the way the predicates interact is: if two predicates tend to be true together, they reinforce each
other. This model applies to the following situation: imagine you know the length of the segment
(a) in Fig. 4 (possibly its length relative to the whole figure); then you can express a judgment on
whether the predicate "segment (a) is long" is true. This judgment does not depend on the other
features in the image and, if x is the length of the segment, it has truth value μ_a(x).
However, when the whole image is perceived, the length of the segment is perceived differently
depending on the presence or absence of other features (like the existence of outwardly pointing
diagonal segments.) We postulate that, although the truth of the predicate "the horizontal line is
long" is still the same, the measure of the set of true features is changed because of the interaction
between different predicates.
The latter model can be defined mathematically by replacing the function f in the definition
of the fuzzy feature contrast similarity with a fuzzy integral defined over a suitable fuzzy measure.
We use a Choquet Integral, and a fuzzy measure that models the interaction between the different
predicates [15].
Definition 4 Let X = {x_1, ..., x_n} be a finite universe. A fuzzy measure m is a set function m : P(X) → [0, 1] such that m(∅) = 0, m(X) = 1, and for all subsets A and B of X, A ⊆ B ⟹ m(A) ≤ m(B). P(X) indicates the power set of X, that is, the set of all subsets of X.

Definition 5 Let m be a fuzzy measure on X. The discrete Choquet integral of a function f : X → [0, ∞) with respect to m is defined as:

    ∫ f dm = Σ_{i=1}^{n} [f(x_{(i)}) − f(x_{(i−1)})] m(A_{(i)}),

where the notation (i) means that the indices have been permuted so that 0 ≤ f(x_{(1)}) ≤ ... ≤ f(x_{(n)}), f(x_{(0)}) is defined to be 0, and A_{(i)} = {x_{(i)}, ..., x_{(n)}}.
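A short Python sketch of this definition may clarify the bookkeeping (representing the measure as a function on index sets is our own choice):

    def choquet(f, m):
        # Discrete Choquet integral of f = (f(x_1), ..., f(x_n)) with
        # respect to a fuzzy measure m, given on frozensets of indices.
        n = len(f)
        order = sorted(range(n), key=lambda i: f[i])  # ascending f values
        total, prev = 0.0, 0.0
        for pos, i in enumerate(order):
            A = frozenset(order[pos:])                # A_(i) = {x_(i), ..., x_(n)}
            total += (f[i] - prev) * m(A)
            prev = f[i]
        return total

    mu = [0.2, 0.9, 0.5, 0.4]
    m_additive = lambda A: len(A) / len(mu)   # m(A) = |A| / n
    print(choquet(mu, m_additive))            # 0.5 = (sum of mu) / n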
Let X = {x_1, ..., x_n} be the universe of fuzzy predicates, where φ is a measurement vector and μ_i the truth function for the i-th predicate. Also, let f be the identity function, f(x_i) = μ_i(φ). Let us suppose, for ease of notation, that the predicates are already ordered so that μ_1(φ) ≤ μ_2(φ) ≤ ... ≤ μ_n(φ), and let us define the dummy predicate μ_0 that is always false, i.e., μ_0(φ) = 0.
Lemma 1 The fuzzy cardinality of the set of true predicates is equal to n times the Choquet integral of the identity function when m is additive and equidistributed (i.e., sets of the same cardinality have equal measure).
Proof:
Since m is equidistributed, we have m({x_i, ..., x_n}) = (n − i + 1)/n because of additivity; therefore, the Choquet integral can be written as:

    ∫ μ dm = Σ_{i=1}^{n} [μ_i(φ) − μ_{i−1}(φ)] (n − i + 1)/n = (1/n) Σ_{i=1}^{n} μ_i(φ),

which, since μ_0(φ) = 0 by definition, is the desired result. ∎
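The lemma is easy to verify numerically; the following check (ours, not part of the paper) writes out the integral exactly as in the proof:

    import random

    def choquet_additive(mu):
        # Choquet integral under the additive equidistributed measure
        # m(A) = |A| / n, as expanded in the proof of Lemma 1.
        n = len(mu)
        total, prev = 0.0, 0.0
        for i, x in enumerate(sorted(mu), start=1):  # mu_(1) <= ... <= mu_(n)
            total += (x - prev) * (n - i + 1) / n
            prev = x
        return total

    mu = [random.random() for _ in range(7)]
    assert abs(len(mu) * choquet_additive(mu) - sum(mu)) < 1e-9
    print("n * integral equals the fuzzy cardinality, as Lemma 1 claims")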
Thus, when the measure is additive and equidistributed, the Choquet integral reduces (up to the factor n) to the cardinality of the fuzzy set, which is the salience function we used in (23). To see how we can use a
non-additive measure to model dependence between predicates, suppose that all the predicates are
independent except for μ_{n−1} and μ_n. Assume that the fact that μ_n is true increases the possibility
that μ_{n−1} be also true. Referring to Fig. 4, the two predicates might be:

    P_1 = "the diagonal lines point strongly outward."
    P_2 = "the horizontal line is long."
What is the effect of this dependency on the fuzzy measure? Since the perception of the outwardly pointing diagonal lines increases the perception of the length of the line, the predicate P_2 is, in a sense, "more true" due to the truth of P_1. In terms of the fuzzy measure, we can say that it is:

    m({x_{n−1}, x_n}) = m({x_{n−1}}) + m({x_n}) + γ,

where γ > 0 is a coefficient that models the dependence between the two predicates. Consider an equidistributed measure:

    m({x_i}) = 1/n,

with the dependency γ between x_{n−1} and x_n. Also, suppose that all the other measures are additive, that is,

    m({x_{i_1}, ..., x_{i_k}}) = k/n

if either x_{n−1} or x_n does not belong to {x_{i_1}, ..., x_{i_k}}, and

    m({x_{i_1}, ..., x_{i_k}}) = k/n + γ

if they both do.
When we compute the Choquet integral, we order the predicates by their truth value. Suppose that the value μ(x_{n−1}) is the h-th in the ordering, and that μ(x_n) is the k-th, with h < k. In this case, in the Choquet integral, there will be h of the sets A_{(i)} that contain both x_{n−1} and x_n; therefore:

    ∫ μ dm = (1/n) Σ_{i=1}^{n} μ_i(φ) + γ μ_{(h)}(φ).
In the following, we will assume a fuzzy measure of the form:

    m({x_{i_1}, ..., x_{i_k}}) = (k/n) Π_{u<v} (1 + γ_{i_u i_v}).    (47)

The coefficients γ_{ij} uniquely characterize the measure, and must be determined experimentally.
The γ parameters must let the measure satisfy the three requirements of definition 4. In particular, the measure of a set must be greater than or equal to the measure of all its subsets. Let us consider, without loss of generality, the two sets A = {x_{i_1}, ..., x_{i_k}} and B = A ∪ {x_{i_{k+1}}}. Then we have:

    m(A) = (k/n) Π_{u<v≤k} (1 + γ_{i_u i_v}),
    m(B) = ((k+1)/n) Π_{u<v≤k+1} (1 + γ_{i_u i_v}).

By definition of fuzzy measure, it must be m(B) ≥ m(A) and, therefore,

    ((k+1)/k) Π_{u≤k} (1 + γ_{i_u i_{k+1}}) ≥ 1.

From this relation it is possible to derive the relation between γ_A = Π_{u<v≤k} (1 + γ_{i_u i_v}) and the analogous product γ_B for B:

    γ_B ≥ (k/(k+1)) γ_A.

In the case of an equidistributed measure (m({x_i}) = 1/n), this must hold whenever A ⊆ B.
Given a fuzzy measure m that takes into account the dependence among features, we can define the Tversky similarity as:

    S(φ, ψ) = ∫ (μ(φ) ∩ μ(ψ)) dm − α ∫ (μ(φ) − μ(ψ)) dm − β ∫ (μ(ψ) − μ(φ)) dm.
Both this measure and (23) reduce to the usual Tversky similarity if the features are binary
and the measure is additive and equidistributed.
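Putting the pieces together, the sketch below evaluates this similarity with a Choquet integral over an interaction measure of the product form above; the interacting pair and the value of γ are illustrative assumptions, and for γ > 0 the measure of large sets may exceed 1 unless renormalized:

    from itertools import combinations

    GAMMA = {frozenset((3, 4)): 0.25}  # assumed interaction between predicates 3 and 4

    def m_inter(A, n):
        # Product-form measure: additive part |A|/n inflated by (1 + gamma_ij)
        # for every interacting pair contained in A.
        prod = 1.0
        for pair in combinations(sorted(A), 2):
            prod *= 1.0 + GAMMA.get(frozenset(pair), 0.0)
        return (len(A) / n) * prod

    def choquet(f, m):
        order = sorted(range(len(f)), key=lambda i: f[i])
        total, prev = 0.0, 0.0
        for pos, i in enumerate(order):
            total += (f[i] - prev) * m(frozenset(order[pos:]), len(f))
            prev = f[i]
        return total

    def similarity(u, v, alpha=0.5, beta=0.5):
        # Tversky contrast with Choquet integrals replacing cardinalities.
        com = [min(a, b) for a, b in zip(u, v)]
        d_uv = [min(a, 1 - b) for a, b in zip(u, v)]
        d_vu = [min(b, 1 - a) for a, b in zip(u, v)]
        return (choquet(com, m_inter) - alpha * choquet(d_uv, m_inter)
                - beta * choquet(d_vu, m_inter))

    print(similarity([0.9, 0.2, 0.7, 0.6, 0.8], [0.8, 0.4, 0.1, 0.5, 0.9]))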
4 Comparison of Similarity Measures

In this section, we present a comparison between some of the similarity measures introduced so far. We will consider the Euclidean distance, the Attneave city-block distance, the Thurstone-Shepard model, and the Fuzzy Feature Contrast model.
4.1 Similarity of Faces
In this experiment, we use the similarity measures to characterize the similarity between face-like
stimuli. Similarity of faces is a complex issue that depends on a number of factors, like the color
and the shape of the hair, the texture of the skin, the geometry of the face components, and so on.
In this experiment, we have chosen a simplified approach, and we will determine similarity based
only on geometric measurements. The features are computed on simple image sketches like those
in Fig. 5. Our set consisted of ten such sketches. The reason for using these sketches rather than
full face images is the poverty of our feature set. Face images contain very important clues that
are not characterized by our geometric features (hair and skin color, etc.) These features tend to
bias the human judgment of faces, so it is impossible to compare the results of human judgment with those of geometric features in these conditions. Since we are evaluating similarity measures
and not features, and since the geometric features that we use are powerful enough to characterize
the face sketches that we use, we believe that in this case the simplification is epistemologically
justified.
Figure 5: Three face sketches used in our face similarity experiment.
Figure 6: These 5 measures (labeled a through e) are taken from a face image to provide support to the fuzzy predicates used for the similarity assessment.
4.1.1 Distance Measures
The geometric measurements we derive from a face image are described in Fig. 6. All the measurements
are normalized by dividing them by the distance between the eyes. These measurements
provide support for the 5 predicates of Tab. 1 (see also [4] for the rationale behind this choice.)
The predicates can be collected in a set of features, and used to compute Tversky similarity. The
    Predicate      Supporting quantity
    Long face      a
    Long chin      b
    Large mouth    c
    Long nose      d
    Large face     e

Table 1: Predicates used for similarity evaluation, and measured quantities that support their truth. All these measures are normalized with respect to the distance between the eyes.
FFC similarity model uses the truth value of the predicates, while metric distances are based on
the geometric measurements.
4.1.2 Method
The experiment was organized as follows. We selected 4 subjects with no knowledge of our activity
in similarity measures. Each subject was asked to rank 9 of the sketches (like those in Fig. 5) for
similarity with respect to the 10th (the "query" sketch.) The query sketch was chosen at random,
and each subject was asked to give a total of three rankings with respect to three different query
sketches. Each subject was also asked to divide the ranked images in three groups: the first group
consisted of faces judged "very similar" to the query, the second group consisted of faces judged
"not very similar" to the query, and the third of "completely different" faces. The reason for this
classification will be clear in the following. Whenever possible (for 2 subjects out of 4), the subject
was asked to repeat the experiment with the same query sketches after two weeks, to check for
stability.
The ordering given by any subject was compared with the orders obtained on the same sketch by
the Euclidean distance, the Attneave distance, the Thurstone-Shepard distance, and two versions of
the FFC distance: one without feature interaction and one with feature interaction. We compared
the orderings using the weighted displacement measure proposed in [6].
Assume that we have a query q which operates on a database of n images. We consider the
ordering given by the human subject as the "ground truth." Let L g be this ordering.
In addition, we have a measure of relevance 0 ≤ S(I, q) ≤ 1 such that, in the true order, the images appear by decreasing relevance. In our case, we use the categorization given by the subject as a relevance measure, and set S(I, q) = 0.8 for images "very similar" to the query, an intermediate value for images "not very similar" to the query, and a low value for images "completely different." Because of imperfections, the database does not give us the true ordering L_g, but an order L_d = (I_{d_1}, ..., I_{d_n}), where d is a permutation of {1, ..., n}. The displacement of I_i in L_d is defined
Table 2: Average (μ) and variance (σ²) of the weighted displacement for the 5 measures considered. A: Attneave. E: Euclid. TS: Thurstone-Shepard. FFC1: Fuzzy Feature Contrast without feature interaction. FFC2: Fuzzy Feature Contrast with feature interaction.
Table 3: The F ratio for the pairwise comparison of the similarity measures.
as d_i = |g_i − d_i|, the absolute difference between the positions of I_i in L_g and in L_d. The relative weighted displacement of L_d is defined as

    W_q = (1/c) Σ_{i=1}^{n} S(I_i, q) d_i,

where c is a normalization factor. W_q is zero if L_d = L_g.
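A small sketch of this measure (the normalization c = n²/2 is our assumption about the form used in [6]):

    def weighted_displacement(L_g, L_d, S):
        # L_g: ground-truth ordering (most similar first); L_d: system ordering;
        # S: dict mapping image id -> relevance S(I, q) in [0, 1].
        n = len(L_g)
        pos_g = {img: i for i, img in enumerate(L_g)}
        pos_d = {img: i for i, img in enumerate(L_d)}
        c = n * n / 2.0  # assumed normalization factor
        return sum(S[img] * abs(pos_g[img] - pos_d[img]) for img in L_g) / c

    truth = ["a", "b", "c", "d"]
    system = ["b", "a", "c", "d"]
    rel = {"a": 0.8, "b": 0.8, "c": 0.5, "d": 0.2}
    print(weighted_displacement(truth, system, rel))  # 0.2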
The results relative to the first subject were used to adjust the parameters of the distances.
For the Thurstone-Shepard model, the best results were obtained when the underlying Minkowski
metric had exponent 2. Since this coincides with the Euclidean distance, we decided not to optimize the Thurstone-Shepard model, but to use the exponent 0.3 as a contrast to the other metric models. For the FFC models, the best results were obtained with a suitable setting of α and β. We also introduced an interaction between the features "long face" and "large mouth," with a coefficient γ̃ fitted on the same data.
4.1.3 Results
The results relative to the other three subjects were used for comparison. For every ranking
provided by a subject, the ordering relative to the same query sketch was obtained using each of
the 5 similarity measures and the weighted displacement was computed. The results were then
averaged. Table 2 shows the average and the variance for the 5 similarity measures. In order to
establish whether the differences are significant, we performed an analysis of variance, with a hypothesis acceptance level of 5%.³ For the whole ensemble we obtained an F value which leads to the conclusion that the differences are indeed significant. In order to establish which differences are significant, we computed the F ratio for each pair of distances. The results are shown in Table 3. The difference between two measures should be considered significant if the F value at the intersection of the respective row and column is greater than 4.75 (for the determination of this value, see [10].)
³ Given the null hypothesis "all the measures provide the same result," this means that we are accepting a 5% chance of rejecting the null hypothesis when it is in fact true. A 5% level is the norm in psychology and the behavioral sciences.
Table 4: The values for the pairwise ω² calculations. The quantity ω² measures the fraction of the variance that is due to actual differences in the experimental conditions, rather than random variations between the subjects. Most of the values are around 0.5 or greater, indicating a strong dependence of the variance on actual differences between the similarity measures.
The ω² statistic (the fraction of the variance due to actual differences among the measures) gives the results in Table 4. The result of the comparison between the two feature contrast measures is not as strong as the difference between these and the other measures, although the value of ω² still indicates a significant effect.
This experiment is of course not conclusive, and it represents only a first step in the evaluation of the similarity measures, for several reasons. First, due to a number of constraints, it was possible to check only two of our subjects for stability. Since for both these subjects the ordering was found stable (weighted displacement less than 0.02), we extrapolated to the other subjects. More importantly, we did not accurately determine the influence of the parameters on the evaluation, although partial results seem to indicate that the performance is relatively stable in the presence of changes. On the other hand, the relatively small number of subjects is not a serious problem in this case since, due to the high value of ω², the sensitivity of the experiment is around 0.8, which is considered an acceptable value [10].
4.2 Similarity of Textures
In this section, we consider the determination of similarity of texture images. Texture identification
is an important problem in computer vision which has received considerable attention (see, for
instance, [13, 14].) In this experiment, we are concerned with texture similarity: given a texture
sample, find similar samples in a database.
We used 100 images from the MIT VisTex texture database [28]. The database contains images
extracted from different classes of textures, like bark, bricks, fabric, flowers, and so on. Textures
were characterized using the Gabor features introduced in [14]. These features work on graylevel im-
ages, so color was disregarded for the whole experiment (e.g. human subjects were shown graylevel
versions of the texture images.) Also, based on the results of the previous experiment, we tested
only the Euclidean and the Fuzzy Feature Contrast metrics.
4.2.1 Distance Measurement
Manjunath and Ma's features [14] are collected in a vector of 60 elements. If we measure the Euclidean distance between two raw vectors, we could encounter scale problems: features that have an inherently larger scale would be predominant. This is especially a problem for the Euclidean distance, since FFC normalizes all the features in [0, 1] via the membership function. In order to provide a more objective comparison, we tried two types of Euclidean measure: normalized and not normalized. Let x^i = (x^i_1, ..., x^i_{60}) be the feature vector of the i-th image. We compute the componentwise averages

    m_j = (1/N) Σ_{i=1}^{N} x^i_j

and the componentwise standard deviations

    σ_j = sqrt( (1/N) Σ_{i=1}^{N} (x^i_j − m_j)² ).

With these definitions, the scaled Euclidean distance is defined as:

    d(x^i, x^k) = sqrt( Σ_{j=1}^{60} ((x^i_j − x^k_j)/σ_j)² ).
Experimentally, the two distances gave similar results, the scaled distance being slightly better than
the unscaled. In the rest of this section we will only consider the scaled Euclidean distance (from
now on we will just call it the Euclidean distance for the sake of brevity.) The distance measure
for FFC is given by (24) with a suitable membership function applied to the Gabor features.
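For reference, a sketch of the scaled distance (the toy database below is hypothetical):

    import math

    def component_std(vectors):
        # Componentwise standard deviations over the database.
        N, dim = len(vectors), len(vectors[0])
        mean = [sum(v[j] for v in vectors) / N for j in range(dim)]
        return [math.sqrt(sum((v[j] - mean[j]) ** 2 for v in vectors) / N)
                for j in range(dim)]

    def scaled_euclidean(x, y, sigma):
        return math.sqrt(sum(((a - b) / s) ** 2 for a, b, s in zip(x, y, sigma)))

    db = [[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]]   # three 2-D "feature vectors"
    sigma = component_std(db)
    print(scaled_euclidean(db[0], db[1], sigma))   # about 2.74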
4.2.2 Method
Due to the substantially larger size of the database in this example, it was impractical to use the same method as in the previous example. While it is feasible to ask a subject to order 10 sketches with respect to a stimulus, it is infeasible to ask them to rank 100 texture images. Therefore, we followed a different procedure. For a given experiment, we selected one query image, x_q, ordered the database using both the Euclidean and the FFC measures, and, for each measure, collected the ten images closest to the query. Let A_E and A_T be the sets of the ten images closest to the query using the Euclidean and the FFC measures respectively. We then considered the set A = A_E ∪ A_T of the images returned by either of the queries. In our case this set contained between 12 and 20 images, depending on the number of images common to the two queries.
The set A was presented to our subjects, asking them to rank the images with respect to
the query. We then took the first 10 images ranked by the subjects and compared them with
the ordering obtained by the two similarity measures using the same measure as in the previous
experiment.
Figure 7: Similarity results for one of the textures in the database. Orderings obtained by the FFC distance, the Euclidean distance, and a human subject.
Table 5: Average (μ) and variance (σ²) of the weighted displacement for the 2 measures considered. E: Euclid. FFC: Fuzzy Feature Contrast.
Note that with this technique it is impossible to provide an absolute measure of the performance
of a certain similarity measure with respect to human performance. This is because our subjects
don't see the whole database. There might be images in the database that a person would judge
very similar to the query but, if both our distance measures miss them, the subject will never see
them. The only result that this technique can give is a measurement of the relative performance of two similarity measures.
Fig. (7) shows a sample experiment. The first row contains the top 10 images returned by the
FFC distance. The second row contains the top 10 images returned by the Euclidean distance. All
images contained in the first two rows were shown to one of our subjects and she was asked to rank
them. The results are shown in the third row of Fig. 7.
4.2.3 Results
The average value (μ) and the variance (σ²) of the weighted displacement measure for the Euclidean and the FFC distances are reported in Table 5. This experiment gave us an F value which implies that the difference is statistically significant at the 5% level; the value of ω², conventionally, means that the effect of the distance measure is "large" (a significant portion of the variance is due to the distance measure and not to subject variation.)
5 Conclusions
In visual information systems it is important to define exactly the operation of similarity assessment.
While matching is defined essentially on logic grounds, the definition of similarity assessment must
have a strong psychological component. Whenever a person interrogates a data repository asking
for something close, or related, or similar to a certain description or to a sample, there is always the implicit assumption that the similarity at stake is perceptual similarity. If our systems have to respond in an "intuitive" and "intelligent" manner, they must use a similarity model resembling the human one.
One problem with the psychological view is that often we don't have mathematical or computational
models that can be applied to artificial domains. In this paper we have explored the
psychological theories that are closer in spirit to the needs of computer scientists.
Most of the similarity theories proposed in the literature reject some or all of the geometric distance axioms. The most troublesome axiom is the triangle inequality, but other properties, like symmetry and the constancy of self-similarity, have been challenged. Also, nonlinearities enter the similarity judgment both at the feature level (Fechner's law [12]) and during similarity measurement.
One of the most successful models of similarity is Tversky's feature contrast which, incidentally, is also the most radical in its refusal of the distance axioms. In this paper we have used fuzzy logic to extend the field of applicability of the model. Also, the use of fuzzy logic allows us to model
the interference between the features upon which the similarity is based. By interference we mean
that the judged truth of a property, like the fact that a line is long, does not depend only on the
measured length of the line, but also on the relationships between the line and the other elements
in the image. We have shown that it is possible to model this interference using a suitable fuzzy
measure.
An important problem that we could not address in this paper is the determination of the
parameters of the similarity measure. The parameters α and β in (23), the constants γ in (47),
and the parameters of the membership function influence the similarity measure. This topic is
considered in [18].
Acknowledgments
The authors gratefully acknowledge the anonymous reviewers for the many helpful comments and
criticism on earlier drafts of the paper.
References
Toward a unified theory of similarity and recognition.
Dimensions of similarity.
Perception and the Representative Design of Psychological Experiments.
Topological methods in cardinal utility theory.
Benchmarking multimedia databases.
A multidimensional stochastic theory of similarity.
Fundamentals of Digital Image Processing.
Machine Vision.
Design and Analysis.
Concerning the applicability of geometric models to similarity data: The interrelationship between similarity and spatial density.
The Wave Theory of Difference and Similarity.
Texture features for browsing and retrieval of image data.
Modeling of natural objects including fuzziness and application to image understanding.
Cognitive reference points.
A measure of stimulus similarity and errors in some paired-associate learning tasks
The use of psychological similarity measure for queries in image databases.
Similarity is a geometer.
The analysis of proximities: Multidimensional scaling with unknown distance function.
Toward a universal law of generalization for psychological science.
Color indexing.
A law of comparative judgement.
Multidimensional scaling of similarity.
Features of similarity.
The dimensional representation and the metric structure of similarity data.
Web Page.
Efficient Admission Control Algorithms for Multimedia Servers

Abstract. In this paper, we have proposed efficient admission control algorithms for multimedia storage servers that are providers of variable-bit-rate media streams. The proposed schemes are based on a slicing technique and use aggressive methods for admission control. We have developed two types of admission control schemes: Future-Max (FM) and Interval Estimation (IE). The FM algorithm uses the maximum bandwidth requirement of the future to estimate the bandwidth requirement. The IE algorithm defines a class of admission control schemes that use a combination of the maximum and average bandwidths within each interval to estimate the bandwidth requirement of the interval. The performance evaluations done through simulations show that the server utilization is improved by using the FM and IE algorithms. Furthermore, the quality of service is also improved by using the FM and IE algorithms. Several results depicting the trade-off between the implementation complexity, the desired accuracy, the number of accepted requests, and the quality of service are presented.

1 Introduction
Recent developments in computer systems and high speed networks have propelled
the research on multimedia systems. A multimedia system requires the integration of
communication, storage, retrieval, and presentation mechanisms for diverse data types
including text, images, audio, and video to provide a single unified information system.
The potential applications of multimedia systems span into domains such as computer-aided
design, education, entertainment, information systems, and medical imaging. An
efficient support mechanism for such a diverse class of applications requires a suitable storage
server connected to the clients through high speed networks [1]. Given a network set-up, the
architecture and organization of the storage server has a significant impact on the service of
multimedia clients. The design issues associated with the multimedia storage servers (MSS)
differ from those associated with the services that support traditional textual and numeric
data because of the difference in the characteristics of multimedia streams. A multimedia
stream consists of a sequence of media quanta, such as audio samples and video frames,
which convey meaning only when played continuously in time unlike the traditional textual
streams [2].
An MSS should ensure that the retrieval of media streams occurs at their real-time rate
[3]. As the total bandwidth from the storage devices attached to the server via network to
the clients is fixed, an MSS can only support a limited number of clients simultaneously [4].
Hence, before admitting a new request, an MSS must ensure that the real-time retrieval processes of the existing clients (the streams that are currently being served) are not violated.
The checking of this constraint and determination of the acceptance/rejection of a new request
is done through the admission control algorithm employed in the MSS. The admission
control algorithm checks if the available bandwidth is sufficient for the total bandwidth required
by the streams currently being retrieved plus the bandwidth requirement of the new
request. If it is sufficient, the server can accept the new request. Otherwise, the admission
of the new request may introduce distortions or jitters in the audio or video quality
[5, 6, 7, 8, 9, 10]. However, disturbances due to minor discontinuity of real-time playback
may not be perceivable and, in some cases, may be acceptable at a lower cost to the clients. Based
on the required Quality of Service (QoS), the admission control algorithm decides whether
or not to accept the new request.
Several admission control schemes have been proposed in the literature. Detailed descriptions
of some of these schemes are presented in Section 2. The goal of the admission
control schemes is to maximize the server utilization (by admitting as many requests as
possible) while satisfying the QoS requirements. However, these two requirements are conflicting
in nature. Most of the previously proposed schemes tend to sacrifice one in favor
of the other. In this paper, we propose a set of aggressive admission control algorithms
that maximize the server utilization as well as provide high QoS. Traditional admission
control algorithms use statistical data of each stream, such as maximum consumption rate,
average consumption rate, and distribution of consumption rate [2, 5, 6, 8, 9, 10, 11]. In
the proposed approach, a complete profile of the media streams is computed while they
are stored. The profile includes the consumption rate or the bandwidth requirements of
the media stream. This information can be used by the server to reserve bandwidth and
facilitate admission control. However, to reduce the computational overheads, the profiling
is done by slicing the media streams into equal-sized time intervals. Granularity of these
intervals affects the performance of the admission control schemes.
Two different types of admission control schemes, namely, Future-Max (FM) and Interval
Estimation (IE) are developed in this work. In the FM algorithm, the maximum bandwidth
requirement in future for a stream is used as its estimated bandwidth. For the family of
IE algorithms, a combination of the maximum and average bandwidths is used for the
bandwidth estimation. Different combinations of the maximum and average bandwidths
result in different admission control schemes and yield different performance. The proposed
admission control schemes are evaluated through simulation experiments. The performance
improvement obtained using the FM algorithm increases up to a certain accuracy level and
remains constant thereafter. With the IE algorithms, the performance improvement is
almost linear. It is also observed that for a fixed number of clients, the QoS of the media
streams improves with respect to the accuracy level. Several other performance results
have been illustrated to demonstrate the validity of the proposed algorithm. The trade-off
evaluation between the performance and implementation complexity is also reported.
Based on the desired accuracy, QoS, and implementation simplicity, a suitable admission
control scheme can be adopted from the family of algorithms proposed in this paper.
The rest of the paper is organized as follows. In Section 2, we review the requirements of admission control policies, and the advantages and disadvantages of the previously proposed
admission control policies. In Section 3, we present the FM and IE algorithms and discuss
the issues related to their implementation. In Section 4, we present the simulation results
and discussions followed by the concluding remarks in Section 5.
2 Preliminaries
In this section, we classify and discuss the previously proposed admission control
schemes. The characteristics of the media streams are also analyzed. The requirements
of a good admission control scheme along with the limitations of the previously proposed
schemes are also reported.
2.1 Classification of Admission Control Schemes
The admission control schemes proposed in the literature can be classified into four
categories as follows [4].
• Deterministic: All the real time requirements are guaranteed to be met. With these
algorithms, the server uses the worst case consumption rate to reserve bandwidth
[2, 5, 6, 8, 9, 10]. The media streams have a perfect QoS while using the deterministic
algorithm. Because of the pessimistic approach used for bandwidth reservation, the
server utilization is usually low.
• Predictive: The server does the admission control based on the bandwidth requirement information
measured during the last few time periods. The immediate past bandwidth
requirements plus the average bandwidth requirement of the newly requested stream
is assumed as an estimation for the future bandwidth requirement [12]. Although
there is no guaranteed QoS, the server will accept a request for a media stream only
if it predicts that all deadlines will be met satisfactorily in the future. It is observed
through experimentation that the QoS does not degrade noticeably compared to the
deterministic case.
• Statistical: The statistical admission control algorithms use probabilistic estimates to
provide a guaranteed QoS to the accepted requests. The server considers the statistical
behavior of the system in order to provide such guarantees [11]. One example is
the use of average bandwidth as an estimation of the bandwidth requirement of the
media streams in future.
• Best-effort: Normally, in a real-time environment, the server provides best-effort
service to non real-time requests. Thus the admission control algorithm does not
guarantee any deadline to be met. The server will accept a request regardless of its
bandwidth requirement and does its best to serve. The QoS is usually low in this
case, but the server utilization is high.
2.2 Characteristics of Multimedia Streams
The input data of an admission control algorithm includes the data associated with
the server and the data associated with the stream. The data related to server refer to
the total available bandwidth that can be supported by the system configuration. The
media stream data refers to the stream bandwidth requirements which is determined by
the consumption rate of the streams. The admission control algorithms use these two kinds
of data to decide whether or not to admit a new media stream request. The total available
bandwidth is a function of the hardware parameters and the disk scheduling method. The
stream bandwidth requirements, although vary with respect to time, are fixed after the
stream is stored.
If a media stream is a Constant Bit Rate (CBR) stream, its bandwidth requirement for the worst
case and the average case will be the same. Due to the high bandwidth requirement of
video and audio streams, it is not cost-effective to store and transmit them in their original
formats. Usually, some kind of compression method is used to reduce their bandwidth
requirements. The compressed streams are Variable Bit Rate (VBR) streams. Average
bandwidth requirements of various CBR and VBR streams are shown in Table 1 [13]. The
main characteristic of the VBR stream is that the consumption rate changes with time due
to the different compression ratios of different segments of a stream. The worst case requires
the maximum bandwidth which corresponds to the maximum consumption rate and the
lowest compression ratio. The average case corresponds to the data related to the average
consumption rate which relates to the average compression ratio. Since video/audio CBR
streams require huge bandwidth, most of the media streams are stored as VBR streams.
In this paper, we propose admission control scheme for MSSs that store and serve VBR
media streams.
2.3 Requirements of Admission Control Schemes
The main function of an admission control algorithm is to reserve bandwidth corresponding
to the requirements of a media stream at the admission time to guarantee the
required QoS during playback. If the server can reserve the bandwidth for a request stream
successfully, it accepts the request. Otherwise, it rejects the request. The total available
    Media Type                                      Data Rate
    Voice quality audio (8 bit samples at 8 kHz)    64 Kbits/sec
    MPEG encoded audio, compressed VBR              384 Kbits/sec
      (equivalent to CD quality)
    CD quality audio (16 bit samples at 44.1 kHz)   1.4 Mbits/sec
    MPEG-2 encoded video, compressed VBR            0.42 MBytes/sec
    NTSC quality video                              27 MBytes/sec
    HDTV quality video                              81 MBytes/sec

Table 1: Bandwidth Requirement for Typical Digital Multimedia Streams.
bandwidth is fixed. The more the bandwidth reserved for a specific stream, the less is the
number of streams that a server can support simultaneously. Streams may have different
bandwidth reservations due to the difference in the desired QoS or bandwidth estimation.
The server utilization of the deterministic admission control algorithms is much lower
than the predictive or the statistical algorithms [14]. As a result, a server cannot support
more streams in the deterministic case compared to the predictive or the statistical
schemes. This is because of the fact that the total number of streams that the server can
support simultaneously is proportional to the server utilization. The difference between
the maximum and average consumption rate or the actual consumption rate degrades the
server utilization for a given media stream. If the server uses the deterministic control
policy [2, 5, 6, 8, 9, 10] to do the admission, then the maximum consumption rate is used
for bandwidth reservation. However, a stream is not always in the state of its maximum
consumption rate. For the non-peak periods, the bandwidth requirement of a stream is
well below the bandwidth reserved for it. During these periods, the server utilization is
low. The predictive or the statistical control policies use the consumption rate that is observed
during a few past scheduling rounds [12], or corresponding to the distribution of the
consumption rate [11]. Although these schemes have the possibility of missing deadlines
of a stream as opposed to the deterministic scheme, they can support more streams than
the deterministic approach. The server utilization is thus higher than the deterministic
algorithm. In Figure 1, we show the typical bandwidth requirements of a VBR stream and
compare it to the bandwidth reserved in deterministic, predictive and a statistical algorithm
(based on averaging). The graph explains the reason why the deterministic algorithm
Figure 1: Reserved Bandwidth for Different Admission Policies. The plot shows a stream's actual bandwidth requirement together with the predictive, statistical, and deterministic bandwidth reservations over time.
has low server utilization. We introduce the notion of Estimation Error (EE) that defines the absolute difference between the estimated bandwidth, BE(t) (that is reserved), and the actual bandwidth, A(t). Assuming T as the total time of playback for a stream, EE can be expressed as

    EE = ∫_0^T |BE(t) − A(t)| dt.

The EE due to the over-estimation of the bandwidth requirement is given as

    EE⁺ = ∫_0^T (BE(t) − A(t))⁺ dt.

The EE due to the under-estimation of the bandwidth requirement is given as

    EE⁻ = ∫_0^T (A(t) − BE(t))⁺ dt,

where

    (x)⁺ = max{x, 0}.

The EE can then be computed as

    EE = EE⁺ + EE⁻.

If the EE⁺ of an admission algorithm is much higher, its server utilization is much less than 1. EE⁻ is related to the guaranteed QoS. If EE⁻ is high, the guaranteed QoS is low. It can be observed that EE⁺_deterministic ≥ EE⁺_predictive and EE⁺_deterministic ≥ EE⁺_statistical. Hence, the deterministic algorithms result in less server utilization than the predictive or the statistical algorithms. As EE⁻_deterministic = 0, the deterministic algorithms provide the highest QoS.
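These quantities are straightforward to compute from a sampled bandwidth profile; the sketch below (ours, with a made-up profile) contrasts the deterministic and statistical estimates:

    def estimation_errors(A, BE, dt=1.0):
        # A, BE: actual and estimated bandwidth, sampled every dt time units.
        ee_plus = sum(max(be - a, 0.0) for a, be in zip(A, BE)) * dt
        ee_minus = sum(max(a - be, 0.0) for a, be in zip(A, BE)) * dt
        return ee_plus, ee_minus, ee_plus + ee_minus

    A = [3, 5, 9, 4, 2, 6]                  # actual consumption rate
    BE_det = [max(A)] * len(A)              # deterministic: worst case throughout
    BE_stat = [sum(A) / len(A)] * len(A)    # statistical: average throughout
    print(estimation_errors(A, BE_det))     # EE- = 0, but a large EE+
    print(estimation_errors(A, BE_stat))    # smaller EE+, nonzero EE-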
A good admission control algorithm should guarantee that the server meets the deadlines
with respect to the specified QoS requirement. Furthermore, it should result in high
server utilization which, in turn, enables the server to support a larger number of streams simultaneously. A good admission control algorithm should have small EE⁺ and EE⁻. In other words, it should have a small EE. Thus, a good admission control algorithm should
accurately model the system including the configuration, the scheduling method, and the
characteristics of the stream.
The disadvantage of current admission control algorithms [2, 5, 6, 8, 9, 10, 11, 12] is that
they only use a few statistical data to represent the server behavior and the media streams.
This method facilitates the implementation, reduces the complexity of computation, and
requires less storage space. However, the streams are not estimated accurately which
may lead to poor server utilization. For example, current deterministic admission control
algorithms [2, 5, 6, 8, 9, 10] use the maximum consumption rate of a whole stream to
make the acceptance/rejection decision. This is a global value reflecting the behavior of
the whole stream. Although the server should use the worst case of a stream and the
server to do deterministic admission, it does not necessarily mean that the worst case of
the server is when all the streams are at their consumption peak (worst case) because that
may not happen at the same time. In fact, these peaks are more likely to be distributed
uniformly. The current policies use this worst case scenario to employ admission control.
In reality, there is a very small possibility that all the streams reach their consumption
peak at the same time. Especially, when you have large number of streams being played,
this probability is negligible. The predictive policy [12] uses the observed behavior of the
system to estimate the future behavior. This method may not be an accurate model of
system, although it improves the server utilization compared to the deterministic admission
control scheme. The statistical policy [11] is complicated than the other two. It uses the
distribution of bandwidth requirement to represent a stream. In fact, these values are
also global parameters. The drawback of using the global values of a stream is that the
local behavior of the streams are not captured. For example, the distribution of different
segments will be varied. So the distribution of the beginning may not be the same as the
end. What we need is the local distribution of each stream at the same time points. These
time points are not relative to the beginning of each stream. They refer to the time points
or the snapshots at which the server plays back the media streams.
Inaccurate modeling of the server and/or the streams will degrade the performance of an
admission control algorithm. The server should be modeled to obtain the total bandwidth
limitation with respect to the hardware configuration and the scheduling method. For
modeling the streams, we need to decrease the gap between BE(t) and A(t). In the next
section, we present reasonably accurate modeling techniques of the media streams.
3 Slicing-Based Admission Control Algorithms
The responsibilities of a multimedia server include retrieval of media streams from the
disks as well as recording of media streams on to the disks. The retrieval process is a real-time
on-line service. Recording can be treated as an off-line service of the server, where
the real-time constraints are imposed between the original recording device and the event
being recorded. In this paper, we consider the on-line service issues and thus discuss only
the retrieval process of media streams from the server.
In this paper, we do not consider the issues associated with disk scheduling algorithms.
We assume that the server has a capacity of providing certain bandwidth. This bandwidth
may be considered as the worst case bandwidth that can be provided by the disks and the
server. Using efficient disk scheduling algorithms, the available bandwidth at the server
may be improved. This improved bandwidth can be also used for the proposed algorithms.
In other words, the proposed admission control schemes are not dependent on the disk
scheduling algorithms. We just consider a fixed bandwidth that is guaranteed by the server
at any time.
3.1 Slicing Method
A server needs to reserve an estimated bandwidth for the streams already in service
before allowing a new request to be accepted. The reservation and checking of admissibility
are handled by the admission control algorithm. For the actual bandwidth requirement,
A(t), of a stream, the bandwidth estimation, denoted as BE(t), can be done in several
ways. The deterministic algorithm uses a conservative estimate by considering the worst
case scenario. The bandwidth estimation, BE_det, for a stream using the deterministic admission control is determined as

    BE_det = max_{0 ≤ t ≤ T} A(t),

where T is the total time length of the stream. A statistical algorithm may use the average bandwidth requirement as its estimated bandwidth, BE_stat, which is derived from

    BE_stat = (1/T) ∫_0^T A(t) dt.
The proposed approach relies on a closer estimation of A(t). The retrieval of a media
stream is done only after the completion of its storage. When a VBR media stream is
stored, a complete and accurate description of the rate changes could be computed. This
is the profile of bandwidth requirements of the VBR media stream. The server can use this
information during playback for admission control. However, a trade-off analysis is essential
for evaluating the increase in acceptance of the requests with respect to the additional
overheads.
We introduce a method based on slicing to obtain the estimated bandwidth requirement, BE(i), with respect to the slicing interval t_s. By using the slicing scheme, we divide [0, T]
into several small time intervals of the same size t s (see Figure 2). The maximum value of
t s can be the total time of playback of the stream (T ). The minimum value of t s is one
time unit, which is denoted as t unit . The time unit can be the time length of a scheduling
round or even smaller.
Figure 2: The Slicing Scheme. The playback duration is divided into intervals of equal length t_s: [0, t_s), [t_s, 2t_s), ..., [(n−1)t_s, T].
The smaller the slicing interval, the more accurate will be the bandwidth estimation.
With small slicing intervals, the EE reduces. However, the implementation and computation
complexity increases with the reduction of the size of the slicing intervals. Further-
more, the size reduction of intervals beyond a certain point may not have any impact on
the performance improvement. These issues are addressed along with quantitative results
in Section 4. We have expressed the granularity of intervals in terms of the accuracy level.
A 100% accuracy level corresponds to the case when t_s = t_unit; a 0% accuracy level corresponds to the case t_s = T.
The bandwidth estimation based on the slicing method can be done in two different
ways. The first method corresponds to the deterministic estimation and uses the maximum
value within an interval as the estimated bandwidth requirement for the entire interval.
The estimated bandwidth based on slicing for the ith interval, denoted as BE_smax(i), is expressed as

    BE_smax(i) = max_{i·t_s ≤ t < (i+1)·t_s} A(t).

The second method is a statistical scheme that uses the average of A(t) within an interval as the estimated bandwidth. This bandwidth based on the slicing scheme for the ith interval is denoted as BE_save(i) and is computed from

    BE_save(i) = (1/t_s) ∫_{i·t_s}^{(i+1)·t_s} A(t) dt.
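Since the profile of a stored stream is known in advance, both estimates can be computed in one pass over the samples; a sketch (ours, with a hypothetical sampled profile A):

    def slice_profile(A, ts):
        # Per-interval worst-case and average bandwidth of a stored VBR stream.
        # A: rate sampled once per time unit; ts: slice length in time units.
        be_smax = [max(A[i:i + ts]) for i in range(0, len(A), ts)]
        be_save = [sum(A[i:i + ts]) / len(A[i:i + ts])
                   for i in range(0, len(A), ts)]
        return be_smax, be_save

    A = [3, 5, 9, 4, 2, 6, 7, 1]
    print(slice_profile(A, 4))  # ([9, 7], [5.25, 4.0])
    print(slice_profile(A, 8))  # ([9], [4.625]): BE_max and BE_ave (0% accuracy)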
In Figure 3, we show a typical graph of A(t), BE_smax(i), and BE_save(i). It can be

Figure 3: Difference between A(t), BE_smax(i), and BE_save(i).
observed from the Figure that as t_s gets larger and larger, BE_smax(i) will get close to BE_max. BE_save(i) follows the same trend. For the extreme case, when t_s = T, BE_smax(i) equals BE_max and BE_save(i) is BE_ave. These estimations have the lowest accuracy level. This scenario can be expressed as

    BE_smax(i) = BE_max,   BE_save(i) = BE_ave.

The other extreme value of t_s is t_unit. In this case, within t_unit, A(t) is constant. So at this time, BE_smax(i) and BE_save(i) have the same value as A(t). These are the best estimations and can be expressed as

    BE_smax(i) = BE_save(i) = A(i · t_unit).

By using different combinations of BE_smax and BE_save and different interval sizes t_s, we can get different bandwidth estimations, BE(i). Thus BE(i) can be expressed as a function as follows:

    BE(i) = F(BE_smax(i), BE_save(i)).
Different expressions for BE(i) will result in generating different EE. A good admission
control algorithm can be obtained from an expression of BE(i) that has a low EE.
In the next two subsections, we introduce new admission control algorithms based on
the proposed slicing scheme.
3.2 Future-Max Algorithm
In this subsection, we introduce a new deterministic admission control algorithm which
is based on the future maximum bandwidth requirement. Future maximum bandwidth
refers to the maximum bandwidth required from the current time point to the end of the
playback of the media stream. We term this algorithm the Future-Max (FM) algorithm.
The concept behind the FM algorithm can be explained as follows. In the deterministic
admission control scheme, the reserved bandwidth for a stream corresponds to its maximum
bandwidth. After the playback of the portion that requires the maximum bandwidth, it
is not necessary to reserve resources corresponding to the maximum bandwidth. It is clearly beneficial to reserve according to the maximum bandwidth of the portions that have not yet been played back, rather than that of the whole stream. The FM algorithm scans through the future intervals in order
to determine the maximum bandwidth that is required in future and uses it for admission
control. The advantage of the FM algorithm can be observed from Figure 4. After the
playback of the media objects that correspond to the maximum bandwidth, there is no need to reserve bandwidth corresponding to the maximum or the worst case. Thus
beyond the maximum point, the bandwidth reservation can be reduced and performance
can be gained as illustrated in Figure 4. The time of the occurrence of the maximum
bandwidth affects the performance gain obtained through the use of the FM algorithm.
Figure 4: A(t) and BE(i) of the Deterministic and FM Algorithms.
The slicing technique described in the previous subsection can be used to implement
the FM algorithm. The bandwidth estimation of an incoming request using the FM scheme
is denoted as BE_FM(i), and is expressed as

BE_FM(i) = max_{j ≥ i} BE_smax(j).
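Under this reconstruction, BE_FM is simply a running maximum taken backwards over the per-interval deterministic estimates; the sketch below (with hypothetical names) makes the non-increasing shape explicit.

```python
# Future-max estimate: BE_FM(i) = max over j >= i of BE_smax(j),
# computed in a single backward pass over the per-interval maxima
# (non-negative bandwidth values are assumed).

def future_max(be_smax):
    be_fm = [0.0] * len(be_smax)
    running = 0.0
    for i in range(len(be_smax) - 1, -1, -1):
        running = max(running, be_smax[i])
        be_fm[i] = running
    return be_fm

print(future_max([3, 7, 5, 4, 6, 2]))  # [7, 7, 6, 6, 6, 2]
```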
Let l sac be the interval at which a new request arrives and the admission control scheme
examines whether it can be admitted or not. Let K be the number of streams currently
being served. The bandwidth requirements of the K streams are estimated as BE^k(i), k = 1, ..., K. The starting intervals of these streams could be different and are represented as l^k_start. Let the estimated bandwidth of the new request be BE^new(i), and let B be the total bandwidth of the server. A boolean function γ can be defined as

γ(l) = 1 if Σ_{k=1}^{K} BE^k(l − l^k_start) + BE^new(l − l_sac) ≤ B, and γ(l) = 0 otherwise, for l_sac < l ≤ l_end,

where l_end is the time at which the playback of the new stream is expected to end. The acceptance or rejection decision of the admission control algorithm is based on the following expression:

Accept, if γ(l) = 1 for all l, l_sac < l ≤ l_end;  Reject, if ∃ l s.t. γ(l) = 0.
If the decision is "accept", the server will start service at the interval l sac + 1.
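A minimal sketch of this test, assuming the server tracks a per-interval aggregate reservation (the queue of Section 3.4) and a fixed server bandwidth B; the names reserved and admit are hypothetical.

```python
# Admission test: accept the new request only if, in every interval l up
# to the end of its playback, the aggregate reserved bandwidth plus the
# new stream's estimate stays within the server bandwidth B (gamma(l) = 1).

def admit(reserved, be_new, B):
    for l, demand in enumerate(be_new):
        already = reserved[l] if l < len(reserved) else 0.0
        if already + demand > B:      # gamma(l) = 0: reject
            return False
    reserved.extend([0.0] * (len(be_new) - len(reserved)))
    for l, demand in enumerate(be_new):
        reserved[l] += demand         # fold the accepted stream in
    return True

queue = [5.0, 5.0, 3.0]
print(admit(queue, [2.0, 2.0, 2.0, 2.0], B=8.0), queue)
```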
The differences between the deterministic and the FM algorithm can be elaborated as
follows. For the deterministic algorithm, the estimated required bandwidth is equal to BE^new_max, which is a constant. For the FM algorithm, the estimated required bandwidth is a non-increasing function. Note that BE^new_FM(i) at the time of admission control is equal to BE^new_max, as in the case of the deterministic scheme. This value is used for making the acceptance/rejection decision. Once a request is accepted, it only reserves BE^new_FM(i), which is less than or equal to the BE^new_max used for reservation in the deterministic case. If the maximum bandwidth (the worst case) occurs at the interval δ, then BE_FM(i) < BE_max for every interval i > δ, and the performance improvement in terms of the EE corresponds to the accumulated gap between the constant BE_max and the non-increasing BE_FM(i) over the intervals beyond δ.
3.3 Interval Estimation Algorithms
In this subsection, we propose a family of admission control policies based on the bandwidth estimations for each of the sliced intervals. The family of Interval Estimation (IE) algorithms uses these estimations to decide whether or not to accept a new request. The estimations within the intervals could be deterministic, statistical, or a combination of the two. A general expression for the bandwidth estimation of the ith sample using the IE algorithms, BE_IE(i), is given as

BE_IE(i) = α · BE_smax(i) + β · BE_save(i), where α + β = 1 and 0 ≤ α, β ≤ 1.
The values of α and β can be varied to obtain a family of admission control schemes. The extreme values of α, β, t_s and their corresponding BE_IE are listed in Table 2. For the deterministic admission control scheme, α = 1 and β = 0, and the sampling time period equals the whole length of the stream (t_s = T). The statistical admission control scheme based on only the average bandwidth requirement refers to the case when α = 0 and β = 1. The most accurate IE algorithm can be obtained by setting t_s = t_unit.
The shaded portion in Figure 5 shows the region where BE_IE will lie for different values of α and β. This shaded portion is bounded by the curves BE_smax(i) and BE_save(i), which correspond to the cases (α, β) = (1, 0) and (α, β) = (0, 1), respectively. So the relationship among BE_IE(i), BE_smax(i) and BE_save(i) is given as

BE_save(i) ≤ BE_IE(i) ≤ BE_smax(i).

Table 2: Typical Values of α, β, t_s, and Their Corresponding BE_IE.
Figure 5: BE_IE(i) and Its Relation to BE_smax(i) and BE_save(i).
In Figure 5, when t_s gets smaller, the corresponding BE_smax(i) and BE_save(i) become closer to A(t). Since they are the upper and lower bounds of BE_IE(i), BE_IE(i) will also be closer to A(t). In the extreme case, when t_s = t_unit, BE_smax(i) and BE_save(i) are equal to A(t), which also equals BE_IE(i). For this case, the estimation error (EE) of BE_IE(i) is 0. This scenario reflects the best estimate of the bandwidth requirement. So the best bandwidth requirement estimation BE_IE(i), where t_s = t_unit, is the optimal BE(i),
which results in the highest server utilization and provides the highest QoS. An intuitive
explanation of this statement is that when BE(i) is in its best case, we get the exact
bandwidth requirement at any given time point and use it to check and reserve bandwidth
of the server.
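The whole IE family reduces to a convex combination of the two sliced estimates; the sketch below assumes α + β = 1, which is exactly what keeps BE_IE(i) between BE_save(i) and BE_smax(i).

```python
# Interval Estimation: alpha = 1, beta = 0 reproduces the deterministic
# per-interval estimate; alpha = 0, beta = 1 the average-based one.

def be_ie(be_smax, be_save, alpha, beta):
    assert abs(alpha + beta - 1.0) < 1e-9
    return [alpha * mx + beta * av for mx, av in zip(be_smax, be_save)]

print(be_ie([9, 2], [4.0, 1.75], alpha=0.5, beta=0.5))  # [6.5, 1.875]
```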
3.4 Implementation Complexity
In order to improve the server utilization in terms of the number of accepted requests for
media streams, we need to model the streams as accurately as possible such that the EE is
small. However, an accurate model may need large storage space and may incur high computational
complexity. In this subsection, we analyze these implementation complexities
and outline the trade-off between accuracy and complexity.
To analyze the storage space requirement, let us consider a video media stream as an
example. A typical 90 mins movie occupies 1GB to 3GB storage space using the MPEG-2
encoding scheme. A 100% accuracy level may require storing the bandwidth requirement of each and every video frame. A video stream typically has 30 frames per second, so a 90-minute stream contains about 90 × 60 × 30 = 162,000 frames. If we use a 4-byte floating point value to store each of these sliced bandwidths, the maximum amount of extra storage required for a typical video stream will be about 162,000 × 4 bytes, i.e., roughly 650 KB.
Compared to the storage requirement of a single movie, even the worst case storage
requirement (100% accuracy level) is not too high. So the storage space requirement is not
a problem with the proposed slicing-based admission control schemes.
To reduce the computational complexity, the server can use a queue to store the sum
of the bandwidth requirements of all the accepted streams for all the slicing intervals.
The bandwidth requirements are stored in such a way that the head of the queue has
the total bandwidth requirement for the next interval. The next element of the queue
stores the bandwidth requirement of the interval after the next one, and so on. At the
time of admission control, the elements from the head of the queue are removed until the
bandwidth requirement at the next interval is found. For the FM algorithm, the BE FM (0)
of the new stream that is requested is added to the head of the queue and is examined for
acceptance or rejection as discussed earlier in Section 3.2. This takes O(1) time. If the
new request is admitted, the BE FM (i)'s are added to the corresponding elements of the
queue for bandwidth reservation. This operation takes O(L) time, where L is the number
of sliced intervals of the requested media stream. In the case of the IE algorithms, the admission control algorithm needs to start examining the queue elements from the head of the
queue until the end of the media stream that is being requested. The equations derived in
Section 3.3 are used to make the admission control decision. These operations require O(L)
computation. If the request is accepted, the bandwidth requirement of the new stream for
each of the sliced intervals are added to the corresponding elements of the queue. This
operation needs an additional O(L) computations.
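The reservation queue described above can be captured in a few lines; this is only a sketch (class and method names are illustrative), with the head of the queue holding the aggregate reservation for the next interval.

```python
from collections import deque

class ReservationQueue:
    """Element l holds the total bandwidth reserved by all accepted
    streams for the l-th upcoming interval; the head is the next one."""

    def __init__(self):
        self.q = deque()

    def advance(self):
        """Drop the head once the current interval has been played."""
        if self.q:
            self.q.popleft()

    def try_admit(self, be_new, B):
        """Check the new stream's L per-interval estimates against the
        capacity B and, on acceptance, add them to the queue; both steps
        touch each of the L intervals once."""
        while len(self.q) < len(be_new):
            self.q.append(0.0)
        if any(have + need > B for have, need in zip(self.q, be_new)):
            return False
        for l, need in enumerate(be_new):
            self.q[l] += need
        return True

rq = ReservationQueue()
print(rq.try_admit([2.0, 2.0, 2.0], B=8.0))  # True
```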
The storage requirement and the computational complexity are directly proportional
to the accuracy level and can be reduced by lowering the accuracy level. The accuracy
level can be lowered by increasing the interval size. Lowering the accuracy level may in
turn lower the QoS and/or the server utilization. However, the decrease in the performance measures may not be linear. These trade-offs are analyzed quantitatively in the next section. In most cases, it may not be desirable to use a very high accuracy level. Significant improvement in the number of accepted requests and the QoS can be achieved at a reasonable accuracy level for which the implementation complexity in terms of space and computation
will be affordable.
4 Experimental Evaluation
In this section, we evaluate the performance of the proposed class of admission control
schemes through simulation. The performance measures are defined first, followed by a description of the simulation environment. The results, accompanied by discussions, are
reported in detail.
4.1 Performance Measures
The performance indicators of admission control schemes include the number of requests
for media streams that can be accepted and the QoS that can be guaranteed. The number
of requests for media streams that can be accepted is also dependent on the required or
acceptable QoS. The QoS refers to the proportion of the media streams that are played
back within their deadlines. We have defined two different types of QoS. The first type
refers to the average QoS for the whole stream. We denote it as QoS ave . The second type
is the worst case for the QoS at any time point. We denote it as QoSworst . These QoS
terms can be expressed as

QoS_ave = (1/T) ∫_0^T QoS(t) dt,    QoS_worst = min_{0 ≤ t ≤ T} QoS(t),

where QoS(t) denotes the instantaneous QoS at time t.
QoS_worst corresponds to the minimum QoS that can be tolerated by a client. If a client can tolerate a degraded QoS (perhaps because of lower cost), we consider the required QoS as QoS_worst in order to ensure that the QoS never falls below the acceptable level. While using the deterministic schemes, such as FM or IE (with α = 1, β = 0), the actual bandwidth reserved for the acceptable QoS is equal to the estimated bandwidth requirement times QoS_worst. Thus, it is guaranteed that the QoS will not be worse than QoS_worst. In such cases, QoS_ave will be much higher than QoS_worst. Instead of considering the acceptable QoS as QoS_worst, if we regard it as QoS_ave, then the server utilization could be improved. However, the jitters may not be uniformly spread out and may concentrate at a few time periods, and the QoS_worst may be well below the acceptable range. Similar issues have been
addressed recently using (m,k)-firm deadlines [15].
The server utilization is measured in terms of the total number of requests for media
streams that can be supported simultaneously. The accuracy level is measured in terms of
the size of the intervals, t s . If t s is equal to t unit , the accuracy level is defined as 100%. The
case of t_s = T corresponds to 0% accuracy.
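Both measures follow directly from an instantaneous QoS trace; in this sketch, qos is a hypothetical list holding the fraction of data delivered on time in each time unit.

```python
# QoS_ave is the time average of the instantaneous QoS over the playback,
# and QoS_worst is its minimum over all time points.

def qos_measures(qos):
    return sum(qos) / len(qos), min(qos)

print(qos_measures([1.0, 0.9, 1.0, 0.8, 1.0]))  # approx. (0.94, 0.8)
```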
4.2 Simulation Model
We have implemented a time-driven simulator for the evaluation of the proposed al-
gorithms. In the simulator, we use the real-trace data of MPEG-1 frame size from the
University of Wuerzburg [16]. The frame size traces were extracted from MPEG-1 sequences
which have been encoded using the Berkeley MPEG-encoder (version 1.3) which
was adapted for motion-JPEG input. The frame sizes are in bits. The videos were captured
in motion-JPEG format from a VCR (VHS) with a SUN SPARCstation 20 and SunVideo.
The capture rate of the SunVideo video system was between 19 and 25 fps. The encoding
pattern is "IBBPBBPBBPBB" and the GOP size is 12.
Since there are 20 different traces, we assume that those VBR media streams are stored
in the server and the clients can request any one of them selected randomly. The simulator
has the following components - a stream bandwidth requirement generator, the slicing unit,
and the admission control unit.
The stream bandwidth generator is responsible for generating the bandwidth requirement
of the groups from the MPEG-1 real traces. Since a reasonable transfer unit in the MPEG-1 format is a group of pictures rather than an individual frame, the generator adds up all the frame sizes in a logical group and designates that to be the stream bandwidth requirement for that group. The generated stream bandwidth requirements are then passed to the slicing
unit. The function of the slicing unit is to store the bandwidth requirements based on the
sliced intervals t s . The admission control unit decides the acceptance/rejection of the new
requests using the equations derived in Section 3.
The clients are assumed to request a new media stream at each time unit. Thus, there
will always be a media stream request waiting for service at the admission control unit. This
is done to evaluate the effectiveness of the admission control scheme without the effect of the new-request arrival pattern. We implement the FM and IE algorithms according to the
equations derived in Section 3. The performance parameters were obtained by repeating the measurements several times and averaging the results.
4.3 Results and Observations
We present a comparative evaluation of the performance of the FM and IE algorithms.
The results are obtained with respect to the accuracy level and the variation of α and β for the class of IE algorithms.
Figure 6: Performance of the FM & IE Algorithms.
Figure 6 shows the performance improvement obtained using the FM algorithm and the
IE algorithm compared to the deterministic algorithm. It can be observed that the curve
obtained using the FM algorithm increases with the increase in the accuracy level. However,
the rate of increase diminishes with the increase in the accuracy level. Initially, with a
small increase in the accuracy level, the number of requests accepted increases noticeably.
It may not be necessary or worthwhile to implement a higher accuracy level at the cost of the added complexity, given the small improvement in the acceptance rate. The overall
performance improvement is 46%.
In Figure 6, we also show the curve corresponding to the IE algorithm with α = 1 and β = 0. Thus the bandwidth requirement is estimated as the maximum bandwidth requirement within the interval. The 0% accuracy level corresponds to the deterministic case. The optimal case is reflected by the 100% accuracy level. With the increase in accuracy level, the increase in the number of requests accepted is monotonic. The
performance improvement is about 154%. It can be inferred that a significant performance
improvement is obtained by using the IE admission control scheme.
Figure 7: Performance of the FM Algorithm for Different QoS.
Figure 8: Performance of the IE Algorithm for Different QoS.
Figures 7 and 8 show the number of streams that can be accepted for different values of guaranteed QoS_worst for the FM and IE algorithms, respectively. As
expected, the number of accepted streams decreases with the increase in the guaranteed
QoSworst . For example, at 100% accuracy level, the performance improvement of the IE
algorithm is about 11% for QoSworst = 0:9, and 25% for QoSworst = 0:8 higher than the
performance of QoSworst = 1:0. It is observed that the accepted number of streams for the
FM algorithm does not vary significantly with the increase in the accuracy level beyond a
certain point. However, in the case of the IE algorithm, there is almost a linear increase in
the number of accepted streams with the increase in the accuracy level.
In order to guarantee QoS_worst, we scale the total available bandwidth by a factor of QoS_worst.

Figure 9: Measured Average QoS_worst for the FM Algorithm.

Figure 10: Measured Average QoS_worst for the IE Algorithm.

However, because of the EE (as discussed in Section 2), the measured average
QoS_worst may be considerably higher than the guaranteed QoS_worst. This is demonstrated in Figures 9 and 10, where the measured average QoS_worst is obtained through simulation
experiments. It is observed that the measured average QoSworst is significantly higher
than the guaranteed QoSworst for the FM algorithm. This is because of the high EE
incurred by the FM algorithm. The difference between the measured QoSworst and the
guaranteed QoSworst reduces with the increase in the accuracy level for the IE algorithm.
With the increase in accuracy level, the EE decreases, thus lowering the difference between
the guaranteed QoSworst and the measured average QoSworst for the IE algorithm. Note
that at 100% accuracy level, the measured average QoSworst is identical to the guaranteed
QoSworst for the IE algorithm as the EE at this level is equal to zero.
The average QoS values were measured corresponding to the system configurations of Figures 9 and 10 and are illustrated in Figures 11 and 12 for the FM and IE algorithms, respectively. As observed in the previous experiments, the QoS_ave for the FM algorithm (Figure 11) is almost constant after an accuracy level of about 20%. However, the QoS_ave
is significantly higher than the QoSworst (both guaranteed and measured average) for lower
values of QoS_worst. Similar observations can be made for the QoS_ave of the IE algorithm shown in Figure 12. The only difference is that the QoS_ave degrades rapidly after a certain accuracy level when the guaranteed QoS_worst is less than 1.0 while employing the IE algorithm. The rapid degradation is due to the sharp increase in the number of accepted requests.
Figure 11: Measured Average QoS for the FM Algorithm.
Figure 12: Measured Average QoS for the IE Algorithm.
Figure 13: Server Utilization for Different Values of α and β.
Figure 14: QoS for Different Values of α and β.
In Figure 13, we show the variation of the number of accepted streams with respect to different values of α and β. The corresponding variation in the QoS is depicted in Figure 14. The max and average values in Figures 13 and 14 denote α and β, respectively.
For higher values of β, the number of accepted streams is high, with minimal variation. Correspondingly, the QoS values are lower, as shown in Figure 14. All the curves with different values of α and β converge at the 100% accuracy level, as shown in Figures 13 and 14. This is because at the 100% accuracy level, BE_smax(i) and BE_save(i) have the same value, which is equal to A(t). So the corresponding BE_IE(i) is the same at this level.
With the increase in accuracy level, the EE is reduced, leading to an increase in the QoS.
Thus the curves in Figure 14 mostly have a positive slope. However, it is quite interesting
to observe that for certain values of α and β, the QoS factor degrades at higher accuracy
level. At higher accuracy levels, the number of accepted requests is high. Thus there is an
increase in the total bandwidth requirement, which explains the decreasing trend of QoS.
Figure 15: Measured Worst QoS for a Fixed Server Utilization.
Figure 16: Measured Average QoS for a Fixed Server Utilization.
Next, we compare the effect of the accuracy level on the QoS. We measure the QoSworst
and QoS ave values with the same server utilization with respect to the accuracy level. The
results are illustrated in Figures 15 and 16, respectively. The server utilization is held
constant by keeping the number of accepted requests fixed. The results are shown for the IE algorithm. With the increase in accuracy level, the EE is reduced, and thus there is an increase in both types of QoS. It is observed that both QoS_worst
and QoS ave increase sharply at the low accuracy level. This trend further advocates the
inference that significant gain in performance can be achieved with a coarse-grain slicing
of the bandwidth requirements of the media streams.
5 Concluding Remarks
In this paper, we have proposed a new family of admission control algorithms. These
algorithms are based on a slicing technique and use an aggressive method to compare and
reserve the bandwidth available at the server. Two types of admission control schemes are
proposed. The first scheme, called FM algorithm, is based on the maximum bandwidth
requirement in future for the media streams. The second algorithm, called IE algorithm,
defines a class of algorithms. The IE algorithm uses a combination of the maximum and
average bandwidth requirement within each interval to estimate the bandwidth. Different
IE algorithms can be developed by varying the proportions of the maximum and average bandwidth requirements within each of the sliced intervals. The length of the slicing interval can
be varied to obtain different levels of accuracy. We have discussed the trade-off between
the accuracy level, the implementation complexity, and the performance of the admission
control algorithm.
The performance of the proposed admission control schemes is evaluated through
simulation experiments. It is observed that the performance improvement with the FM
algorithm is almost negligible beyond the 20% accuracy level. However, the performance
improvement in terms of the number of streams supported is almost linear in case of the IE
algorithms. For a fixed server utilization, the QoS of the servers improves with the increase
in accuracy level in the case of both FM and IE algorithms. Thus, an appropriate algorithm from the family of algorithms proposed here can be adopted by a server on the basis of the
required QoS and performance with respect to the implementation complexity.
References
"News On-Demand for Multimedia Networks,"
"I/O Issues in a Multimedia System,"
"Multimedia Storage Servers: A Tutorial,"
"Multimedia System Architecture,"
"A File System for Continuous Media,"
"Principles of Delay Sensitive Multimedia Data Storage and Retrieval,"
"Designing File Systems for Digital Video and Audio,"
"Streaming RAID: A Disk storage System for Video and Audio Files,"
"Designing a Multi-User HDTV Storage Server,"
"Design and Analysis of a Grouped Sweeping Scheme for Multimedia Storage Management,"
"A Statistical Admission Control Algorithm for Multimedia Servers,"
"An Observation-Based Admission Control Algorithm for Multimedia Servers,"
Multimedia Systems and Techniques
"Algorithms for Designing Large-Scale Multimedia Servers,"
"A Dynamic Priority Assignment Technique for Streams with (m,k)-Firm Deadlines,"
"Statistical Properties of MPEG Video Traffic and Their Impact on Traffic Modeling in ATM Systems,"
Keywords: interval estimation algorithm; quality of service; multimedia storage server; admission control; Future-Max algorithm
An effective admission control mechanism for variable-bit-rate video streams

Abstract: For admission control in real-time multimedia systems, buffer space, disk bandwidth and network bandwidth must be considered. The CBR-based mechanisms do not use system resources effectively, since media data is usually encoded with VBR compression techniques. We propose an admission control mechanism based on a VBR data model that has a dynamic period length. In our mechanism, the period can be adaptively changed to maximize the performance, considering both disk bandwidth and buffer space. To compare the performance, extensive simulations are conducted on RR, SCAN, and GSS schemes which have the dynamic period length and the static period length.

1 Introduction
Multimedia systems like VOD (Video-On-Demand) systems require considerable resources
and have tight real-time constraints. Multimedia data to be sent to clients must be read from
disk to memory buffers before the actual transmission, and thus the maximum number of
clients that the system can support depends upon both disk bandwidth and total buffer size.
This work was supported in part by Korea Science and Engineering Foundation Grant(95-0100-23-04-3)
Video servers must be able to support heterogeneous clients that request vastly different QoS
(Quality of Service) parameters, such as display size, resolution, and display frame-rate. In
the process of accepting a new client (Admission Control), the server should not violate the
QoS of the streams already being serviced (Active Streams). To guarantee the QoS of the
active streams, the required resources must be reserved in advance by the admission control
mechanism.
In many admission control approaches, resources are reserved based on the worst-case assumption
of a peak-data-rate CBR (Constant Bit Rate) data model [3, 4, 9]. However, since
video data objects are usually generated with VBR (Variable Bit Rate) compression tech-
niques, CBR-based approaches tend to waste resources. There are some existing admission
control mechanisms which use a VBR data model to reserve buffer space. However, they do
not consider disk bandwidth or assume CBR data retrieval from the disk [17]. There is a close
relationship between disk bandwidth and buffer space, and an admission control mechanism
should consider both simultaneously. Although some existing mechanisms do consider both
buffer space and disk bandwidth, they are based on the CBR data model [3].
Many video streams are encoded using a VBR technique, in which the display data-rate
and the state of remaining resources varies constantly. Even if resources are reserved using
VBR characteristics, fewer streams than is optimal may still be accepted when using a static
period length. To maximize the number of acceptable streams, the period length should
be changed dynamically according to the state of the remaining resources. For example, if
the available buffer space is sufficient but the available disk bandwidth is low, then a long
period would show better performance than a short one, while if the available disk bandwidth
is sufficient but the available buffer space is low, then a short period would show better
performance.
To compare the trade-offs among several disk scheduling algorithms such as RR(Round
Robin), SCAN and GSS(Grouped Sweeping Scheme), we derived the equations for the distance
of head movement in one seek for each scheduling algorithm and analyzed the buffer
requirements. In this paper, we propose a new admission control scheme, which exploits the
characteristics of VBR data and which uses a dynamic time period in scheduling.
The rest of the paper is organized as follows: Related works are presented in section 2 and
the disk latency and the buffer requirements of several disk scheduling schemes are discussed
in section 3. Our admission control algorithm using dynamic period is proposed in section
4 and our simulation results are presented in section 5. Finally, our conclusions are given in
section 6.
2 Related Work
A VOD server should support several concurrent requests and should guarantee the Quality
of Service (QoS) of each request. Accepting a new client should not degrade the QoS of other
active streams. There are two approaches to admission control, optimistic and pessimistic [18].
In a pessimistic approach, resources are reserved using the worst-case (peak-rate) assumption.
This approach uses static resource reservation, so there is little overhead and no possibility of
overflow or starvation, but fewer clients are accepted than is ideally possible. In an optimistic
approach, a new request is accepted if the total resource requirements do not exceed the
total amount of available resources. This approach accepts more users than the pessimistic
approach, but QoS can be violated in some situations.
Scheduling algorithms play an important role in VOD systems, where concurrent streams
need to be read effectively [13, 14, 15, 19]. The simplest method is Round Robin (RR).
In RR scheduling, the streams are serviced in the same order in each service cycle(period).
It appears fair, needs little overhead in scheduling, and needs less buffer space than other
methods [5]. However, the disk head is required to move in a random manner, and the seek
latency may thus become large. The best known algorithm for real-time scheduling of tasks
with deadlines is the Earliest Deadline First (EDF) algorithm. In EDF scheduling, after
one media block has been accessed from the disk, the media block with the earliest deadline
is then scheduled for retrieval next. Scheduling of the disk head based solely on the EDF
policy, however, may yield excessive seek times and rotational latency, and may thus lead
to poor utilization of server resources [1]. A third algorithm, SCAN, operates by scanning
the disk head back and forth across the surface of the disk, retrieving a requested block as
the head passes over it [6]. One variant of this basic algorithm combines SCAN with EDF,
and is referred to as the SCAN-EDF scheduling algorithm [15]. In SCAN-EDF scheduling,
the requests with the earliest deadlines are served first, but if several requests have the same
deadline or deadlines lying closely together, then their respective blocks are accessed using
the SCAN algorithm. Clearly, the effectiveness of the SCAN-EDF technique is dependent on
how many requests have the same deadline. If all of the media blocks to be retrieved during
a period are assigned the same deadline, then SCAN-EDF is effectively reduced to SCAN.
Although SCAN scheduling schemes have shorter seek times (or latency times), they need
much more buffer space. Since the first stream serviced in the current period can be the last
stream in the next period, the amount of data stored in the buffer must be sufficient to last
for two periods if starvation is to be prevented. GSS (Grouped Sweeping Scheme) scheduling
divides the streams into several groups, which are each serviced RR style, while the streams
in each group are serviced SCAN style [21]. If the number of groups is large, GSS acts like
RR, while if the number is small, it acts like SCAN.
As observed in [4], the most natural way to process multiple streams simultaneously is
to interleave the readings of the streams in a cyclic fashion. As illustrated in figure 1, each
period (cycle or round) consists of a set of working slots and latency slots.
latency
working slot
slot
latency
slot
working slot
leftover
Tperiod
Figure
1: Periodic stream service
The amount of data that must be read from the disk for each stream within a working slot
should be sufficient for the client for one period (cycle). After one stream has read sufficient
data, a new stream starts to read the data, and there must be at least one seek to switch to
another stream. The latency slot includes these seek latencies and other scheduling overhead
latencies.
Chen et al. [3] proposed a scheme to find the lower and upper bounds of the period. Sufficient
data to be sent must be in the buffer in any period to service all clients without a hiccup. As
the service period becomes longer, each stream must read more data from the disk into the
buffer. However, due to the limited size of the buffer, the period can not be longer than a
certain maximum value and this is the upper bound(say, T max ). To read all the necessary data
in a short period, high disk bandwidth is required. Due to the limit of disk bandwidth, the
period can not be shorter than some minimum value and this is the lower bound(say, T min ).
The upper/lower bound can be calculated according to the allowable overflow/starvation
probability which is given as the QoS(Quality of Service) parameter. If we service clients with
a period between the lower bound and the upper bound, then the probability of starvation
or overflow will be less than the given QoS value. To support all active streams with the
current available buffer space and disk bandwidth, the period length(T period ) must satisfy the
following inequality:

T_min ≤ T_period ≤ T_max.    (1)

This paper shows clearly that the period length is affected by both the buffer space and the
disk bandwidth. In [3], it was assumed that playback data rate(R c ) is constant (CBR data
model, based on a pessimistic approach) making it very easy to calculate the T min and T max .
But, since a pessimistic approach was used, system resource utilization was quite low.
Some buffer management schemes using VBR characteristics have been suggested. In
NOD (News On Demand) [20], the length of a stream is short and the requests are in bursts.
If each stream is serviced in a cyclic manner based on CBR, then there can be spare resources
reserved but unused. These spare resources can be used for a current stream being serviced,
shortening the service time for the current stream. A new request can then be accepted more
quickly. Basically, however, the NOD scheme uses a worst-case assumption and cannot achieve
satisfactory system utilization.
The pessimistic approach reserves buffer space according to the maximum frame size of all
of the frames during the lifetime of a stream. If we divide the video data object into several
small parts, find the largest size in each part, and then reserve the buffer using this local
maximum, system utilization can be improved. However, if we divide the video object into
parts that are too small, the calculation overhead can become very large. In [17], the video
data object is divided by the window size (say, 30 frames), the maximum size of frames in
each window is found, and the buffer is reserved using each local maximum. However, no exact assumption about disk bandwidth is made.
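The window-based reservation of [17] is straightforward to express; the sketch below (with illustrative names) computes the per-window local maxima of the frame sizes, which are then used in place of the single global maximum.

```python
# Per-window local maxima of the frame sizes: reserving by these instead
# of the single global maximum reduces over-reservation for VBR streams.

def local_maxima(frame_sizes, window=30):
    return [max(frame_sizes[i:i + window])
            for i in range(0, len(frame_sizes), window)]

frames = [10, 12, 40, 11] * 15 + [9] * 30  # a 90-frame toy trace
print(local_maxima(frames))                # [40, 40, 9]
```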
3 System Modeling
Buffer space and disk bandwidth are the most important factors that determine the number
of streams that a VOD server can accept. However, full disk bandwidth can not be used, due
to latency. Seek latency is the largest part of the latency time, and the number of cylinders
that the disk head should move in one seek is the major part of seek latency. We therefore
model the seek distance first. In our system model, we assume the network has an infinite
bandwidth.
3.1 The number of cylinders to be crossed in one seek
According to [11, 2] seek time is maximized, under a realistic function for the seek time,
for equidistant seek positions of the n requests. The seek time function itself is assumed
to be proportional to the square root of the seek distance for small distances below a disk
specific constant, and a linear function of the seek distance for longer distances, which is in
accordance with the studies of [16]. Thus, for given disk parameters, the maximum total seek
time of a sweep can be easily computed by assuming the n seek positions to be at cylinders i · c/(n + 1) (i = 1, ..., n), where c is the total number of the disk's cylinders, and applying the seek time function. This computation yields an upper bound for the seek time [8]. That is, if we use the following formula as the number of cylinders to be crossed in one seek, then there will be almost no starvation caused by the seek latency:

Number of cylinders in one seek = c / (n + 1).    (2)
To be adapted for GSS scheduling, equation ( 2) should be extended. We had better
divide the streams into groups of equal size(except for the last group) to reduce the seek time
skewness among the groups. If we assume that n streams are divided into g groups, then the number of members in a group will be round(n/g), and the number of members in the last group will be n − (g − 1) · round(n/g).
In a GSS scheme, SCAN scheduling is used for all streams in a group, so by (2) we get the following formula:

Number of cylinders in one seek = c / (round(n/g) + 1).    (3)

Since g is the same as n in RR scheduling, the number of cylinders in one seek will be c/2. Kiessling et al. [7] showed that the expectation (or average) of the number of cylinders to be crossed in one seek is c/3 in the RR scheduling scheme. If we use c/3 instead of c/2 in modeling the seek distance, then we may utilize disk bandwidth more effectively. However, since the probability of starvation would increase much more, we used c/2 instead of c/3.

3.2 Comparison of the buffer usage factor for various scheduling schemes
In RR scheme, the order in which streams are serviced does not change from one period to
another. Thus, we need to load data sufficient for only one period(say T). But in SCAN
scheme, the order of service depends on the physical positions of the logical blocks. In the
worst case, the stream serviced first in the current period can be serviced last in the next
period. Since we do not know the order in advance, the total amount of data able to be read
into the buffer should be sufficient for 2 periods (say 2T ) to guarantee problem-free playback.
This means that 50% of the buffer may be wasted in the SCAN scheme, as compared to the
RR scheme. In the GSS scheme, every stream in one group is serviced by SCAN scheduling,
but each group is serviced by the RR scheme. If the number of groups is g (g ≤ n), then the total amount of data to be loaded into the buffer should be sufficient for T + T/g. We can therefore say that we waste T/g of buffer space. If g = 1, then the wastage is the same as for SCAN, and if g = n, the same as for RR¹. Figure 2 illustrates this reasoning.
Figure 2: Maximum time between reads for RR, SCAN, and GSS.
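The formulas of this section are compact enough to collect in one sketch; c, n and g denote the cylinder count, the number of streams, and the number of GSS groups, the buffer factor is the worst-case time between two reads in units of T, and the |s| slack noted in the footnote is ignored.

```python
# Per-seek cylinder distance (upper bounds, equations (2) and (3)) and
# the worst-case time between reads as a multiple of the period T.
# RR is GSS with g = n; SCAN is GSS with g = 1.

def seek_distance(c, n, g):
    group = round(n / g)          # streams per group (last may be smaller)
    return c / (group + 1)

def buffer_factor(g):
    return 1.0 + 1.0 / g          # T + T/g, expressed in units of T

c, n = 2000, 12
for name, g in [("RR", 12), ("SCAN", 1), ("GSS(g=5)", 5)]:
    print(name, round(seek_distance(c, n, g), 1), buffer_factor(g))
```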
4 Proposed Admission Control Algorithm
Most earlier works used the fixed time period and CBR data modeling. Buffer space, disk
bandwidth and the sum of the playback data-rate of all active streams are the three most
important factors in choosing the length of a period. As the playback data-rate of an active
stream does not vary during playback in the CBR scheme, we do not need to change the
length of the period. These approaches are simple but inefficient when used with VBR data.
When servicing VBR data, the optimal length of the period would vary frequently according
to the playback, depending on which streams are being played or which parts are being played.
Since the static period length causes the system to accept fewer streams than is possible, we
propose to use dynamic period length to maximize efficiency.
¹If we look at Figure 2 more closely, then we can find that the maximum time differences between two reads in RR, SCAN and GSS scheduling are T, 2T − |s|, and T + T/g − |s|, respectively, where |s| is the length of a slot time (the sum of a latency slot and a working slot). When g = n, T + T/n does not look the same as T. But since T/n ≈ |s|, T + T/n is almost the same as T.
We should find some schedulable set of periods to accept a new stream. Assuming that
the current time period is T_i, we can get T_min and T_max by the mechanism given in [3]. If T_min ≤ T_max (schedulable), we can take some value between T_min and T_max as the next time period, T_{i+1}, and add this period to the period set ST (ST is first initialized to the empty set). There are many choices in taking T_{i+1}, but we intuitively use (T_min + T_max)/2 as T_{i+1} and start the step to get T_{i+2} with T_{i+1}. If T_min > T_max (unschedulable), the new stream can not be accepted. We can accept the new stream if there is no unschedulable period until the sum of the periods in ST becomes larger than the length of the playback time of the stream. Assuming that we decide to accept a stream after the k-th step, ST will be ST = {T_1, T_2, ..., T_k}.
As stated in [3], the average playback rate of each stream is required to calculate the
T min and T max . Since they used CBR data model, the playback rate is constant all the time.
However, it varies dynamically from period to period. Let Rc i (j) be the average playback
data-rate of stream j at some period T_i and t be the start time of the next period. Then Rc_i(j), the data-rate of stream j at T_i, is the sum of the sizes of all frames belonging to T_i divided by T_i, and the data-rate of the next period is the sum of all frames in [t, t + T_{i+1}] divided by T_{i+1}. Since T_{i+1} is not known for the current period, we can not get an accurate data-rate for T_{i+1}, so we approximate T_{i+1} using T_i. We add up all the frames in [t, t + T_i] and divide that by T_i to get \hat{Rc}_{i+1}(j) (an approximation of Rc_{i+1}(j)). Since we have \hat{Rc}_{i+1}(j) now, we can get T_min and T_max, and finally we can get T_{i+1}. However, the error from approximating T_{i+1} can result in either starvation or overflow.
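A sketch of this rate approximation, with a fixed frame rate fps standing in for the trace's timing (both frames and fps are hypothetical inputs): the sizes of the frames falling into [t, t + T_i] are summed and divided by T_i.

```python
# Approximate the next period's playback rate of one stream: sum the
# sizes of the frames displayed in [t, t + T_i] and divide by T_i.
# Frame k is assumed to be displayed at time k / fps.

def approx_rate(frames, t, T_i, fps=30.0):
    first, last = int(t * fps), int((t + T_i) * fps)
    window = frames[first:last]
    return sum(window) / T_i if window else 0.0

frames = [5000] * 60 + [15000] * 60  # two seconds of frame sizes (bits)
print(approx_rate(frames, t=0.0, T_i=1.0))  # 150000.0
```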
Let us consider the case where T_i is larger than T_{i+1}, and let the difference between T_i and T_{i+1} be ΔT. Assume that the average size of the frames in [t + T_{i+1}, t + T_i] is smaller than the average size of the frames in [t, t + T_{i+1}]; then, as shown in Figure 3(a), the approximated data-rate \hat{Rc}_{i+1}(j) is smaller than the actual data rate Rc_{i+1}(j). This means that the total amount of resources required is underestimated, and more resources may be needed during playback. In the case that T_i is smaller than T_{i+1}, if the average size of the frames in [t, t + T_i] is smaller than the average size of the frames in [t + T_i, t + T_{i+1}], then, as shown in Figure 3(b), the approximated data rate \hat{Rc}_{i+1}(j) is smaller than the actual data rate Rc_{i+1}(j). This means that the total amount of resources required is underestimated, and it might result in the same problems.
To cope with these problems, we reserve a small fraction of resource to overcome starvation
and overflow problems. From our simulation, 1% of disk bandwidth and 5% of buffer space
was sufficient to guarantee that neither starvation nor overflow occurs. Even if we do not
leave any spare resource (100% of resources are allocated), the probability of starvation or
overflow is less than 0.05% as shown in section 5.
Figure 3: The difference between the real playback data rate and the approximated rate.
Our admission control algorithm based on VBR data modeling and dynamic period length
is given below.
boolean AcceptanceTest(New Stream) {
    /* Initialize variables */
    T = Length of Current Period;
    t = Starting Time of Next Period; /* End Time of Current Period */
    ST = {}; AcceptNewStream = TRUE;
    while (Sum of T in ST < Playback Time of New Stream) {
        Add all frame sizes of Active Streams and New Stream in [t, t + T];
        Get each \hat{Rc}(j) by dividing its sum by T;
            /* approximations of Rc(j)s in Active Streams and New Stream */
        Get T_min and T_max using the \hat{Rc}(j)s;
        if (T_min <= T_max) {           /* schedulable */
            T = (T_min + T_max) / 2;
            Add T to ST; t = t + T;
        } else {                        /* unschedulable */
            AcceptNewStream = FALSE;
            break;
        }
    }
    return AcceptNewStream;
}
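For concreteness, a directly executable version of the test follows. The resource model behind get_bounds() is a stub (a real server would derive T_min and T_max from the available disk bandwidth and buffer space as in [3]), so all names and numbers here are purely illustrative.

```python
# Executable sketch of AcceptanceTest. rate_of(t0, T) returns the total
# approximated rate of all active streams plus the new one over [t0, t0+T].

def get_bounds(total_rate, disk_bw=10e6, buffer_bytes=16e6):
    t_min = 1.0 / max(1e-9, disk_bw / total_rate - 1.0)  # stub rule only
    t_max = buffer_bytes / total_rate
    return t_min, t_max

def acceptance_test(rate_of, playback_time, T, t):
    ST = []
    while sum(ST) < playback_time:
        t_min, t_max = get_bounds(rate_of(t, T))
        if t_min > t_max:            # unschedulable period: reject
            return False, ST
        T = (t_min + t_max) / 2.0    # choose the next period length
        ST.append(T)
        t += T
    return True, ST

ok, ST = acceptance_test(lambda t0, T: 4e6, playback_time=10.0, T=1.0, t=0.0)
print(ok, len(ST))  # True 5
```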
If AcceptanceTest() returns FALSE, the new stream can not be accepted, and we may retry
at either the next time period or a few periods later. If AcceptanceTest() returns TRUE, the
new stream can be serviced from the next period using the new period set obtained during
the test. The time complexity of the algorithm is O(m · n), where m is the number of active
streams and n is the number of frames in the stream being tested.
5 Experimental Evaluation
5.1 Assumptions
Most existing storage-server architectures employ random allocation of blocks on a disk. Since
the latency between the blocks of a media object is unpredictable, this type of organization
is insufficient to meet the real-time constraints of a multimedia application. Contiguous disk
block allocation yields the highest effective disk bandwidth, but has the penalty of costly
reorganization during data insertion and updates. However, since video data objects are
seldom modified, contiguous allocation is not an unreasonable assumption in a VOD system (if contiguous allocation is impossible, then the use of blocks of the maximum possible size is best). Even if we store the data on contiguous blocks, latency is inevitable, because a VOD
server must support several streams concurrently. At least one seek is necessary to switch to
another stream.
In this paper, we use Round Robin, SCAN, and GSS as scheduling schemes. In modeling
the disk characteristics, we use the formulas in section 3, and the parameters are retrieved from a Seagate Wren 8 ST41650N disk. Head switching time (the time to switch to another head), the time to cross tracks, and rotational latency are considered in the simulation. Star-wars MPEG I trace data (frame size data) [23] and Red's Nightmare
MPEG I trace data are used to produce the workload. To generate different video streams,
we choose one type of trace data and generate two random numbers between 0 and 1. The
smaller generated number is used to indicate the starting point of the stream and the larger is
used to indicate the end point. For example, assume that the Star-wars data is selected and
the random numbers 0.2 and 0.5 are generated. If Star-wars consists of 1000 frames, then a
new video stream is constructed from the data between the 201st and 500th frames. These
video streams need 1.5 Mbps display data-rate (peak-rate), but actually the mean data-rate
of Star-wars is 670 Kbps and that of Red's Nightmare is 600 Kbps. More than 50% (in the case of Red's Nightmare, even more than 60%) of resources are wasted if we use CBR data modeling
(worst-case assumption). We have conducted the simulations with the traces from [22] but
the results show little differences. We used an Ultra Sparc 1 workstation running Solaris 2.5
for the simulation.
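A sketch of the workload generation described above; toy_traces stands in for the Star-wars and Red's Nightmare frame-size arrays, and all names are illustrative.

```python
import random

# Build a synthetic VBR stream by cutting a random contiguous piece out
# of one of the real frame-size traces, using two uniform random numbers
# as the (sorted) start and end fractions.

def make_stream(traces, rng=random):
    trace = rng.choice(traces)
    a, b = sorted(rng.random() for _ in range(2))
    start = int(a * len(trace))
    end = max(start + 1, int(b * len(trace)))
    return trace[start:end]

toy_traces = [list(range(1000)), list(range(2000))]
random.seed(1)
print(len(make_stream(toy_traces)))
```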
5.2 Experiment Method
In our experiment, we have compared the performance between the VBR scheme and the
CBR scheme. The VBR scheme employs the dynamic length of period and the CBR scheme
employs the static length of period. We used RR (Round Robin), SCAN, and GSS (Grouped
Sweeping Scheme) as the scheduling methods for the VBR scheme. RR and SCAN methods
were used for the CBR scheme. Since the difference between them was negligible, only the
results of the SCAN-CBR scheme are presented for simplicity. For each scheme, the average
number of streams accepted was used as the performance metric. The following abstract
mechanism was used for 1 million seconds to get the average number of accepted streams
for each scheme. Assume that there is at least one stream waiting for service in the FIFO
(first in first out) queue and that the system accepts as many streams as possible and services
them. If no more streams can be accepted, time advances to the next period until a new stream is accepted. In some periods, certain streams may end, and some streams may be
accepted. The mean number of accepted streams in a period is computed through executing
each scheme for a million seconds.
The admission control algorithm in section 4 is changed to evaluate the effect of static
time period on the performance. For a given time period T s (unchanged during a run), we
can get T_min and T_max. If any one of the following 3 conditions holds, the stream can not be accepted:
1. T_min > T_max;
2. T_s < T_min;
3. T_s > T_max.
Since a stream can be accepted in the dynamic period length scheme unless it meets condition 1 above, the dynamic scheme shows better performance.
5.3 Results
To evaluate the performance of the different admission control schemes, we have conducted
simulation with varying buffer size. Figure 4 shows the mean number of streams accepted. In
this figure, S-CBR means the CBR scheme with static period length, DV-RR means dynamic
period length VBR scheme with RR disk scheduling, and DV-GSS means dynamic period
length VBR scheme with GSS disk scheduling. The main conclusion that may be drawn
from the figure is that dynamic period length VBR schemes accept more than twice as many
streams as the CBR scheme does. While the playback data-rate (R c ) of a stream is given at
admission control time in the CBR scheme(by worst case assumption), it varies from period
to period in the VBR schemes. Since the SCAN scheme gains some disk bandwidth but its buffer utilization is poor, while the RR scheme uses the buffer more efficiently but wastes much more disk bandwidth, there are differences among the dynamic period length VBR schemes. DV-GSS, with 5 groups, shows the best throughput among the dynamic VBR
schemes. However, the improvement is very small (at best, less than 3%).

Figure 4: Average Number of Accepted Streams for Various Strategies.
The major factor that affects the lower limit of period length (T min ) is the effective disk
bandwidth (as disk bandwidth increases, the lower limit decreases), and the major factor
which affects the upper limit of period length (T max ) is the effective buffer space (as buffer
space increases, the upper limit increases). Period length is also influenced strongly by the
sum of the data-rates of all active streams. Table 1 shows the average time periods for
various schemes. Since consumption rate does not vary with time in the S-CBR scheme, we
need only one calculation to determine the time period, and we can use the same period,
if no new streams are accepted. In the new dynamic VBR schemes, time periods may not
be constant, since we use dynamic period length. The DV-RR scheme tends to have longer
periods than the DV-SCAN scheme.
Even when we use the VBR characteristics for resource reservation, we cannot achieve
good performance if we also use a static time period (no changes in the length of period). If
we use a static time period, then the throughput varies with the length of the period (worse than the dynamic schemes in all cases).

Table 1: Average Period Length for Various Schemes, using the Star-wars trace. Columns: Buffer Size, S-CBR, DV-RR, DV-SCAN, DV-GSS.

For example, with a fixed period of 2000 ms, SV-RR
(static period VBR scheme with RR scheduling) accepts streams, and SV-SCAN (static
period VBR scheme with SCAN scheduling) accepts 23 streams (using 16M buffer space),
while with a fixed period of 5000 ms, SV-RR accepts 26 streams, and SV-SCAN accepts 19
streams. It is obvious that if we use some value close to the average time period (as indicated in Table 1) for the fixed time period, then the throughput is reasonable, but otherwise it is
very poor.
In our algorithm, a new stream is rejected if at least one non-schedulable period exists (T_min > T_max). Even if we decide on a static time period that is close to the optimal length, the possibility of rejection is higher than with dynamic period schemes. Figure 5 shows the mean
number of maximum accepted streams using a static time period with a VBR scheme.
Since we use approximated values to obtain the length of the next period (T i+1 ), starvation
or overflow can occur. In our simulation, the starvation probability is less than 0.5% (one
starvation in 200 periods) and the overflow probability is less than 5%. When we reserve 5%
of buffer space, the probability of overflow becomes so minor(we observed no overflow in our
simulation). While 0.5% of starvation is acceptable, if we reserve 1% of disk bandwidth, then
the probability of starvation decreases almost to 0. Figure 6 shows the starvation probability
with varying spare resource (buffer) reservation. Even if we reserve 5% of the resources, there is
only a 3% degradation in throughput. This means that dynamic VBR schemes still accept
twice the number of streams accepted by static CBR schemes.
Figure 5: Mean of Maximum Accepted Streams in Case of Static Period Length.
Figure 6: Starvation Probability with Varying Spare Resource Reservation.
6 Conclusions
In this paper, we have presented an effective admission control scheme for VBR data, which
considers both buffer space and disk bandwidth. To compare the trade-offs among several
scheduling algorithms, we have derived the equations regarding the distance of head movement
in one seek for each scheduling algorithm and we have analyzed the buffer requirements for
those algorithms. Since most existing control mechanisms use a static period scheme, the
resources of the server tend to be wasted. If a static period length is far from optimal,
then the throughput decreases significantly. In a dynamic time-period scheme, the period is
changed dynamically, based upon the available disk bandwidth and the buffer space, so as to
maximize the throughput. Our experiments show that there is little difference in performance
among VBR schemes if we use dynamic periods. Although some spare resources are reserved
(5% of buffer and disk bandwidth) for starvation or overflow, VBR schemes can still support
twice as many streams as a CBR scheme and experience neither starvation nor overflow.
Our VBR-based scheme needs more calculation overhead than a CBR scheme, but if the frame-size data is merged by some unit (say, the summation of every 30 frames), then the overhead will be significantly reduced (to almost 1/30). The largest part of the calculation is the summation
of frame sizes. Since CPU performance is increasing faster than that of memory or disks,
optimizing memory utilization and disk bandwidth is becoming increasingly important.
We use a single disk model but disk arrays are currently used in many systems. If we
use a simple disk array configuration, in which data are perfectly striped on all the disks, the
entire collection of disks can be treated, logically, as a single disk unit, and we can replace the disk bandwidth R with the sum Σ_i R_i over the disks in the array, as in [10, 12]. We will extend our scheme so that more complicated disk arrays can be used.
References
A file system for continuous media.
Effective memory use in a media server.
Storage allocation policies for time-dependent multimedia data
Principles of delay-sensitive multimedia data storage and retrieval
Multimedia storage servers: A tutorial and survey.
On configuring a single disk continuous media server.
Access path selection in databases with intelligent disc subsystems.
Stochastic service guarantees for continuous data on multi-zone disks
Maximizing buffer and disk utilization for news-on-demand
An analysis of buffer sharing and prefetching techniques for multimedia systems.
A tight upper bound of the lumped disk seek time for the SCAN disk scheduling policy.
Disk striping in video server environments.
Dynamic task scheduling in distributed real-time systems
Efficient storage techniques for digital continuous multimedia
I/O issues in a multimedia system.
An introduction to disk drive modeling.
A dynamic buffer management technique for minimizing the necessary buffer space in a continuous media server.
A statistical admission control algorithm for multimedia servers.
Scheduling algorithms for modern disk drives.
Design and analysis of a grouped sweeping scheme for multimedia data.
ftp://ftp.
A Dynamic Buffer Management Technique for Minimizing the Necessary Buffer Space in a Continuous Media Server
--CTR
Roger Zimmermann , Kun Fu, Comprehensive statistical admission control for streaming media servers, Proceedings of the eleventh ACM international conference on Multimedia, November 02-08, 2003, Berkeley, CA, USA
Jin B. Kwon , Heon Y. Yeom, Generalized data retrieval for pyramid-based periodic broadcasting of videos, Future Generation Computer Systems, v.20 n.1, p.157-170, January 2004 | video-on-demand systems;disk scheduling;buffer management;admission control;variable bit rate |
318117 | Using Traffic Regulation to Meet End-to-End Deadlines in ATM Networks. | AbstractThis paper considers the support of hard real-time connections in ATM networks. In an ATM network, a set of hard real-time connections can be admitted only if the worst case end-to-end delays of cells belonging to individual connections are less than their deadlines. There are several approaches to managing the network resources in order to meet the delay requirements of connections. This paper focuses on the use of traffic regulation to achieve this objective. Leaky buckets provide simple and user-programmable means of traffic regulation. An efficient optimal algorithm for selecting the burst parameters of leaky buckets to meet connections' deadlines is designed and analyzed. The algorithm is optimal in the sense that it always selects a set of burst parameters whose mean value is minimal and by which the delay requirements of hard real-time connections can be met. The exponential size of the search space makes this problem a challenging one. The algorithm is efficient through systematically pruning the search space. There is an observed dramatic improvement in the system performance in terms of the connection admission probability when traffic is regulated using this algorithm. | Introduction
There is a growing interest in the application of ATM networks for distributed Hard Real-Time
(HRT) systems. In a distributed HRT system, tasks are executed at different nodes and communicate
amongst themselves by exchanging messages. The messages exchanged by time-critical
tasks have to be delivered by certain deadlines for successful operation of the system. Examples of
such systems include supervisory command and control systems used in manufacturing, chemical
processing, nuclear plants, tele-medicine, warships, etc. This paper addresses the issue of guaranteeing
end-to-end deadlines of time-critical messages in ATM networks that support distributed
HRT systems.
Since ATM is a connection-oriented technology in which messages are packetized into fixed-size
cells, guaranteeing message deadlines is tantamount to ensuring that the worst case end-to-
end delay of a cell does not exceed its deadline. To provide such guarantees, three orthogonal
approaches can be taken:
1. Route selection for connections;
2. Output link scheduling at ATM switches;
3. Traffic regulation at the User Network Interface (UNI).
The first approach is to select appropriate routes for connections such that the delays are
within bounds. Since typical ATM networks for HRT applications are LANs, the scope of this
approach is limited. The second approach focuses on scheduling at the ATM switches' output
links where traffic from different connections is multiplexed. Drawing on the similarities with
CPU scheduling, classical real-time scheduling policies such as the First Come First Serve (FCFS),
Earliest Deadline First (EDF), Generalized Processor Sharing (GPS), Fair Queuing (FQ), etc., are
employed [2, 12, 13, 16, 23, 24, 25]. However, most commercially available switches use a high
priority queue for HRT connections, and this queue is served in an FCFS manner.
In this paper, we focus on the third approach. That is to control the network delays by regulating
the input traffic of each connection. Regulating the input traffic can smooth the burstiness
of the traffic, which tends to reduce the adverse impact of burstiness on the end-to-end delays of
other connections. Most of the existing ATM networks provide for traffic regulation at the UNI.
It is relatively easy to tune the regulation parameters as desired. This justifies our focus on using
the traffic regulator as an access control mechanism for HRT ATM networks. It must be noted
that all the three approaches are important and they complement each other. Our results of using
traffic regulation complement the previous work on route selection and output link scheduling.
The idea behind traffic regulation is to regulate a connection's traffic so that it has a lower
impact on the delays of cells of other connections. When two or more connections are multiplexed
on a single link, an increase in burstiness of one connection's traffic adversely impacts the delays of
the cells of others. Regulating a connection's traffic makes the traffic less bursty, thereby reducing
the delays of other connections' cells. However, regulating a connection's traffic may delay some
of its own cells. Therefore, it is important to choose an appropriate degree of traffic regulation so
that all connections can meet their end-to-end deadlines.
In this paper, we study the impact of input traffic regulation on the worst case end-to-end
delays experienced by cells of a set of hard real-time connections. In particular, we consider the
leaky bucket regulator. The degree of regulation of a connection depends on the leaky bucket
parameters assigned to it. The degree of traffic regulation chosen for a connection affects not only
the delays of that connection, but also those of others sharing resources with that connection.
Thus, the leaky bucket parameters for the entire set of connections must be carefully assigned to
ensure that every connection meets its end-to-end deadline. Berger proposed an analysis model of
leaky bucket regulators in [1]. In his work, Berger examined a single leaky bucket, analyzing the job blocking probabilities (or system throughput) versus parameters such as job arrival patterns,
capacity of token bank and size of job buffer. Different from Berger's work in [1], we consider the
interactions of a set of leaky bucket regulators, searching for a vector of burst parameters of the
bucket regulators to meet the end-to-end deadlines of all HRT connections.
Our algorithm of searching burst parameters of leaky bucket is optimal in terms that it can
find the vector of burst parameters whose mean value is minimal and by which the delay requirements
of all HRT connections can be met, whenever such an assignment exists. Our algorithm is
computationally efficient and can be utilized during connection setup. The results presented in
this paper are directly applicable to currently available ATM networks without making any modifications
to the hardware and are compatible with the proposed ATM standards. We evaluate the
system's capability to support hard real-time connections in terms of a metric called admission
probability [18]. Admission probability is the probability of meeting the end-to-end deadlines of a
set of randomly chosen connections. We have observed that the admission probability increases
with a proper choice of leaky bucket parameters at the UNI.
While we focus on traffic regulation for meeting end-to-end deadlines, our work also complements
much of the previous studies which essentially concentrate on designing and analyzing
scheduling policies for ATM switches [2, 5, 6, 8, 10, 11, 12, 13, 16, 20, 21, 23, 24, 25]. A modified
FCFS scheduling scheme was proposed and studied in [2]. The switch scheduling
policy called "Stop and Go" is presented in [8]. A virtual clock scheduling scheme in which cells
are prioritized by a virtual time stamp assigned to them, is discussed in [25]. The use of the Earliest
Deadline First scheduling in wide area networks has also been investigated [6]. [23] discusses
scheduling at the output link by introducing a regulator at each link of an ATM switch. [12]
uses the rate-monotonic scheduling policy in which the input to the scheduler is constrained by
regulating the traffic of each connection traversing the scheduler. Scheduling policies based on fair
queueing and its derivations are discussed in [5, 16]. In our analysis of network delays, we assume
the output link scheduling policy used is FCFS. However, our analysis and methodology can be
easily applied to systems using other scheduling policies.
The outline of the rest of the paper is as follows. Section 2 defines the system model. Section
3 develops a formal definition of the traffic regulation problem for HRT ATM networks. Section
4 presents our algorithm to select the leaky bucket parameter values. Performance results are
presented in Section 5. We conclude in Section 6 with a summary of results.
Figure 1: ATM network architecture (four hosts and four switches, A through D, connected through network interface cards)
2 System model
In this section, we present the network model, the connection model and the traffic descriptors
used to specify the worst case traffic pattern of HRT connections.
2.1 Network model
Figure
1 shows a typical ATM LAN. In ATM networks [4, 9, 22], messages are packetized into
fixed-size cells. The time to transmit a single cell is a constant denoted by CT . We assume that
time is normalized in terms of CT . That is, in this paper time is considered a discrete quantity
with the cell transmission time (CT ) being taken as one time unit.
ATM is a connection-oriented transport technology. Before two hosts begin communication, a
connection has to be set up between them. Figure 2(a) shows a sequence of network components
that constitute a typical connection (illustrated by a connection from Host 1 to Host 2 in Figure 1).
Cells of a connection pass through a traffic regulator at the entrance to the network (the User
Network Interface or UNI) and then traverse one or more ATM switches interconnected by physical
links before reaching their destination host.
Figure 2: Connection decomposition into servers. (a) The devices and links traversed by the connection; (b) the sequence of servers (constant delay servers and variable delay servers) traversed by the connection.
In most ATM networks, the traffic is regulated at the source using leaky buckets. A leaky bucket
regulator consists of a token bucket and an input buffer. The cells from the source associated
with the leaky bucket are buffered at the leaky bucket. A pending cell from the input buffer is
transmitted if at least one token is available in the token bucket. Associated with each leaky
bucket regulator are two parameters: the burst parameter and the rate parameter. The burst
parameter, denoted by fi, is the size of the token bucket, i.e., the maximum number of tokens that
can be stored in the bucket. The rate parameter, denoted by ae, is the token generation rate in the
bucket. The number of cells that may be transmitted by a leaky bucket regulator in any interval
of length I is bounded by fi + ae · I.
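The following is a minimal slotted-time sketch of such a leaky bucket regulator in Python; it assumes one token admits one cell and that the bucket starts full, and all names are illustrative.

from collections import deque

def leaky_bucket(arrivals, beta, rho):
    # arrivals[t] = number of cells arriving in slot t;
    # beta = burst parameter (bucket size), rho = token rate per slot.
    tokens = float(beta)       # token bucket starts full
    queue = deque()
    output = []                # cells transmitted per slot
    for t, a in enumerate(arrivals):
        queue.extend([t] * a)  # buffer arriving cells
        sent = 0
        while queue and tokens >= 1.0:
            queue.popleft()    # one token per transmitted cell
            tokens -= 1.0
            sent += 1
        output.append(sent)
        tokens = min(float(beta), tokens + rho)  # replenish, capped at beta
    return output

Over any window of I slots, the total output of this sketch never exceeds beta + rho * I, matching the bound above.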
An ATM switch multiplexes a number of connections onto a physical link. The cells of connections
being multiplexed are buffered at the controller of the output link. In most commercially
available switches, cells of connections with stringent delay requirements (i.e., Class A traffic) are
buffered in a high-priority queue and served in an FCFS order. Hence, in this paper, we consider
FCFS scheduling policy for HRT connections.
2.2 Connection model
To support HRT applications, the network must guarantee that all cells from a given set of
connections are transmitted before their deadlines. We will use the following notations concerning
a set of HRT connections. Hereafter, we will omit the qualifier "HRT" for connections since we
only deal with HRT connections.
ffl N denotes the total number of connections in the system.
ffl M is the set of N connections competing for resources within the ATM network: M = {M_1, M_2, ..., M_N}. Each connection M_i is specified by the following information:
- Source address,
- Destination address,
- Connection route, 1
- An upper bound on the average cell arrival rate,
- Worst case input traffic characteristics.
ffl ~D is a vector that specifies the end-to-end deadlines of connections in M: ~D = <D_1, D_2, ..., D_N>, where D_i is the end-to-end deadline of a cell of connection M_i. That is, if a cell arrives at the source at time t, then it should reach the destination by t + D_i.
In a connection (see Figure 2(a)), each of the network components traversed by a connection's
cells can be modeled as a server. Thus, a connection can be considered to be a stream of cells being
served by a sequence of servers [3, 18]. Servers can be classified as constant delay servers and
variable delay servers. A constant delay server is one that offers a fixed delay to every arriving
cell. For example, physical links are considered as constant delay servers. On the other hand, cells
may be buffered in a variable delay server and hence suffer queueing delays. The leaky bucket
traffic regulator and the FCFS output link schedulers at ATM switches are examples of variable
delay servers. Figure 2(b) shows the logical representation of the connection in Figure 2(a).
The traffic pattern of a connection at a point in the network is characterized by a traffic
descriptor. The traffic at the source of a connection is the raw traffic (unregulated) generated by applications. It is described by the periodic descriptor (C, P), which means that a maximum of C cells may arrive at the connection in any interval of length P. The periodic descriptor is general enough to describe real-time traffic at the application level. The classical periodic or synchronous
1 To ensure system stability, we assume that connection routes do not form loops [3, 16].
traffic (i.e., C contiguous cells arriving at the beginning of every period of length P ) is a special
case of this kind of traffic. Most hard real-time traffic (at source) is assumed to be synchronous
[14, 15], and hence is adequately specified by this traffic descriptor.
The raw traffic of a connection is regulated by the leaky bucket regulator before it gets into the network. After being regulated by a leaky bucket with parameters (fi, ae), the traffic pattern is such that at most fi + ae · I cells arrive in any interval of length I. After the regulation, the traffic traverses the ATM switches. The traffic pattern becomes more and more irregular as cells are multiplexed and buffered at the switches. For the description of more general traffic patterns, we use the rate function descriptor [17, 18], \Gamma(I), to describe the traffic after leaky bucket regulation. \Gamma(I) specifies the maximum arrival rate of cells in any interval of length I. That is, a maximum of I · \Gamma(I) cells of the connection may arrive in any interval of length I. \Gamma(I) is general enough to describe any traffic pattern. For example, the traffic pattern described by (fi, ae) can be expressed by
the rate function in (61) in Appendix A.1.
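Under the leaky-bucket bound just stated, one admissible form of such a rate function is sketched below in Python; the exact rate function (61) is defined in the paper's appendix and may differ in detail.

def gamma(I, beta, rho):
    # Maximum average arrival rate over any interval of length I,
    # assuming at most beta + rho*I cells per interval and a link
    # capacity of one cell per slot.
    if I <= 0:
        raise ValueError("interval length must be positive")
    return min(1.0, (beta + rho * I) / I)

def max_cells(I, beta, rho):
    return I * gamma(I, beta, rho)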
In the rest of the paper, the following notations are used to describe traffic at different network points:
ffl (C^{in_server}_{i,lb}, P^{in_server}_{i,lb}) denotes the input traffic to the leaky bucket of connection M_i.
ffl \Gamma^{in_server}_{i,j}(I) and \Gamma^{out_server}_{i,j}(I) denote the input and output traffic of connection M_i at FCFS server j.
2.3 Delay computations
The set of connections M is admissible in an ATM network if and only if the worst case delays
of cells do not exceed their deadlines. Let ~ d be a vector whose components are the worst case
end-to-end delays for connections in M; that is, ~d = <d_1, d_2, ..., d_N>, where d_i is the worst case end-to-end delay experienced by a cell of connection M_i.
Define the relation "<=" on vectors as follows. Let ~x = <x_1, ..., x_N> and ~y = <y_1, ..., y_N>; then ~x <= ~y if and only if x_i <= y_i for all 1 <= i <= N. With this relation, M, the set of HRT connections, is admissible if and only if
~d <= ~D. (5)
Hence, to check whether M is admissible, we need a systematic method of computing the
worst case end-to-end delay experienced by a cell of each connection.
Consider connection M i , M i 2 M. To compute d i , we need to investigate the delays in every
network component traversed by a cell of M i . Now, we can decompose d i into three parts as
follows:
Figure 3: Experimental setup (two switches, A and B, with leaky buckets at the network entrances)
1. d const
i denotes the summation of the delays a cell suffers at all the constant delay servers in
its connection route.
2. d lb
i denotes the worst case queueing delay experienced by a cell at the leaky bucket which
regulates M i 's traffic. When M i 's traffic is not regulated, d lb
i is 0.
3. d net
i denotes the summation of the worst case delays a cell suffers at all the variable delay
servers after the leaky bucket. d net
i can be obtained by decomposing it further as follows.
Suppose the route of connection M_i traverses switches (FCFS servers) 1, 2, ..., n_i, and let d^{fcfs}_{i,j} be the cell delay of connection M_i at FCFS server j. d^{net}_i can thus be expressed as:
d^{net}_i = \sum_{j=1}^{n_i} d^{fcfs}_{i,j}.
d_i, the worst case end-to-end delay for M_i, can now be obtained as
d_i = d^{const}_i + d^{lb}_i + d^{net}_i. (7)
Since d const
i is a constant, we focus on obtaining upper bounds on d lb
i and d fcfs
i;j .
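A small Python sketch of the decomposition (7) and of the admissibility test ~d <= ~D; the per-component delay bounds are placeholders for the bounds derived in Appendix A.

def end_to_end_delay(d_const, d_lb, d_fcfs_per_server):
    # d_i = d_const_i + d_lb_i + sum_j d_fcfs_{i,j}   -- equation (7)
    return d_const + d_lb + sum(d_fcfs_per_server)

def admissible(delays, deadlines):
    # M is admissible iff d_i <= D_i for every connection i
    return all(d <= D for d, D in zip(delays, deadlines))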
3 Problem definition
In this section, we formally define the problem of leaky bucket parameter selection for HRT ATM
networks. To motivate the discussion, we first examine some experimental data. We consider a
simple network consisting of two ATM switches (see Figure 3). Each ATM switch has 2 input
lines and 2 output lines. There are 3 connections in the system, M_1, M_2, and M_3. All three connections share the same output link at the second switch (Switch B). Connections M 1 and M 2
enter the network at switch A and traverse through switches A and B, while connection M 3 enters
the system at switch B and traverses switch B only. The three connections carry identical streams
of video data. The video source under study is a two hour encoding of the movie "Starwars" [7].
The video source provides a total of 171,000 frames, at the rate of 24 frames per second.
Figure 4: Experimental results (worst case end-to-end delay, in cell units, versus the burst parameter fi_1)
In this experiment we vary fi_1 (the leaky bucket burst parameter of M_1) while keeping fi_2 and fi_3 (the burst parameters of M_2 and M_3, respectively) constant. Figure 4 shows that as fi_1 increases, the delay (measured in CT units) experienced by M_1 tends to decrease. However, when fi_1 is increased, the delay experienced by M_3 tends to increase. Further, we observe that for large fi_1 the delays experienced by M_1 and M_3 reach constant values. An intuitive explanation of these results is provided below.
An increase in fi 1
tends to increase the burst size of M 1 's traffic into the network. When a
larger burst size is allowed at the output of M i 's leaky bucket, the need to buffer M i 's cells within
the leaky bucket decreases. This tends to lower connection M 1 's delay as fi 1
is increased. On the
other hand, the increased burstiness of M 1 as fi 1
is increased adversely impacts connection M 3 's
traffic, increasing M_3's worst case cell delay. However, for large values of fi_1 (fi_1 > 1400), the
delays of either connection are unaffected by any increase in fi 1
. This is because, at such large
values of fi 1 , the burst size allowed for M 1 is so high that no cells of M 1 are queued at the leaky
bucket input buffer, i.e., there is virtually no traffic regulation on M_1's traffic. In this experiment, M_2's traffic was not monitored, because it is expected to exhibit the same trend as M_3's when fi_1 increases.
Clearly, the experimental results shown in Figure 4 indicate that the choice of fi i s for different
connections plays a critical role in the worst case cell delays experienced by all the connections.
Before the formal definition of the problem, we need some notations:
ffl ~ae is the rate vector, i.e., ~ae = <ae_1, ae_2, ..., ae_N>, where ae_i is the rate parameter assigned to the leaky bucket regulating connection M_i at its network interface. We assume that ae_i is assigned a value equal to the long term average cell arrival rate of M_i.
ffl ~fi is the burst vector, i.e., ~fi = <fi_1, fi_2, ..., fi_N>, where fi_i is the burst parameter assigned to the leaky bucket regulating M_i's traffic.
ffl m(~fi) is the norm of the burst vector ~fi, i.e., m(~fi) = \sum_{i=1}^{N} fi_i. Since we analyze a slotted system, fi_i takes positive integer values only. Therefore, the minimal value of m(~fi) is N.
ffl ~d(~fi) is the worst case end-to-end delay vector for a given selection of ~fi, i.e., ~d(~fi) = <d_1(~fi), d_2(~fi), ..., d_N(~fi)>, where d_i(~fi) is the worst case cell delay of connection M_i when ~fi is the chosen burst vector. Using (7), d_i(~fi) can be expressed as
d_i(~fi) = d^{const}_i + d^{lb}_i(~fi) + \sum_{j} d^{fcfs}_{i,j}(~fi), (10)
where d^{lb}_i(~fi) and d^{fcfs}_{i,j}(~fi) are the worst case queueing delays at the leaky bucket and at the j-th FCFS server, respectively, when the burst vector is ~fi. The computations of d^{lb}_i(~fi) and d^{fcfs}_{i,j}(~fi) are given by (60) and (63), respectively, in Appendix A.
ffl A_M is the set of burst vectors for which the connection set M is admissible, i.e., A_M = { ~fi | ~d(~fi) <= ~D }.
Our main goal is to meet (5), i.e., to ensure that the end-to-end deadlines of all the connections
in a given set are met. We have chosen traffic regulation as our means of achieving this objective.
In terms of the above notations, given a set of HRT connections, M, the method must find vector
~ fi that belongs to AM . We are going to design and analyze a ~ fi-selection algorithm for this
purpose. Such an algorithm will take connection set M as its input and return vector ~ fi as its
output.
Clearly, when AM is empty, no assignment of ~ fi can make the connection set admissible. Our
parameter selection algorithm will return an all-zero ~ fi when AM is empty. Furthermore, since the
degree of regulating the traffic stream from M_i is higher for smaller values of fi_i (i.e., the regulated traffic is less bursty and thus has less impact on others), it is desirable to select a vector ~fi having a small norm, m(~fi). Let ~fi* be the minimum in A_M in terms of its norm. That is, ~fi* satisfies:
m(~fi*) = min_{~fi in A_M} m(~fi).
We define a ~fi-selection algorithm to be optimal if it always produces the ~fi* whenever A_M is nonempty.
We now prove that ~fi* is unique if it exists for a set of connections M. The following lemma is introduced to illustrate the relationship between ~fi and d^{net}_i(~fi).
LEMMA 3.1 Consider a connection set M. d^{net}_i(~fi) does not decrease as ~fi increases.
Lemma 3.1 is valid by intuition. As ~fi increases, more cells of those connections whose fi values are increased will get through the leaky buckets and be injected into the network. This will add more cells to the following FCFS servers, and they will compete for network resources with the cells of M_i. Therefore, d^{net}_i(~fi), the total delay of an M_i cell at all the FCFS servers, will increase, or remain the same if the servers still have the capacity to transmit the increased number of cells without buffering them. So d^{net}_i(~fi) never decreases as ~fi increases. The formal proof of Lemma 3.1 is given in [17].
The next theorem proves the uniqueness of ~fi*.
THEOREM 3.1 For a given system, ~fi*, which satisfies
m(~fi*) = min_{~fi in A_M} m(~fi), (15)
is unique if A_M is nonempty.
Proof: We prove the theorem by contradiction. Assume that A_M is nonempty and that there are two distinct vectors ~fi' = <fi'_1, fi'_2, ..., fi'_N> and ~fi'' = <fi''_1, fi''_2, ..., fi''_N> which satisfy (15). Thus,
m(~fi') = m(~fi'') = min_{~fi in A_M} m(~fi). (16)
We can construct a new vector ~fi = <fi_1, fi_2, ..., fi_N> such that, for 1 <= i <= N,
fi_i = min(fi'_i, fi''_i). (17)
Because ~fi' and ~fi'' are distinct, ~fi is strictly smaller than at least one of them in some component, so
m(~fi) < m(~fi') = m(~fi''). (18)
Using (17) in (10), we have
d_i(~fi) = d^{const}_i + d^{lb}_i(~fi) + d^{net}_i(~fi).
Since a leaky bucket regulator is at the entrance of each connection, d^{lb}_i(~fi) is independent of any fi_j where j != i. Thus we have:
d^{lb}_i(~fi) = d^{lb}_i(fi_i).
Then, because of (17), we have, for 1 <= i <= N,
d^{lb}_i(~fi) = d^{lb}_i(~fi') or d^{lb}_i(~fi) = d^{lb}_i(~fi''). (20)
Also, because ~fi <= ~fi' and ~fi <= ~fi'', from Lemma 3.1, we have
d^{net}_i(~fi) <= d^{net}_i(~fi') and d^{net}_i(~fi) <= d^{net}_i(~fi''). (21)
Further, because ~fi' and ~fi'' are distinct, for a given i, 1 <= i <= N, we have two cases:
Case 1: fi_i = fi'_i. Hence, from (20) we get
d^{lb}_i(~fi) = d^{lb}_i(~fi'),
and from (21) we get
d^{net}_i(~fi) <= d^{net}_i(~fi').
Therefore,
d_i(~fi) <= d_i(~fi').
But ~fi' satisfies (15). Therefore,
d_i(~fi) <= d_i(~fi') <= D_i. (25)
Case 2: fi_i = fi''_i. Hence, from (20) we get
d^{lb}_i(~fi) = d^{lb}_i(~fi''),
and from (21) we get
d^{net}_i(~fi) <= d^{net}_i(~fi'').
Therefore,
d_i(~fi) <= d_i(~fi'').
But ~fi'' satisfies (15). Therefore,
d_i(~fi) <= d_i(~fi'') <= D_i. (29)
Because of (25) and (29), ~fi is also a feasible assignment for which the connection set is admissible. However, because of (18) and the definition of ~fi*, ~fi' and ~fi'' cannot both be ~fi*. This is a contradiction. Hence, the theorem holds. □
4 Algorithm development
In this section, we develop an optimal and efficient algorithm. We first formulate the problem as a
search problem and investigate some useful properties of the search space. These properties help
us in the development of the algorithm.
4.1 Parameter selection as a search problem
By definition, an optimal algorithm takes M as its input and returns ~fi* whenever A_M is nonempty, and returns <0, 0, ..., 0> as output when A_M is empty. We can view the problem of selecting ~fi* as a search problem, where the search space consists of all the ~fi vectors.
Let ~A_M be the set of all ~fi vectors. A_M, the set consisting of ~fi vectors with which the deadlines of M are met, is a subset of ~A_M. Let --> be a relation on ~A_M defined as follows. Given ~fi, ~fi' in ~A_M, ~fi --> ~fi' if and only if there exists j, 1 <= j <= N, such that fi'_j = fi_j + 1 and fi'_i = fi_i for all i != j. Note that m(~fi') = m(~fi) + 1 and ~fi' differs from ~fi only in the jth component. Let Δ(~fi, ~fi') denote the index j. For example, if ~fi = <1, 1, 1> and ~fi' = <1, 2, 1>, then ~fi --> ~fi' and Δ(~fi, ~fi') = 2.
The relation --> allows us to define an acyclic directed graph G over ~A_M, with a node set V and an edge set E given by
V = ~A_M and E = {(~fi, ~fi') | ~fi --> ~fi'}.
Thus, G is a graph representation of ~A_M, the search space. Graph G can also be considered as a rooted leveled graph; vector <1, 1, ..., 1> is the root, and level p consists of all ~fi vectors having norm p (p >= N). Figure 5 illustrates such a graph when N = 3.
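A short Python sketch of the relation "-->": the children of a burst vector are obtained by incrementing exactly one component.

def children(beta):
    # Yield every vector reachable from beta by one "-->" step.
    for j in range(len(beta)):
        child = list(beta)
        child[j] += 1
        yield tuple(child)

For example, children((1, 1, 1)) yields (2, 1, 1), (1, 2, 1), and (1, 1, 2), all of norm 4, i.e., the next level of G.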
In graph G, let L p be the set of all ~ fi vectors at level p. Note that a node at level p can have
edges only to nodes at level p + 1. Based on this representation of the search space, a simple
breadth-first search method can be constructed to find ~fi*. Figure 6 shows the pseudo-code for such a method. As shown by the dotted search path in Figure 5, this search method first examines all ~fi vectors in L_p before proceeding to those in L_{p+1}. For each ~fi vector considered, the method uses the procedure 2 Compute_~d(~fi) to evaluate ~d(~fi). The first ~fi encountered in the search path that satisfies the deadline constraint ~d(~fi) <= ~D is clearly the ~fi* vector.
At this point, the reader may raise the following questions about the method shown in Figure 6:
2 For the procedure to compute the worst case end-to-end delay of connections, please refer to Appendix A.3.
Figure 5: Example search space graph.
Select_Burst_Parameters(M)
01. for p := N to infinity do {Main iterative loop}
02.   foreach ~fi in L_p do
03.     ~d := Compute_~d(~fi);
04.     if (~d <= ~D) then
05.       return(~fi);
06.     endif
07.   endforeach
08. endfor {End of main loop}
Figure 6: Pseudocode of the breadth-first search method
1. For a given connection set M, the set A_M may be empty. In such a case, ~fi* is not defined and the algorithm searching for ~fi* will not terminate.
2. Even if AM is nonempty, the exhaustive nature of the breadth-first search results in exponential
time complexity.
In the next subsection, we overcome the first difficulty by bounding the search space. In the
subsequent subsection, we further reduce the search complexity by pruning the search space and
adopting a search method that is more efficient than the breadth-first search.
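For reference, the following is a runnable Python sketch of the basic breadth-first method of Figure 6; compute_delays is a placeholder for the procedure Compute_~d(~fi), and a level cutoff is added only so that the sketch terminates when no admissible vector exists.

def bfs_select(N, deadlines, compute_delays, max_levels=50):
    level = {tuple([1] * N)}               # root <1, ..., 1> has norm N
    for _ in range(max_levels):
        for beta in sorted(level):         # examine all of L_p ...
            d = compute_delays(beta)
            if all(di <= Di for di, Di in zip(d, deadlines)):
                return beta                # first admissible vector found
        nxt = set()                        # ... before moving to L_{p+1}
        for beta in level:
            for j in range(N):
                child = list(beta)
                child[j] += 1
                nxt.add(tuple(child))
        level = nxt
    return None                            # cutoff reached; A_M may be empty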
4.2 Bounding the search space
In the experiment described in Section 3, an interesting observation was that when fi 1
was increased
beyond 1400, there was no change in the worst case end-to-end delays of any of the connections.
The following theorem asserts that such behavior is to be expected in any ATM network and leaky
bucket regulators. Let fi^{max}_i be the minimum value of fi_i for which d^{lb}_i is zero.
THEOREM 4.1 For a connection M_i whose traffic is described by (C^{in_server}_{i,lb}, P^{in_server}_{i,lb}) and regulated by a leaky bucket with parameters (fi_i, ae_i), fi^{max}_i is bounded by C^{in_server}_{i,lb}.
Proof: Let Q_{i,lb} be the maximal queue length at the leaky bucket of M_i. From Lemma A.2 in Appendix A.1, we have
Q_{i,lb} = C^{in_server}_{i,lb} - fi_i. (31)
Since fi^{max}_i is defined to be the smallest integer value for which the maximum waiting time of a cell at the leaky bucket queue becomes zero, i.e., Q_{i,lb} becomes zero, we solve for the value of fi_i which makes (31) zero. We have
fi^{max}_i = C^{in_server}_{i,lb}. □
THEOREM 4.2 For any connection M_i, increasing the value of fi_i beyond fi^{max}_i has no impact on the worst case end-to-end cell delay of M_i or any other connection.
Proof: For a connection M_i, since fi^{max}_i is the minimal value of fi_i for which d^{lb}_i is zero, no cell of M_i is queued at its leaky bucket when fi_i reaches fi^{max}_i. That is, any cell that arrives at the leaky bucket will get through the leaky bucket without being buffered. There is virtually no traffic regulation on M_i's traffic in this case. Therefore, increasing the value of fi_i beyond fi^{max}_i has no impact on the worst case end-to-end cell delay of M_i or any other connection. □
An important consequence of Theorem 4.2 is that ~A_M, the search space of candidate ~fi vectors, can be bounded; we only need to consider ~fi vectors that satisfy
~fi <= ~fi^{max} = <fi^{max}_1, fi^{max}_2, ..., fi^{max}_N>.
Select_Burst_Parameters(M)
01. Compute(~fi^{max});
02. for p := N to m(~fi^{max}) do {Main iterative loop}
03.   foreach ~fi in L_p with ~fi <= ~fi^{max} do
04.     ~d := Compute_~d(~fi);
05.     if (~d <= ~D) then
06.       return(~fi);
07.     endif
08.   endforeach
09. endfor {End of main loop}
10. return(<0, 0, ..., 0>);
Figure 7: Pseudocode of the bounded breadth-first search algorithm
Note that ~fi^{max} can be precomputed for a given set M.
Consider the example in Figure 5. If we assume a particular value of ~fi^{max} and apply Theorem 4.2, we get another graph, shown in Figure 8. The shaded region in Figure 8 is automatically eliminated from consideration.
Using Theorem 4.2, we modify the breadth-first search procedure shown in Figure 6 to take
into account the bounded search space. The resulting pseudo-code is shown in Figure 7. However,
since the size of L p increases exponentially, the complexity of the algorithm is still exponential.
In the next subsection we will prune the search space to design an efficient algorithm.
4.3 Search space pruning
The breadth-first search algorithm defined in Figure 7 has an exponential time complexity. Now
we consider an alternative to the exhaustive breadth-first search method.
As an alternative to the breadth-first search path, we desire a search path that begins at the root node <1, 1, ..., 1> and follows the directed edges in graph G to locate ~fi* (if ~fi* exists). Such a search path would allow us to examine only one vector ~fi at each level.
Consider the example in Figure 9, which depicts a case with N = 3. The shaded region has already been eliminated from consideration based on the value of ~fi^{max}. Assume that ~fi* exists. We would like our search to be guided along one of the direct paths from the root to ~fi* in G.
Figure 8: Example of bounded search space
Figure 9: Example graph of search space after pruning
To guide the search along the direct path from the root to ~fi* in G, we must choose an appropriate child node at each level. For example, in Figure 9, at some nodes either of two children may be chosen as the next candidate node, while at other nodes the search must be guided to one particular child as the next candidate node.
In order to select candidate nodes at each level, we need to know whether a particular node is an ancestor of ~fi* (if ~fi* indeed exists). A node ~fi is said to be an ancestor of node ~fi' (denoted ~fi *--> ~fi') if ~fi' can be reached from ~fi by a directed path in G. For example, in Figure 9, the ancestor nodes of a given vector are exactly those vectors from which it can be reached. To formally define the ancestor relationship, we proceed as follows.
First, each vector in ~A_M is considered an ancestor of itself. Let ~fi^p and ~fi^{p+k} be two vectors in ~A_M such that m(~fi^p) = p and m(~fi^{p+k}) = p + k (k >= 1). Then ~fi^p *--> ~fi^{p+k} if
there exist ~fi^{p+1}, ..., ~fi^{p+k-1} in ~A_M such that ~fi^p --> ~fi^{p+1} --> ... --> ~fi^{p+k-1} --> ~fi^{p+k}.
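Because every edge of G increases exactly one component by one, ~fi is an ancestor of ~fi' exactly when ~fi <= ~fi' componentwise; a one-line Python sketch:

def is_ancestor(beta, beta_prime):
    # Reachability in G coincides with the componentwise order.
    return all(a <= b for a, b in zip(beta, beta_prime))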
The next theorem states an important result that will help us construct a directed path from the root node (i.e., <1, 1, ..., 1>) to ~fi* for any given M. We need some notations first. Let ~s(~fi) = <s_1(~fi), s_2(~fi), ..., s_N(~fi)> be the status vector associated with a node ~fi, where s_i(~fi) = 1 if d_i(~fi) <= D_i and s_i(~fi) = 0 otherwise. Vector ~s(~fi) indicates the status of each stream (i.e., whether the deadlines of individual streams are met or not) when ~fi is selected as the burst parameter vector.
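A small Python sketch of the status vector, assuming s_i = 1 when connection i meets its deadline and 0 otherwise:

def status(delays, deadlines):
    return [1 if d <= D else 0 for d, D in zip(delays, deadlines)]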
The following lemmas are introduced to help the proof of the theorem which defines the direct search of ~fi*.
LEMMA 4.1 Consider a connection set M. d_j(~fi) does not decrease as fi_i increases, for any i where i != j.
Lemma 4.1 is valid. As we have seen before, d_j(~fi) = d^{const}_j + d^{lb}_j(~fi) + d^{net}_j(~fi), where d^{const}_j is a constant and d^{lb}_j(~fi) is independent of fi_i if i != j. Only d^{net}_j(~fi) changes as fi_i increases for any i where i != j. From Lemma 3.1, we have that d^{net}_j(~fi) does not decrease as ~fi increases. Therefore, Lemma 4.1 holds. The formal proof of this lemma can be found in [17].
LEMMA 4.2 Consider a connection set M and assume that ~fi* exists for M. If ~fi is an ancestor of ~fi*, then for any y (1 <= y <= N) which makes s_y(~fi) = 0, fi_y < fi*_y.
Proof: Since ~fi is an ancestor of ~fi*, we have ~fi <= ~fi*. That is,
fi_x <= fi*_x for 1 <= x <= N. (35)
Now we prove: for any y (1 <= y <= N), if s_y(~fi) = 0, then fi_y < fi*_y. Because of (35), we only need to prove fi_y != fi*_y. We prove it by contradiction.
Assume fi_y = fi*_y. Since s_y(~fi) = 0, we have d_y(~fi) > D_y. From Lemma 4.1, we have that d_y(~fi) does not decrease no matter how much we increase the fi_i's for i != y. We now increment the corresponding components of ~fi to make ~fi equal to ~fi*. d_y(~fi*) > D_y still holds. This contradicts the definition of ~fi* and the existence assumption of ~fi*. Therefore fi_y != fi*_y. So fi_y < fi*_y. □
THEOREM 4.3 If ~fi* exists, then the following are true:
1. <1, 1, ..., 1> is an ancestor of ~fi*, and
2. if ~fi is an ancestor of ~fi*, then there exists a ~fi' such that
(a) ~fi --> ~fi',
(b) s_{Δ(~fi, ~fi')}(~fi) = 0, and
(c) either ~fi' is ~fi*, or ~fi' is an ancestor of ~fi*.
Proof: Statement 1 is true since <1, 1, ..., 1> is an ancestor of all ~fi's.
Let ~fi, ~fi != ~fi*, be an ancestor of ~fi*. Therefore, for all x, 1 <= x <= N, we have
fi_x <= fi*_x. (36)
Since ~fi* is unique, there exists y, 1 <= y <= N, such that s_y(~fi) = 0. Then, from Lemma 4.2, we have
fi_y < fi*_y. (37)
Now construct ~fi' such that fi'_y = fi_y + 1 and, for all z, z != y, fi'_z = fi_z. From the definition of "-->", we obviously have ~fi --> ~fi'. Thus (a) holds.
From the definition of "Δ", we have Δ(~fi, ~fi') = y. Also, s_y(~fi) = 0, that is, s_{Δ(~fi, ~fi')}(~fi) = 0. Thus (b) holds.
By (36) and (37), for all x, 1 <= x <= N, we have fi'_x <= fi*_x. Thus ~fi' <= ~fi*. Hence, either ~fi' = ~fi*, or there exists a directed path in G such that ~fi' --> ... --> ~fi*, i.e., ~fi' is an ancestor of ~fi*. Thus (c) holds. □
Now consider the claims made by the above theorem. The first claim in Theorem 4.3 is trivial. It states that if ~fi* exists, then there must be at least one path from the root node <1, 1, ..., 1> to ~fi*. The second claim in Theorem 4.3 implies that if ~fi is an ancestor of ~fi* (i.e., ~fi* exists) and the assignment of ~fi does not make M admissible (i.e., ~d(~fi) > ~D), then a new assignment ~fi' derived from ~fi such that ~fi --> ~fi' and s_{Δ(~fi, ~fi')}(~fi) = 0 is also an ancestor of ~fi*.
The first claim helps us to begin the search from the root node. Once we are at level p examining a node ~fi in L_p, the second claim helps us to choose the child node of ~fi if ~d(~fi) > ~D. The theorem states that we can choose a child node ~fi' of ~fi such that s_{Δ(~fi, ~fi')}(~fi) = 0. The theorem ensures that such a child node must also have a directed path to ~fi* if ~fi* exists. Hence, ~fi* can be found by our search starting from <1, 1, ..., 1>, using the status vector ~s to guide the search along a directed path leading to ~fi*.
4.4 The efficient algorithm and its properties
In this subsection, we first present an efficient and optimal algorithm, and then prove its properties.
Figure 10 shows the pseudo-code of the algorithm. The algorithm is derived from the one in Figure 7 by pruning the search space. The algorithm is an iterative procedure, starting from the root, i.e., <1, 1, ..., 1>. During each iteration the algorithm selects a node from the next level. The node is selected (line 13) with the help of the status vector ~s (computed by function Compute_~s(~d, ~D) in line 8). This iterative process continues until either ~fi* is found or, for some j with s_j(~fi) = 0, fi_j reaches fi^{max}_j (line 10), in which case <0, 0, ..., 0> is returned.
The following two theorems assert the correctness property and the complexity of the algorithm
from
Figure
10.
THEOREM 4.4 For a connection set M, the algorithm in Figure 10 is optimal.
Proof of this theorem follows from Theorem 4.3 and the pseudo-code of the algorithm.
THEOREM 4.5 The time complexity of the algorithm in Figure 10 is O(N · m(~fi^{max})).
Proof: In the algorithm shown in Figure 10, the maximum number of iterations is m(~fi^{max}) - N + 1. During each iteration the algorithm calls three procedures (lines 4, 8, and 9). The worst case
time complexity of the procedure Compute ~ d ( ~ fi) (line 4) is a function of the network size, i.e., the
3 Note that line 9 in the algorithm can be modified to select the connection whose deadline is missed and whose associated value is minimal. This modification improves the average case time complexity of the algorithm without changing the worst case one.
Select_Burst_Parameters(M)
01. Compute(~fi^{max});
02. ~fi := <1, 1, ..., 1>;
03. for p := N to m(~fi^{max}) do {Main iterative loop}
04.   ~d := Compute_~d(~fi);
05.   if (~d <= ~D) then
06.     return(~fi);
07.   else
08.     ~s(~fi) := Compute_~s(~d, ~D);
09.     j := Find_index_j(); {Such that s_j(~fi) = 0}
10.     if (fi_j = fi^{max}_j) then
11.       return(<0, 0, ..., 0>);
12.     else
13.       fi_j := fi_j + 1; {Increasing component j by 1}
          {By Theorem 4.3, the new burst vector is also an ancestor of ~fi*}
14.     endif
15.   endif
16. endfor {End of main loop}
Figure 10: Pseudocode of the efficient algorithm
Figure 11: Example network used in simulation (connections run from source Hosts 00-99 to destination Hosts 100-109 through two stages of switches)
longest path in the network. Hence, for a given network, the time complexity of Compute_~d(~fi) can be bounded by a constant. The procedure Compute_~s(~d, ~D) takes N steps of comparisons; therefore its time complexity is O(N). Finally, the worst case time complexity of the procedure Find_index_j() (line 9) is O(N). Hence, the time complexity of the algorithm of Figure 10 is O(N · m(~fi^{max})). □
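For concreteness, the following is a runnable Python sketch of the guided search of Figure 10; compute_delays stands in for Compute_~d(~fi), beta_max for ~fi^{max}, and all names are illustrative.

def select_burst_parameters(deadlines, beta_max, compute_delays):
    N = len(deadlines)
    beta = [1] * N                          # root node <1, 1, ..., 1>
    for _ in range(sum(beta_max) - N + 1):  # at most m(beta_max) - N + 1 steps
        d = compute_delays(beta)
        status = [di <= Di for di, Di in zip(d, deadlines)]
        if all(status):
            return tuple(beta)              # beta* found
        j = status.index(False)             # a connection missing its deadline
        if beta[j] == beta_max[j]:
            return tuple([0] * N)           # no admissible burst vector exists
        beta[j] += 1                        # step to the child node (Theorem 4.3)
    return tuple([0] * N)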
5 Performance evaluation
In this section, we present performance results to evaluate the impact of leaky bucket regulation
on HRT systems.
We consider the sample network architecture shown in Figure 11. It consists of two stages,
with a total of 11 ATM switches. Each ATM switch has 10 input lines and 10 output lines. The
connections in the network form a symmetric pattern. There are 100 connections in the system and
each connection goes through two switches. The connections are arranged in such a way that 10 connections share one output link at each stage. At the first stage, 10 of the connections are multiplexed in Switch 0 and are transmitted over link 0. At the second stage, 10 of the connections are multiplexed in Switch 10 and are transmitted over a link to Host 100.
We evaluate the performance of the system in terms of the admission probability, AP (U ), which
is defined as the probability that a set of randomly chosen HRT connections can be admitted, given
the traffic load in terms of the average utilization of the links U .
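A Monte Carlo sketch of this metric in Python; generate_set and admit are placeholders for random connection-set generation and for the admission test, respectively.

def admission_probability(U, trials, generate_set, admit):
    # Fraction of randomly chosen connection sets, at average link
    # utilization U, that can be admitted.
    return sum(1 for _ in range(trials) if admit(generate_set(U))) / trials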
To obtain the performance data, we developed a program to simulate the above ATM network
and the connections. The program is written in the C programming language and runs in a
Sun/Solaris environment. In each run of the program 200 connection sets are randomly generated.
For each connection, the total number of cells per period is chosen from a geometric distribution
with mean 10. The worst case cell arrival rate (C, P ) of the connections sharing a particular link
at the first stage are chosen as random variables uniformly distributed between 0 and U subject to
their summation being U , the average utilization of the link. Similar results have been obtained
with different settings of parameters.
For each connection set generated, the following systems are simulated:
ffl System A. In this system connection traffic is unregulated, i.e., the burst vector selected for the connection set is ~fi^{max}.
ffl System B. In this system constant burst vectors are used for all the connections. In particular, System B1 sets the burst vector to be <3, 3, ..., 3>; System B2 sets the burst vector to be a larger constant vector.
ffl System C. In this system the burst vector produced by our optimal algorithm is used.
Figures 13 and 14 show the performance figures corresponding to the cases where D_i is set to be 2P_i and 1.5P_i, respectively. It is common for deadlines to be associated with periods [14, 15] in HRT systems.
From these figures, we can make the following observations:
System C, where our optimal algorithm is used to set the burst vectors, performs the best.
The performance gain is particularly significant when the link utilization becomes high, in
comparison with systems A, B1, and B2. For example, in Figure 13, at high link utilization the admission probability is close to 1 for System C, but 0 for systems A and B2. This justifies our earlier claim that
the burst vector must be properly set in order to achieve the best system performance with
HRT applications.
ffl In general, the admission probability is sensitive to the average link utilization. As the utilization increases, the admission probability decreases. This is expected because the higher the network utilization, the more difficult it is for the system to admit a set of connections. We also find that, as the utilization increases, the admission probability decreases much faster in systems A, B1, and B2 than in System C. This suggests that in situations where the link utilization is high, the proper selection of the burst vector becomes more important to the system performance.
ffl Comparing the performance of systems A, B1, B2 and System C, we can clearly see the
difference between no-regulation, regulation with a large ~ fi, regulation with a small ~ fi and
regulation with an optimal ~ fi. System A, which does not have any traffic regulation, performs
the worst among all the systems. System B1 with a small ~ fi performs better than system
B2, which has a larger ~fi. System C, with the optimal ~fi, performs the best. This is further evidence for the correctness of our approach: choosing the minimal ~fi by which the end-to-end deadlines can still be met. The simulation results further strengthen the need for a good ~fi-selection algorithm.
ffl The performance of all systems is very stable with respect to changes in D_i. The curves in Figure 13 all have shapes similar to those in Figure 14. This demonstrates that the simulation results are stable and have not been distorted by dynamic factors such as system loading and other traffic in the system.
6 Conclusions
In this paper we addressed the issue of guaranteeing end-to-end deadlines of HRT connections in
an ATM network. Much of the previous work has concentrated on scheduling policies used in ATM
switches. Our approach to this problem is to regulate the input traffic at the network interface. In
particular, we consider leaky bucket traffic regulators. This study is the first one that uses traffic
regulation (in particular with leaky buckets) as a method of guaranteeing the end-to-end deadlines
of HRT connections. Traditionally, a leaky bucket has been used as a policing mechanism when
the source traffic at the input of the network does not conform to its negotiated characteristics.
We have designed and analyzed an efficient and optimal algorithm for selecting the burst
parameters of leaky buckets in order to meet connections' deadlines. Our algorithm is optimal
in the sense that if there exists an assignment of burst parameters for which the deadlines of a
set of HRT connections can be met, then the algorithm will always find such an assignment. Our
algorithm is also efficient. We simulated and compared the performance of ATM networks with
different regulation policies. We observed that there is a dramatic improvement in the system
performance when the burst parameters were selected by our algorithm.
Our solution for guaranteeing end-to-end deadlines in HRT ATM networks is effective and
generic. It is independent of the switch architecture and the scheduling policy used at the ATM
switches. It can be used for admission control in any ATM network that uses leaky bucket traffic
regulators.
--R
Performance analysis of a rate-control throttle where tokens and jobs queue
Supporting real-time applications in an integrated services packet network: Architecture and mechanism
A calculus for network delay.
Asynchronous Transfer Mode: Solution for Broadband ISDN.
Analysis and simulation of a fair queueing algorithm.
A scheme for real-time channel establishment in wide-area networks
Contributions toward real-time services on packet switched networks
A framing strategy for congestion management.
Rate controlled servers for very high-speed networks
Scheduling real-time traffic in ATM networks
Scheduling algorithms for multiprogramming in a hard-real-time environment
Hard real-time communication in multiple-access networks
A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks.
Real Time Communication in ATM Networks.
Guaranteeing end-to-end deadlines in ATM networks
Admission control for hard real-time connections in ATM LAN's. In Proceedings of the IEEE Infocom'96
Responsive aperiodic services in high-speed networks
Congestion control for multimedia services.
ATM concepts
Comparison of rate-based service disciplines
Virtual clock: A new traffic control algorithm for packet switching networks.
--TR
--CTR
Khalil Shihab, Modeling and performance evaluation of ATM switches, Proceedings of the 5th WSEAS International Conference on Applied Informatics and Communications, p.377-383, September 15-17, 2005, Malta
L. Lo Bello , A. Gangemi, A slot swapping protocol for time-critical internetworking, Journal of Systems Architecture: the EUROMICRO Journal, v.51 n.9, p.526-541, September 2005 | network delay analysis;ATM network;hard real-time communication;traffic regulation |
318943 | Efficient points-to analysis for whole-program analysis. | To function on programs written in languages such as C that make extensive use of pointers, automated software engineering tools require safe alias information. Existing alias-analysis techniques that are sufficiently efficient for analysis on large software systems may provide alias information that is too imprecise for tools that use it: the imprecision of the alias information may (1) reduce the precision of the information provided by the tools and (2) increase the cost of the tools. This paper presents a flow-insensitive, context-sensitive points-to analysis algorithm that computes alias information that is almost as precise as that computed by Andersen's algorithm (the most precise flow- and context-insensitive algorithm) and almost as efficient as Steensgaard's algorithm (the most efficient flow- and context-insensitive algorithm). Our empirical studies show that our algorithm scales to large programs better than Andersen's algorithm and show that flow-insensitive alias analysis algorithms, such as our algorithm and Andersen's algorithm, can compute alias information that is close in precision to that computed by the more expensive flow- and context-sensitive alias analysis algorithms. | Introduction
Many automated tools have been proposed for use in software engineering. To
function on programs written in languages such as C that make extensive use of
pointers, these tools require alias information that determines the sets of memory
locations accessed by dereferences of pointer variables. Atkinson and Griswold
discuss issues that must be considered when integrating alias information
into whole-program analysis tools. They argue that, to effectively apply the
tools to large programs, the alias-analysis algorithms must be fast. Thus, they
propose an approach that uses Steensgaard's algorithm [16], a flow- and context-insensitive alias-analysis algorithm 1 that runs in near-linear time, to provide alias information for such tools.
1 A flow-sensitive algorithm considers the order of statements in a program; a flow-insensitive algorithm does not. A context-sensitive algorithm considers the legal call/return sequences of procedures in a program; a context-insensitive algorithm does not.
However, experiments show that, in many cases,
Steensgaard's algorithm computes very imprecise alias information [13, 18]. This
imprecision can adversely impact the performance of whole-program analysis.
Whole-program analysis can be affected by imprecise alias information in
two ways. First, imprecise alias information can decrease the precision of the
information provided by the whole-program analysis. Our preliminary experiments
show that the sizes of slices computed using alias information provided by
Steensgaard's algorithm can be almost ten percent larger than the sizes of slices
computed using more precise alias information provided by Landi and Ryder's
algorithm [11], a flow-sensitive, context-sensitive alias-analysis algorithm. Sec-
ond, imprecise alias information can greatly increase the cost of whole-program
analysis. Our empirical studies show that it can take a slicer five times longer
to compute a slice using alias information provided by Steensgaard's algorithm
than to compute the slice using alias information provided by Landi and Ryder's
algorithm; similar results are reported in [13]. These results indicate that the extra
time required to perform whole-program analysis with the less precise alias
information might exceed the time saved in alias analysis with Steensgaard's
algorithm.
One way to improve the efficiency of whole-program analysis tools is to use
more precise alias information. The most precise alias information is provided
by flow-sensitive, context-sensitive algorithms (e.g., [5, 11, 17]). The potentially
large number of iterations required by these algorithms, however, makes them
costly in both time and space. Thus, they are too expensive to be applicable
to large programs. Andersen's algorithm [1], another flow-insensitive, context-insensitive
alias-analysis algorithm, provides more precise alias information than
Steensgaard's algorithm with less cost than flow-sensitive, context-sensitive al-
gorithms. This algorithm, however, may require iteration among pointer-related
assignments 2 (O(n^3) time, where n is the program size), and requires that the
entire program be in memory during analysis. Thus, this algorithm may still be
too expensive in time and space to be applicable to large programs.
Our approach to providing alias information that is sufficiently precise for
use in whole-program analysis, while maintaining efficiency, is to incorporate
calling-context into a flow-insensitive alias-analysis algorithm to compute, for
each procedure, the alias information that holds at all statements in that proce-
dure. Our algorithm has three phases. In the first phase, the algorithm uses an
approach similar to Steensgaard's, to process pointer-related assignments and
to compute alias information for each procedure in a program. In the second
phase, the algorithm uses a bottom-up approach to propagate alias information
from the called procedures (callees) to the calling procedures (callers). Finally,
in the third phase, the algorithm uses a top-down approach to propagate alias
information from callers to callees. 3
2 A pointer-related assignment is a statement that can change the value of a pointer
variable.
3 Future work includes extending our algorithm to handle function pointers using an
approach similar to that discussed in Reference [2].
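A high-level Python sketch of this three-phase structure; the per-phase operations are passed in as callables because the paper defines them later, and the call graph is assumed to be given as a list of (caller, callee) edges in top-down topological order.

def fics(procedures, call_edges, build_local, prop_up, prop_down):
    # Phase 1: a separate points-to graph per procedure.
    graphs = {p: build_local(p) for p in procedures}
    # Phase 2: bottom-up propagation from callees to callers.
    for caller, callee in reversed(call_edges):
        prop_up(graphs[callee], graphs[caller])
    # Phase 3: top-down propagation from callers to callees.
    for caller, callee in call_edges:
        prop_down(graphs[caller], graphs[callee])
    return graphs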
This paper presents our alias-analysis algorithm. The main benefit of our
algorithm is that it efficiently computes an alias solution with high precision.
Like Steensgaard's algorithm, our algorithm efficiently provides safe alias information
by processing each pointer-related assignment only once. However, our
algorithm computes a separate points-to graph for each procedure. Because a
single procedure typically contains only a few pointer-related variables and as-
signments, our algorithm computes alias sets that are much smaller than those
computed by Steensgaard's algorithm, and provides alias information that is almost
as precise as that computed by Andersen's algorithm. Another benefit of
our algorithm is that it is modular. Because procedures in a strongly-connected
component of the call graph are in memory only thrice (once for each phase), our algorithm is more suitable than Andersen's for analyzing large programs.
This paper also presents a set of empirical studies in which we investigate (a)
the efficiency and precision of three flow-insensitive algorithms - our algorithm,
Steensgaard's algorithm, Andersen's algorithm - and Landi and Ryder's flow-sensitive
algorithm [11], and (b) the impact of the alias information provided by
these four algorithms on whole-program analysis. These studies show a number
of interesting results:
- For the programs we studied, our algorithm and Andersen's algorithm can
compute a solution that is close in precision to that computed by a flow- and
context-sensitive algorithm.
- For programs where Andersen's algorithm requires a large amount of time,
our algorithm can compute the alias information in time close to Steens-
gaard's algorithm; thus, it may scale up to large programs better than An-
dersen's algorithm.
- The alias information provided by our algorithm, Andersen's algorithm, and
Landi and Ryder's algorithm can greatly reduce the cost of constructing
system-dependence graphs and of performing data-flow based slicing.
- Our algorithm is almost as effective as Andersen's algorithm and Landi and
Ryder's algorithm in improving the performance of constructing system-
dependence graphs and of performing data-flow based slicing.
These results indicate that our algorithm can provide sufficiently precise alias
information for whole-program analysis in an efficient way. Thus, it may be the
most effective algorithm, among the four, for supporting whole-program analysis
on large programs.
Flow-Insensitive and Context-Insensitive Alias-Analysis
Algorithms
Flow-insensitive, context-insensitive alias-analysis algorithms compute alias information
that holds at every program point. These algorithms process pointer-
related assignments in a program in an arbitrary order and replace a call statement
with a set of assignments that represent the bindings of actual parameters
and formal parameters. The algorithms compute safe alias information (points-
to relations): for any pointer-related assignment, the set of locations pointed
(a) Example program (fragment): int *incr_ptr(int *ptr) { return ptr+1; } ... int input[10]; ...
(b), (c) Points-to graphs over incr_ptr, ptr, input[], buf1, buf2, p, q, r, and heap objects h_17, h_18.
Fig. 1. Example program (a), points-to graph using Steensgaard's algorithm (b),
points-to graph using Andersen's algorithm (c).
to by the left-hand side is a superset of the set of locations pointed to by the
right-hand side.
We can view both Steensgaard's algorithm and Andersen's algorithm as
building points-to graphs [14]. 4 Vertices in a points-to graph represent equivalence
classes of memory locations (i.e., variables and heap-allocated objects),
and edges represent points-to relations among the locations.
Steensgaard's algorithm forces all locations pointed to by a pointer to be in
the same equivalence class, and, when it processes a pointer-related assignment,
it forces the left-hand and right-hand sides of the assignment to point to the same
equivalence class. Using this method, when new pointer-related assignments are
processed, the points-to graph remains safe at a previously-processed pointer-
related assignment. This method lets Steensgaard's algorithm safely estimate
the alias information by processing each pointer-related assignment only once.
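A minimal Python sketch of this unification discipline using union-find; it handles only simple assignments and address-of statements, omits Steensgaard's full type rules, and all names are illustrative.

class Node:
    def __init__(self, name):
        self.name = name
        self.parent = self   # union-find parent
        self.pts = None      # the equivalence class this class points to

def find(n):
    while n.parent is not n:
        n.parent = n.parent.parent   # path halving
        n = n.parent
    return n

def points_to(n):
    r = find(n)
    if r.pts is None:
        r.pts = Node(r.name + "_tgt")   # fresh, initially empty target class
    return find(r.pts)

def unify(a, b):
    a, b = find(a), find(b)
    if a is b:
        return
    pa, pb = a.pts, b.pts
    b.parent = a                 # merge the two equivalence classes
    if pa is None:
        a.pts = pb
    elif pb is not None:
        unify(pa, pb)            # their targets must also become one class

def assign(lhs, rhs):            # "lhs = rhs": force both to point to one class
    unify(points_to(lhs), points_to(rhs))

def address_of(lhs, var):        # "lhs = &var": var joins lhs's target class
    unify(points_to(lhs), var)

Processing an assignment such as p = input with assign merges the class pointed to by input into the class pointed to by p, mirroring the merging described above.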
Figure
1(b) shows various stages in the construction of the points-to graph
for the example program of Figure 1(a) using Steensgaard's algorithm. The top
graph (labeled (b.1)) shows the points-to graph in its initial stage, where all
pointers, except input, point to empty equivalence classes. When Steensgaard's
4 A points-to graph is similar to an alias graph [3].
algorithm processes statement 6, it merges the equivalence class pointed to by
input with the equivalence class pointed to by p; the merged equivalence class
is illustrated by the dotted box. Steensgaard's algorithm processes statement
7 similarly; the merged equivalence class is illustrated by the dashed box. The
algorithm processes statements 10, 11, and 14 by simulating the bindings of
parameters and return values with the assignments shown in the solid boxes
in
Figure
1. The middle graph (labeled (b.2)) shows the points-to graph after
Steensgaard's algorithm has processed main().
To represent the objects returned by malloc(), when Steensgaard's algorithm processes statements 17 and 18, it uses h_<statement number>. The bottom
graph (labeled (b.3)) shows the points-to graph after Steensgaard's algorithm
processes the entire program. This graph illustrates that Steensgaard's
algorithm can introduce many spurious points-to relations.
Andersen's algorithm uses a vertex to represent one memory location. This
algorithm processes a pointer-related assignment by adding edges to force the
left-hand side to point to the locations in the points-to set of the right-hand
side. For example, when the algorithm processes statement 6, it adds an edge
to force p to point to input[]. Adding edges in this way, however, may cause
the alias information at a previously-processed pointer-related assignment S to
be unsafe - that is, the points-to set of S's left-hand side is not a superset of
the points-to set of S's right-hand side. To provide a safe solution, Andersen's
algorithm iterates over previously processed pointer-related assignments until
the points-to graph provides a safe alias solution.
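A correspondingly minimal sketch of Andersen-style inclusion analysis (again only an illustration, with constraints limited to address-of, copy, load, and store on scalar pointers) shows where the extra iteration comes from: the subset constraints are reapplied until no points-to set grows.

```python
# Minimal Andersen-style analysis: iterate inclusion constraints to a
# fixed point. `variables` must contain every pointer and every location.
def andersen(variables, addr_of, copies, loads, stores):
    pts = {v: set() for v in variables}
    for p, x in addr_of:                    # p = &x
        pts[p].add(x)
    changed = True
    while changed:
        changed = False
        def add(dst, new):
            nonlocal changed
            if not new <= pts[dst]:
                pts[dst] |= new
                changed = True
        for p, q in copies:                 # p = q
            add(p, pts[q])
        for p, q in loads:                  # p = *q
            for r in list(pts[q]):
                add(p, pts[r])
        for p, q in stores:                 # *p = q
            for r in list(pts[p]):
                add(r, pts[q])
    return pts
```

Processing a store can invalidate the solution at an earlier copy, which is exactly the kind of reprocessing illustrated by statements 17, 7, 11, and 14 in the example.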
Figure 1(c) shows various stages in the construction of the points-to graph using Andersen's algorithm for the example program. The top graph (labeled
(c.1)) shows the points-to graph constructed by Andersen's algorithm after it
processes main(). When the algorithm processes statements 10, 11, and 14,
it simulates the bindings of the parameters using the assignments shown in
the solid boxes. The middle graph (labeled (c.2)) shows the points-to graph after Andersen's algorithm processes statement 17. The algorithm forces h_17
to point to buf1, which causes the alias information to be unsafe at statement
7. To provide a safe solution, Andersen's algorithm processes statement 7 again,
which subsequently requires statements 11 and 14 to be reprocessed. The bottom
graph (labeled (c.3)) shows the complete solution. This graph illustrates that
Andersen's algorithm can compute smaller points-to sets than Steensgaard's
algorithm for some pointer variables. However, Andersen's algorithm requires
more steps than Steensgaard's algorithm.
3 A Flow-Insensitive, Context-Sensitive Points-To Analysis Algorithm
Our flow-insensitive, context-sensitive points-to analysis algorithm (FICS) computes
separate alias information for each procedure in a program. In this section,
we first present some definitions that we use to discuss our algorithm. We next
give an overview of the algorithm and then discuss the details of the algorithm.
Fig. 2. Points-to graphs constructed by FICS algorithm. (The figure shows, for each of main(), init(), and incr_ptr(), the graph after Phase 1, after Phase 2, and after Phase 3, together with the graph for the globals; the graphs relate input[], buf1, buf2, p, q, r, ptr, and incr_ptr.)
3.1 Definitions
We refer to a memory location in a program by an object name [11], which consists of a variable and a possibly empty sequence of dereferences and field accesses. We say that an object name N_1 is extended from another object name N_2 if N_1 can be constructed by applying a possibly empty sequence of dereferences and field accesses ω to N_2; in this case, we denote N_1 as E_ω<N_2>. If N is a formal parameter and a is the object name of the actual parameter that is bound to N at call site c, we define a function A_c such that A_c(E_ω<N>) returns object name E_ω<a>. If N is a global, A_c(E_ω<N>) returns E_ω<N>.
For example, suppose that p is a pointer that points to a struct with field a (in the C language). Then E_*<p> is *p, E_*<*p> is **p, and E_{*.a}<p> is (*p).a. For another example, if p is a formal parameter to function F, and *q is an actual parameter bound to p at call site c to F, then A_c((*p).a) returns (**q).a.
We extend points-to graphs to represent structure variables. A field access
edge, labeled with a field name, connects a vertex representing a structure to a
vertex representing a field of the structure. A points-to edge, labeled with "*",
represents a points-to relation. In such a points-to graph, labels are unique among
the edges leaving a vertex. Given an object name N, FICS can find an access path P<N,G> in a points-to graph G: first, FICS locates or creates vertex n_0 in G to which N's variable corresponds; then, FICS locates or creates a sequence of vertices n_1, n_2, ..., n_k such that p = (n_0, n_1, ..., n_k) is a path in G and the labels of the edges in p match the sequence of dereferences and field accesses in N. We refer to n_k, the end vertex of P<N,G>, as the associated vertex of N in G, and denote n_k as V<N,G>. Note that the set of memory locations associated with V<N,G> is the set of memory locations that are aliased to N.
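These definitions amount to simple operations on a labeled graph. The sketch below (all names hypothetical) encodes an object name as a variable plus a tuple of dereference and field-access labels, implements E_ω and A_c, and finds or creates the access path P<N,G> so as to return the associated vertex V<N,G>.

```python
# Object names, E_w, A_c, and access-path lookup over a labeled graph.
def extend(name, ops):            # E_w<N>: append dereferences/field accesses
    var, seq = name
    return (var, seq + tuple(ops))

def bind_actual(formal_name, actual):
    # A_c: rewrite a formal-parameter object name in terms of the actual.
    _, seq = formal_name
    a_var, a_seq = actual
    return (a_var, a_seq + seq)

class Graph:
    def __init__(self):
        self.succ = {}            # vertex -> {edge label: successor vertex}
        self.fresh = 0
    def vertex(self, name):       # V<N,G>: end vertex of the access path
        var, seq = name
        v = ('var', var)
        for label in seq:
            edges = self.succ.setdefault(v, {})
            if label not in edges:          # create missing path vertices
                self.fresh += 1
                edges[label] = ('tmp', self.fresh)
            v = edges[label]
        return v

# Example: with p bound to *q, A_c((*p).a) becomes (**q).a:
# bind_actual(('p', ('*', '.a')), ('q', ('*',))) == ('q', ('*', '*', '.a'))
```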
3.2 Overview
FICS computes separate alias information for each procedure using points-to graphs. FICS first computes a points-to graph G_P for a procedure P by processing each pointer-related assignment in P using an approach similar to Steensgaard's algorithm. If none of the pointer variables that appears in P is a global variable or a formal parameter, and none of the pointer variables is used as an actual parameter, then G_P safely estimates the alias information for P. However, if some pointer variables that appear in P are global variables or formal parameters, or if some pointer variables are used as actual parameters, then the pointer-related assignments in other procedures can also introduce aliases related to these variables; G_P must be further processed to capture these aliases.
There are three cases in which pointer-related assignments in other procedures
can introduce aliases related to a pointer variable that appears in P. In the first case, a pointer-related assignment in another procedure forces E_ω<g>, where g is a global variable that appears in P, to be aliased to a memory location. Because FICS does not consider the order of the statements, it must assume that such an alias pair holds throughout the program. Thus, FICS must consider such an alias pair in P. For example, in Figure 1(a), statement 17 forces *buf1 to be aliased to h_17; this alias pair must be propagated to main() because main() uses buf1. FICS captures this type of alias pair in G_P in two steps: (1) it computes a global points-to graph, G_glob, to estimate the memory locations that are aliased to each possible global object name in the program; (2) it updates G_P using the alias information represented by G_glob.
In the second case, an assignment in a procedure called by P forces E_ω1<f_1> to be aliased to E_ω2<f_2>, where f_1 is a formal parameter and f_2 is either a formal parameter or a global variable (the return value of a function is viewed as a formal parameter). Alias pair (E_ω1<f_1>, E_ω2<f_2>) can be propagated from the called procedure to P and can force A_c(E_ω1<f_1>) to be aliased to A_c(E_ω2<f_2>) at call site c. For example, in Figure 1(a), statement 21 in function incr_ptr() forces *incr_ptr to be aliased to *ptr. When this alias pair is propagated back to main(), it forces *r to be aliased to *q. FICS maps the alias pairs related to the formal parameters to the alias pairs related to the actual parameters and updates G_P with the alias pairs of the actual parameters.
In the third case, an assignment in a procedure that calls P forces a location l to be aliased to E_ω<a>, where a is an actual parameter bound to f at a call site c to P. Alias pair (l, E_ω<a>) can be propagated into P and forces E_ω<f> to be aliased to l. For example, statement 6 forces (*p, input[]) to be an alias pair in main() of Figure 1(a). This alias pair is propagated into incr_ptr() at statement 10, and forces (*ptr, input[]) to be an alias pair. FICS propagates this type of alias pair from the calling procedure to P and updates G_P.
FICS has three phases: Phase 1 processes the pointer-related assignments
in each procedure and initially builds the points-to graph for the procedure;
Phase 2 and Phase 3 handle the three cases discussed above. Phase 2 propagates
alias information from the called procedures to the calling procedures, and also
builds the points-to graph for the global variables using the alias information
available so far for a procedure. Phase 2 processes the procedures in a reverse
topological (bottom-up) order on the strongly-connected components of the call
graph. Within a strongly-connected component, Phase 2 iterates over the procedures
until the points-to graphs for the procedures stabilize. Phase 3 propagates
alias information from the points-to graph for global variables to each proce-
dure. Phase 3 also propagates alias information from the calling procedures to
the called procedures. Phase 3 processes the procedures in a topological (top-
down) order on the strongly-connected components of the call graph. Phase 3
iterates over procedures in a component until the points-to graphs for the procedures
stabilize. Because FICS propagates information from called procedures
to calling procedures (Phase 2) before it propagates information from calling
procedures to called procedures (Phase 3), it will never propagate information
through invalid call/return sequences. Therefore, FICS is context-sensitive.
The bottom graphs in Figure 2 depict the points-to graphs computed by FICS
for the example program of Figure 1. The graphs show that, using FICS, variables
can be divided into equivalence classes differently in the points-to graphs of
different procedures. For example, in incr_ptr(), h_17, h_18, and input[] are in one equivalence class. However, in main(), input[] is in a different equivalence class than h_17 and h_18. Because FICS creates separate points-to graphs for main(), init(), and incr_ptr(), it computes a more precise alias solution than Steensgaard's algorithm for the example program. The graphs also show that FICS computes a smaller points-to set for p and q than Andersen's algorithm because it considers calling context. In the solution computed by Andersen's algorithm, p must point to the locations pointed to by incr_ptr under any calling context; in the solution computed by FICS, p points only to the locations pointed to by incr_ptr when incr_ptr() is invoked at statement 10. Under such a calling context, incr_ptr points only to input[].
3.3 Algorithm Description
Figure 3 shows FICS, which inputs P, the program to be analyzed, and outputs
L, a list of points-to graphs, one for each procedure and one for the global
variables.
Phase 1: Create Points-To Graphs for Individual Procedures. In the
first phase (lines 1-7), FICS processes the pointer-related assignments in each
procedure P i in P to compute the points-to graph GP i . FICS first finds or creates
GP i i for each pointer-related assignment
rhs. Then, the algorithm uses Merge(), a variant of the "join" operation
in Steensgaard's algorithm, to merge v 1 and v 2 into one vertex. Merge() also
merges the successors of v 1 and v 2 properly so that the labels are unique among
the edges leaving the new vertex. In this phase, FICS ignores all call sites except
those call sites to memory-allocation functions; for such call sites, the algorithm
uses h hstatement numberi to represent the objects returned by these functions.
Finally, FICS adds P i to W 1 and to W 2 , and adds GP i
to L.
algorithm FICS
input P: program to be analyzed
output L: a list of points-to graphs, one for each procedure, one for global variables
declare G_Pi: points-to graph for procedure P_i
W1: list of procedures, sorted reverse-topologically on the strongly-connected components of the call graph
W2: list of procedures, sorted topologically on the strongly-connected components of the call graph
begin FICS
1. foreach procedure P_i in P do /* phase 1 */
2.   foreach pointer-related assignment lhs = rhs do
3.     find or create v1 for lhs, v2 for rhs in G_Pi
4.     Merge(G_Pi, v1, v2)
5.   endfor
6.   add P_i to W1 and W2; add G_Pi to L
7. endfor
8. while W1 ≠ ∅ do /* phase 2 */
9.   remove procedure P_i from head of W1
10.  foreach call site c to P_j in P_i do
11.    Bind(actuals_c, G_Pi, G_Pj)
12.  endfor
13.  BindGlobal(globals(G_Pi), G_glob, G_Pi)
14.  BindLoc(globals(G_Pi), G_Pi, G_glob)
15.  if G_Pi is updated then
16.    foreach P_i's caller P_k do
17.      if P_k not in W1 then Add P_k to W1 endif
18.    endfor
19.  endif
20. endwhile
21. while W2 ≠ ∅ do /* phase 3 */
22.  remove procedure P_j from head of W2
23.  BindLoc(globals(G_Pj), G_glob, G_Pj)
24.  foreach call site c from P_i to P_j do
25.    BindLoc(actuals_c, G_Pi, G_Pj)
26.  endfor
27.  if G_Pj is updated then
28.    foreach P_j's callee P_l do
29.      if P_l not in W2 then Add P_l to W2 endif
30.    endfor
31.  endif
32. endwhile
Fig. 3. FICS: Flow-Insensitive, Context-Sensitive alias-analysis algorithm.
The points-to graphs at the top of Figure 2 are constructed by FICS, in the first phase, for main() (left), init() (middle), and incr_ptr() (right) of the example program. Note that the points-to relations introduced by init(), such as the points-to relation between buf1 and h_17, are not yet represented in main()'s points-to graph. In the following two phases, FICS gathers alias information from both callees and callers of P_i to further build G_Pi.
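A minimal sketch of the Merge() operation just described, written against the Graph sketch from Section 3.1 (the union-find helper uf is an assumed utility with find(v) and union(v1, v2) returning a representative):

```python
# Merge two vertices; successors reached by the same label are merged
# recursively so that outgoing edge labels stay unique (a variant of
# Steensgaard's "join").
def merge(graph, uf, v1, v2):
    v1, v2 = uf.find(v1), uf.find(v2)
    if v1 == v2:
        return v1
    v = uf.union(v1, v2)
    e1 = graph.succ.pop(v1, {})
    e2 = graph.succ.pop(v2, {})
    for label, w2 in e2.items():
        if label in e1:                   # same label: merge targets too
            e1[label] = merge(graph, uf, e1[label], w2)
        else:
            e1[label] = w2
    graph.succ[v] = e1
    return v
```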
Phase 2: Compute Aliases Introduced at Callsites and Create Global
Points-to Graph. In the second phase (lines 8-20), for each procedure P_i, FICS computes the aliases introduced at P_i's call sites. For each call site c to procedure P_j, FICS calls Bind() to find alias pairs (E_ω1<f_1>, E_ω2<f_2>), where f_1 and f_2 are P_j's formal pointer parameters, using a depth-first search on G_Pj. The search begins at the vertices associated with P_j's formal parameters of pointer type, looking for possible pairs of P<E_ω1<f_1>, G_Pj> and P<E_ω2<f_2>, G_Pj> that end at the same vertex. This implies that E_ω1<f_1> is aliased to E_ω2<f_2>. Bind() maps this type of alias pair back to P_i and captures the alias pairs in G_Pi by merging the end vertices of P<A_c(E_ω1<f_1>), G_Pi> and P<A_c(E_ω2<f_2>), G_Pi> in G_Pi. For example, FICS calls Bind() to process the call site at statement 14 in Figure 1. Bind() finds alias pair (*ptr, *incr_ptr) in G_incr_ptr. Then, it substitutes ptr with q and incr_ptr with r, creates an alias pair (*q, *r), and merges the end vertices of P<*q, G_main> and P<*r, G_main>. Bind() also searches for P<E_ω1<f>, G_Pj> and P<E_ω2<g>, G_Pj>, where f is a formal pointer parameter and g is a global variable, that end at the same vertex. Similarly, Bind() merges the end vertices of P<A_c(E_ω1<f>), G_Pi> and P<E_ω2<g>, G_Pi> in G_Pi.
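The following sketch conveys only the core of the Bind() idea under strong simplifications (globals are ignored and access paths are cut off at a fixed depth, so recursive structures are not handled as in the real algorithm): access paths are enumerated from the callee's formal-parameter roots, grouped by the vertex at which they end, and the corresponding actual-parameter paths are merged in the caller's graph. It reuses extend(), Graph, and merge() from the sketches above.

```python
def paths_from(graph, root, depth=3):     # bounded access-path enumeration
    frontier, out = [((), root)], [((), root)]
    for _ in range(depth):
        frontier = [(seq + (label,), w)
                    for seq, v in frontier
                    for label, w in graph.succ.get(v, {}).items()]
        out += frontier
    return out

def bind(callee_g, caller_g, caller_uf, bindings):
    # bindings: formal parameter -> object name of the actual (per A_c)
    by_end = {}
    for formal, actual in bindings.items():
        root = callee_g.vertex((formal, ()))
        for seq, v in paths_from(callee_g, root):
            by_end.setdefault(v, []).append(extend(actual, seq))
    for names in by_end.values():         # paths ending at one vertex alias
        for other in names[1:]:
            merge(caller_g, caller_uf,
                  caller_g.vertex(names[0]), caller_g.vertex(other))
```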
In this phase, FICS also calls BindGlobal() to compute the global points-to graph G_glob with the alias information of P_i. BindGlobal() finds alias pairs (E_ω1<g_1>, E_ω2<g_2>), where g_1 and g_2 are global variables, using a depth-first search in G_Pi. The search begins at the associated vertices of global variables in G_Pi and looks for pairs of access paths P<E_ω1<g_1>, G_Pi> and P<E_ω2<g_2>, G_Pi> that end at one vertex. BindGlobal() then merges the end vertices of P<E_ω1<g_1>, G_glob> and P<E_ω2<g_2>, G_glob>. For example, when FICS processes main() in this phase, it calls BindGlobal() to search G_main and finds that P<*buf1, G_main> and P<*buf2, G_main> end at the same vertex. Thus, FICS merges V<*buf1, G_glob> and V<*buf2, G_glob>.
FICS also computes the memory locations that are aliased to E_ω<g>, where g is a global. If a location l is in the equivalence class represented by V<E_ω<g>, G_Pi>, then (l, E_ω<g>) is an alias pair. FICS calls BindLoc() to look for V<E_ω<g>, G_Pi> using a depth-first search. For each location l associated with V<E_ω<g>, G_Pi>, BindLoc() merges V<l, G_glob> with V<E_ω<g>, G_glob> to capture the alias pair (l, E_ω<g>). For example, when FICS processes init() in this phase, it merges V<h_17, G_glob> with V<*buf1, G_glob> because h_17 is associated with V<*buf1, G_init>. After this phase, G_glob is complete.
Phase 3: Compute Aliases Introduced by the Calling Environment. In the third phase (lines 21-32), FICS computes the sets of locations represented by the vertices in G_Pj and completes the computation of G_Pj. FICS first computes the locations for vertices in G_Pj from G_glob. Let g be a global variable that appears in G_Pj. FICS calls BindLoc() to look for V<E_ω<g>, G_Pj> using a depth-first search. BindLoc() then copies the memory locations from V<E_ω<g>, G_glob> to V<E_ω<g>, G_Pj>. For example, when FICS processes main() in the example of Figure 1, it copies h_17 and h_18 from G_glob into G_main.
FICS also computes the locations for vertices in G_Pj from G_Pi, given that P_i calls P_j at a call site C. Suppose a is bound to formal parameter f at C. FICS calls BindLoc() to copy the locations from V<E_ω<a>, G_Pi> to V<E_ω<f>, G_Pj> to capture the fact that the aliased locations of E_ω<a> are also aliased to E_ω<f>. For example, FICS copies input[] from V<*p, G_main> to V<*ptr, G_incr_ptr> because p is bound to ptr at statement 11. After this phase, the set of memory locations represented by each vertex is complete.
Complexity of the FICS Algorithm. 5 Theoretically, it is possible to construct a program P that has O(2^n) distinguishable locations [15], where n is the size of P. This makes any alias-analysis algorithm discussed in this paper exponential in time and space in the size of P. In practice, however, the total number of distinguishable locations in P is O(n), and a structure in P typically has a limited number of fields.
Let p be the number of procedures in P and S be the worst-case actual size of the points-to graph computed for a procedure. The space complexity of FICS is O(p·S + n). In the absence of recursion, each procedure P is processed once in each phase. Thus, Bind(), BindGlobal(), and BindLoc() are invoked O(NumOfCall) times. In the presence of recursion, a single change in G_P might require one propagation to each of P's callers and one propagation to each of P's callees. G_P changes O(S) times; thus, Bind(), BindGlobal(), and BindLoc() are invoked O((NumOfCall + p)·S) times. When the points-to graph is implemented with a fast find/union structure, each invocation of Bind(), BindGlobal(), and BindLoc() requires O(S) "find" operations on a fast find/union structure of size O(p·S). Let N be NumOfCall in the absence of recursion and (NumOfCall + p)·S in the presence of recursion. The time complexity of FICS is then O((N·S + p·S)·α(N·S + p·S)), where α is the inverse Ackermann function. In practice, we can expect NumOfCall·S to be O(n). Thus, we can expect to run FICS in time almost linear in the size of the program in practice.
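The "fast find/union structure" referred to above is a standard union-find with union by rank and path compression; a minimal version is sketched below. Its amortized cost per operation is the source of the inverse Ackermann factor α in the bound.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
    def find(self, x):                    # with path compression
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):                # union by rank
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return ra
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return ra
```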
4 Empirical Studies
To investigate the efficiency and precision of FICS and the impact on whole-program
analysis of alias information of various precision levels, we performed
several studies in which we compared the algorithm with Steensgaard's algorithm
(ST) [16], Andersen's algorithm (AND) [1], and Landi and Ryder's algorithm
(LR) [11]. We used the PROLANGS Analysis Framework (PAF) [6] to
implement, with points-to graphs, FICS, Steensgaard's algorithm, and Ander-
sen's algorithm. We used the implementation of Landi and Ryder's algorithm
provided by PAF. None of these implementations handles function pointers or
setjump-longjump constructs.
The left-hand side of Table 1 gives information about a subset of the subject
programs used in the studies. 6 To allow the algorithms to capture the aliases
introduced by calls to library functions, we created a set of stubs that simulate
the effects of these functions on aliases. However, we did not create stubs for the
functions that would not introduce aliases at calls to these functions because,
in preliminary studies, we observed that using stubs forces Steensgaard's algorithm
to introduce many additional points-to relations. For example, for dixie,
5 Details of the complexity analysis for FICS can be found in [12].
6 T-W-MC and moria are not used in Studies 2 and 3 because the slicer requires more than 10 hours, the time limit we set for slicing, to collect the data.
Table 1. Subject programs and Time in seconds to compute alias solutions.

Program     Lines of Code  CFG Nodes  Procedures  PRAs  ST    FICS   AND     LR
loader      1132           819
ansitape    1596           1087       37          59    0.06  0.16   0.19    0.54
dixie       2100           1357       52          149   0.1   0.22   0.3     0.92
learn       1600           1596       50          129   0.08  0.2    0.35    1.47
smail
simulator   3558           2992       114         83    0.11  0.38   0.34    1.43
flex        6902           3762       93          231   0.14  0.42   0.53    410.28
space       11474          5601       137         732   0.62  1.77   4.64    113.39
bison       7893           6533       134         1170  0.33  0.78   1.27    -
larn        9966           11796      295         642   0.37  1.2    1.2     -
mpeg play   17263          11864      135         1782  0.92  3.18   4.92    -
espresso    12864          15351      306         2706  4.21  10.69  957.16  -
moria       25002          20316      482         785   2.34  3.68   521.82  -
using stubs for the functions that would not introduce aliases at calls, FICS computes an average ThruDeref Mod [18] of 29.45, whereas not using such stubs, it computes an average ThruDeref Mod of 22.10 (see Study 1).
Study 1. In Study 1, we compare the performance and precision of Steensgaard's algorithm, FICS, Andersen's algorithm, and Landi and Ryder's algorithm. For each subject program, we recorded the time required to compute the alias information (Time) and the average number of locations modified through a dereference (ThruDeref Mod).
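The precise definition of ThruDeref Mod is given in [18]; purely to fix ideas, one plausible way to tally such a metric from a points-to solution (an assumption, not necessarily the tool's actual computation) is to average the sizes of the points-to sets of pointers dereferenced on left-hand sides:

```python
# deref_writes: pointer variable dereferenced on the LHS of each statement
# pts: points-to solution mapping each pointer to its set of locations
def thru_deref_mod(deref_writes, pts):
    counts = [len(pts[p]) for p in deref_writes]
    return sum(counts) / len(counts) if counts else 0.0
```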
The right-hand side of Table 1 shows the running time of the algorithms
on the subject programs. 7 We collected these data by running our system on a
Sun Ultra 1 workstation with 128MB of physical memory and 256 MB virtual
memory. The table shows that, for our subject programs, the flow-insensitive
algorithms run significantly faster than Landi and Ryder's algorithm. The table
also shows that, for small programs, both FICS and Andersen's algorithm have
running time close to Steensgaard's algorithm. However, for the large programs
where Andersen's algorithm takes a large amount of time, FICS still runs in time
close to Steensgaard's algorithm. This result suggests that, for large programs,
FICS is more efficient in time than Andersen's algorithm.
Figure 4 shows the average ThruDeref Mod for the four algorithms.
The graph shows that, for many programs, Steensgaard's algorithm computes
very imprecise alias information, which might limit its applicability to other
data-flow analyses. The graph also shows that, for our subject programs, FICS
computes alias solutions of ThruDeref Mod that are close to that computed by
Andersen's algorithm. For smail and espresso, FICS computes a smaller ThruDeref Mod than Andersen's algorithm because these two programs have functions similar to incr_ptr() in Figure 1, on which Andersen's algorithm loses precision because it does not consider calling context.
7 Data on Landi and Ryder's algorithm are not available for seven programs because
the analysis required more than 10 hours, the limit we set for the analysis.
Fig. 4. ThruDeref Mod for the subject programs.
Table 2. Average number of summary edges (S) per call and average time (T) in seconds to compute the summary edges for a call in a system-dependence graph.

                        Raw Data                                % of Steensgaard
            ST           FICS         AND          LR           FICS        AND         LR
Program     S      T     S      T     S      T     S      T     S     T     S     T     S     T
loader      465    2.3   195    1.1   195    1.1   199    1.1   41.9  47.4  41.9  47.8  42.8  50.0
ansitape    880    2.6   533    1.7   431    1.2   400    1.2   60.6  66.7  49.0  48.5  45.5  45.8
dixie       821    2.5   314    1.5   227    1.1   206    1.0   38.3  58.4  27.7  42.5  25.2  40.0
learn       1578   7.6   209    1.3   173    1.1   159    1.0   13.3  17.1  11.0  14.1  10.1  12.9
unzip       1979   9.4   738    4.0   687    3.4   402    2.1   37.3  42.9  34.7  36.4  20.3  22.1
smail
simulator   979    2.0   736    1.2   736    1.2   535    1.0   75.1  62.4  75.1  62.6  54.6  50.3
flex        1156   12.1  620    8.0   579    7.5   550    7.4   53.6  66.1  50.1  61.9  47.6  61.3
space       7562   19.4  5639   10.4  5525   10.2  3839   7.5   74.6  53.4  73.1  52.7  50.8  38.5
bison       679    2.6   653    1.6   520    1.1   -            96.2  62.4  76.6  43.4  -
larn        36726  182.9 9582   38.2  8087   30.9  -            26.1  20.9  22.0  16.9  -
mpeg play   1306   32.2  946    23.9  940    21.8  -            72.4  74.2  72.0  67.7  -
The graph further shows that the solutions computed by FICS and Andersen's algorithm are very close to that
computed by Landi and Ryder's algorithm. This result suggests that, for many
data-flow problems, aliases obtained using FICS or Andersen's algorithm might
provide sufficient precision. Note that, because Landi and Ryder's algorithm uses
a k-limiting technique, which collapses the fields of a structure, to handle recursive
data structures [11], the points-to set for a pointer p computed by Landi and
Ryder's algorithm may contain locations that are not in the points-to set for p
computed by the three flow-insensitive algorithms. Thus, Andersen's algorithm
provides a smaller alias solution than Landi and Ryder's algorithm for loader
and space.
Study 2. In study 2, we investigate the impact of the alias information provided
by the four algorithms on the size and the cost of the construction of one
program representation - the system-dependence graph [10]. 8 We study the
average number of summary edges per call and the cost to compute these summary
edges in a system dependence graph. The summary edges are computed by
slicing through each procedure with respect to each memory location that can
be modified by the procedure using Harrold and Ci's slicer [7]. Thus, the time
required to compute the summary edges might differ from the time required to
compute the summary edges using other methods (e.g. [10]). Nevertheless, this
approach provides a fair way to compare the costs of computing summary edges
using alias information of different precision levels.
Table 2 shows the results of this study. We obtained these results on a Sun Ultra workstation with 640MB physical memory and 1GB virtual memory.
The table shows that using more precise alias information provided by FICS,
Andersen's algorithm, and Landi and Ryder's algorithm can effectively reduce
both the average number of summary edges per call and the time to compute the
summary edges in the construction of a system-dependence graph. 9 The table
further shows that, for our subject programs, using alias information provided
by FICS is almost as effective as using alias information provided by Andersen's
algorithm. Our algorithm is even more effective than Andersen's algorithm on
espresso because our algorithm computes a smaller points-to set for the pointer
variables. These results suggest that FICS is preferable to Andersen's algorithm
in building system-dependence graphs for large programs because FICS can run
significantly faster than Andersen's algorithm on large programs.
Study 3. In study 3, we investigate the impact of the alias information provided
by the four alias-analysis algorithms on the sizes of the slices and the cost of
computing the slices. We obtained the slices by running Harrold and Ci's slicer
on each slicing criterion of interest, without stored reuse information.
Table 3 shows the results of this study. We obtained these results on a Sun Ultra workstation with 640MB physical memory and 1GB virtual memory. The
table shows that, for all the subject programs, using more precise alias information
than that computed by Steensgaard's algorithm can significantly reduce the
time to compute a slice. The table also shows that, for four programs, using more precise alias information can significantly (> 10%) reduce the sizes of the slices.
These four programs illustrate exceptions to the conclusion drawn by Shapiro
and Horwitz [13] that the sizes of slices are hardly affected by the precision of the
alias information. Note that for five of the programs, the slicer computes larger
slices using alias information provided by Landi and Ryder's algorithm than
using that provided by FICS and Andersen's algorithm because the points-to
set computed by Landi and Ryder's algorithm for a pointer p contains memory
locations that are not in the points-to set computed by Steensgaard's, FICS, or
Andersen's algorithms for p. The table further shows that using alias information
8 A system-dependence graph can be used to slice a program; computing summary edges is the most expensive part of constructing such a graph.
9 Similar timing results were reported in [13], where Steensgaard's, Shapiro's, and Andersen's algorithms were compared.
Table 3. Average size of a slice (S) and average time (T) in seconds to compute a slice.

                          Raw Data                                      % of Steensgaard
             ST            FICS           AND           LR              FICS        AND         LR
Program      S      T      S      T       S      T      S      T        S     T     S     T     S     T
loader†      207    5.3    192    3.4     192    3.3    194    3.5      93.0  64.1  93.0  63.4  93.8  66.5
ansitape†    290    16.6   284    9.6     277    5.3    300    4.9      98.1  58.1  95.7  32.2  103.5 29.7
dixie†       705    25.5   704    8.3     704    5.9    699    5.5      99.9  32.7  99.9  23.1  99.2  21.7
learn†       442    25.4   442    17.6    442    11.4   440    16.8     100.0 69.0  99.9  44.9  99.5  66.0
unzip†       808    37.5   807    13.1    807    10.8   805    9.3      99.9  35.0  99.8  28.9  99.6  24.9
smail†       738    176.5  637    96.1    635    75.4   -               86.3  54.5  86.1  42.7  -
simulator†   1258   54.8   1087   22.5    1087   22.7   1151   24.2     86.4  41.1  86.4  41.3  91.5  44.2
flex‡        2025   220.2  2019   167.3   2019   153.8  2002   159.8    99.7  76.0  99.7  69.9  98.9  72.6
space‡       2234   1373.9 1936   573.5   1936   569.8  2086   467.3    86.7  41.7  86.7  41.5  93.4  34.0
bison‡       2394   94.9   2394   84.1    2338   41.0   -               100.0 88.6  97.7  43.2  -
larn‡        6626   3477.3 6602   1075.6  6592   902.4  -               99.6  30.9  99.5  26.0  -
mpeg play‡   5708   325.5  3935   134.6   3935   139.5  -               68.9  41.3  68.9  42.9  -
espresso‡    6297   8332.1 6291   3776.5  6264   5367.1 -               99.9  45.3  99.5  64.4  -

† Data are collected from all the slices of the program. ‡ Data are collected from one slice.
provided by FICS is almost as effective as using alias information provided by
Andersen's algorithm in computing slices. This further supports our conclusion
that FICS is preferable to Andersen's algorithm in whole-program analysis.
5 Related Work
Many data-flow analysis algorithms (e.g., [9, 10]), including FICS, use a two-phase
interprocedural analysis framework: in the first phase, information is propagated
from the called procedures to the calling procedures, and when a call
statement is encountered, summaries about the called procedure are used to
avoid propagating information into the called procedure; in the second phase,
information is propagated from the calling procedures to the called procedures.
Recently, Chatterjee et al. [4] use unknown initial values for parameters and
global variables so that the summaries about a procedure can be computed
for flow-sensitive alias analysis (Harrold and Rothermel used a similar approach in [8]). Then, they use the two-phase interprocedural
analysis framework to compute flow- and context-sensitive alias information.
Although their algorithm can improve the worst case complexity over Landi
and Ryder's algorithm [11] while computing alias information with the same
precision, it is still too costly in practice. Furthermore, because no comparison
between these two algorithms is reported, it is not known how much Chatterjee
et al.'s algorithm outperforms Landi and Ryder's algorithm.
There have been a number of attempts to design algorithms to compute alias
information with efficiency close to Steensgaard's algorithm and with precision
close to Andersen's algorithm. Shapiro and Horwitz [14] propose a method that
divides the program variables into k categories, and allows only variables belonging
to the same category to be in an equivalence class. Thus, similar to
FICS, this method computes smaller equivalence classes, and provides a smaller
points-to set for each pointer variable, than Steensgaard's algorithm. FICS differs
from this method, however, in that it uses an independent set of equivalence classes for each procedure. Thus, FICS can benefit from the fact that a procedure
references only a small set of program variables. FICS also differs from this
method in that FICS is context-sensitive (information is not propagated through
invalid call/return sequences). Finally, FICS differs from Shapiro and Horwitz's
algorithm in that FICS can handle the fields of structures, whereas in their al-
gorithm, assignments to a field of a structure are treated as assignments to the
structure. Because of this last difference, it is difficult to compare our experimental
results with theirs. However, from the experimental results reported
in Reference [14], it appears that, on average, FICS computes alias information
that is closer to Andersen's in precision than their algorithm.
6 Conclusions
We presented a flow-insensitive, context-sensitive points-to analysis algorithm
and conducted several empirical studies on more than 20 C programs to compare
our algorithm with other alias-analysis algorithms. The empirical results
show that, although Steensgaard's algorithm is fast, the alias information computed
by this algorithm is too imprecise to be used in whole-program analysis.
The empirical results further show that using more precise alias information provided
by our algorithm, Andersen's algorithm, and Landi and Ryder's algorithm
can effectively improve the precision and reduce the cost of whole-program anal-
ysis. However, the empirical results also show that Andersen's algorithm and
Landi and Ryder's algorithm could be too costly for analyzing large programs.
In contrast, the empirical results show that our algorithm can compute alias information
that is almost as precise as that computed by Andersen's algorithm,
with running time that is within six times that of Steensgaard's algorithm. Thus,
our algorithm may be more effective than the other algorithms in supporting
whole-program analysis.
Our future work includes performing additional empirical studies, especially
on large subject programs, to further compare our algorithm with other alias-
analysis algorithms. We will also conduct more studies to see how the imprecision
in the alias information computed by our algorithm can affect various whole-program
analyses.
Acknowledgements
This work was supported in part by grants from Microsoft, Inc. and by NSF
under NYI Award CCR-9696157 and ESS Award CCR-9707792 to Ohio State
University. We thank the anonymous reviewers who made many helpful suggestions
that improved the presentation of the paper.
--R
Program analysis and specialization for the C programming language.
Effective whole-program analysis in the presence of pointers.
Relevant context inference.
Programming Languages Research Group.
Separate computation of alias information for reuse.
Efficient computation of interprocedural definition-use chains.
Interprocedural slicing using dependence graphs.
A safe approximate algorithm for interprocedural pointer aliasing.
The effects of the precision of pointer analysis.
Fast and accurate flow-insensitive points-to analysis.
Efficient context-sensitive pointer analysis for C programs.
Program decomposition for pointer analysis: A step toward practical analyses.
--TR
Interprocedural slicing using dependence graphs
A safe approximate algorithm for interprocedural aliasing
Efficient computation of interprocedural definition-use chains
Context-sensitive interprocedural points-to analysis in the presence of function pointers
Efficient context-sensitive pointer analysis for C programs
Separate Computation of Alias Information for Reuse
Points-to analysis in almost linear time
Program decomposition for pointer aliasing
Fast and accurate flow-insensitive points-to analysis
Effective whole-program analysis in the presence of pointers
Relevant context inference
Reuse-driven interprocedural slicing
Flow-Insensitive Interprocedural Alias Analysis in the Presence of Pointers
The Effects of the Precision of Pointer Analysis
Points-to Analysis by Type Inference of Programs with Structures and Unions
Baowen Xu , Ju Qian , Xiaofang Zhang , Zhongqiang Wu , Lin Chen, A brief survey of program slicing, ACM SIGSOFT Software Engineering Notes, v.30 n.2, March 2005 | pointer analysis;aliasing analysis;points-to graph |
319249 | Classifying Facial Actions. | Abstract: The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. | Introduction
Facial expressions provide information not only about affective state, but also about cognitive activity,
temperament and personality, truthfulness, and psychopathology. The Facial Action Coding System
[23] is the leading method for measuring facial movement in behavioral science. FACS is currently performed
manually by highly trained human experts. Recent advances in image analysis open up the possibility of
automatic measurement of facial signals. An automated system would make facial expression measurement
more widely accessible as a tool for research and assessment in behavioral science and medicine. Such a
system would also have applications in human-computer interaction.
This paper presents a survey and comparison of recent techniques for facial expression recognition as applied
to automated FACS encoding. Recent approaches include measurement of facial motion through optic
flow [44, 64, 54, 26, 15, 43], and analysis of surface textures based on principal component analysis (PCA)
[17, 48, 40]. In addition, a number of methods that have been developed for representing faces for identity
recognition may also be powerful for expression analysis. These approaches are also included in the
present comparison. These include Gabor wavelets [20, 39], linear discriminant analysis [8], local feature
analysis [49], and independent component analysis [5, 4]. The techniques are compared on a single image
testbed. The analysis focuses on methods for face image representation (generation of feature vectors) and
the representations are compared using a common similarity measure and classifier.
1.1 The Facial Action Coding System
FACS was developed by Ekman and Friesen [23] in 1978 to objectively measure facial activity for behavioral
science investigations of the face. It provides an objective description of facial signals in terms of component
motions, or "facial actions." FACS was developed by determining from palpation, knowledge of anatomy, and
videotapes how the contraction of each of the facial muscles changed the appearance of the face (see Fig 1).
Ekman and Friesen defined Action Units, or AUs, to correspond to each independent motion of the face.
A trained human FACS coder decomposes an observed expression into the specific AUs that produced the
expression. FACS is coded from video, and the code provides precise specification of the dynamics (duration,
onset and offset time) of facial movement in addition to the morphology (the specific facial actions which
occur).
FACS continues to be the leading method for measuring facial expressions in behavioral science (see [25] for a
review). This system has been used, for example, to demonstrate differences between genuine and simulated
pain [19], differences between when people are telling the truth versus lying [22], and differences between the
facial signals of suicidal and non-suicidally depressed patients [34]. Although FACS is a promising approach,
a major impediment to its widespread use is the time required to both train human experts and to manually
score the video tape. It takes over 100 hours of training to achieve minimal competency on FACS, and
each minute of video tape takes approximately one hour to score. Automating FACS would make it more
widely accessible as a research tool. It would not only increase the speed of coding, it would also improve
the reliability, precision, and temporal resolution of facial measurement.
Figure 1: The Facial Action Coding System decomposes facial motion into component actions. The upper
facial muscles corresponding to action units 1, 2, 4, 6 and 7 are illustrated. Reprinted with permission from
Ekman & Friesen (1978).
Aspects of FACS have been incorporated into computer graphic systems for synthesizing facial expressions
(e.g. Toy Story [38]), and into facial muscle models for parameterizing facial movement [55, 44]. It is
important to distinguish FACS itself from facial muscle models that employ aspects of FACS. In particular,
there has been a tendency to confuse FACS with CANDIDE [55]. FACS is performed by human observers
using stop-motion video. Although there are clearly defined relationships between FACS and the underlying
facial muscles, FACS is an image-based method. Facial actions are defined by the image changes they produce
in video sequences of face images.
1.2 Automated facial expression measurement
Recent advances have been made in computer vision for automatic recognition of facial expressions in images.
The approaches that have been explored include analysis of facial motion [44, 64, 54, 26], measurements of
the shapes of facial features and their spatial arrangements [40, 66], holistic spatial pattern analysis using
techniques based on principal component analysis [17, 48, 40] graylevel pattern analysis using local spatial
filters [48, 66] and methods for relating face images to physical models of the facial skin and musculature [44]
[59, 42, 26]. The image analysis techniques in these systems are relevant to the present goals, but the systems
themselves are of limited use for behavioral science investigations of the face (see [31] for a discussion). Many
of these systems were designed with an objective of classifying facial expressions into a few basic categories
of emotion, such as happy, sad, or surprised. For basic science investigations of facial behavior itself, such
as studying the difference between genuine and simulated pain, an objective and detailed measure of facial
activity such as FACS is needed. Several computer vision systems explicitly parameterize facial movement
[64], and relate facial movements to the underlying facial musculature [44, 26], but it is not known whether
these descriptions are sufficient for describing the full range of facial behavior. For example, movement
parameters that were estimated from posed, prototypical expressions may not be appropriate descriptors
for spontaneous facial expressions, which differ from posed expressions in both their morphology and their
dynamics [31]. Furthermore, the relationship between these movement parameters and internal state has
not been investigated to the extent that FACS has been. There is over 20 years of behavioral data on the
relationships of facial action codes to emotion and to state variables such as deceit, interest, depression, and
psychopathology.
In addition to providing a tool for basic science research, a system that outputs facial action codes would
provide a strong basis for human-computer interaction systems. In natural interaction, prototypic expressions
of basic emotions occur relatively infrequently. Annoyance, for example, may be indicated by just a lowering
of the brows or tightening of the mouth. FACS provides a description of the basic elements of any facial
movement, analogous to phonemes in speech. Facial action codes also provide more detailed information
about facial behavior, including information about variations within an emotional category (e.g. vengeance
vs. resentment), variations in intensity (e.g. annoyance vs. fury), blends of two or more emotions (e.g.
happiness + disgust → smug), facial signals of deceit, signs of boredom or interest, and conversational
signals that provide emphasis to speech and information about syntax.
Explicit attempts to automate the facial action coding system involved tracking the positions of dots attached
to the face [35, 37]. A system that detects facial actions from image sequences without requiring application
of dots to the subjects face would have much broader utility. Efforts have recently turned to measuring facial
actions by image processing of video sequences [6, 4, 15]. Cohn and colleagues [15] achieved some success
for automated facial action coding by feature point tracking of a set of manually located points in the face
image (fiducial points). Here, we explore image representations based on full field analysis of the face image,
not just displacements of selected feature points. Techniques employing 2-D filters of image graylevels have
proven to be more effective than feature-based representations for identity recognition [13, 40] and expression
recognition [66]. In our previous work on automatic facial action coding [6, 3, 2] we found that full-field
representations of image textures and image motion provided more reliable indicators of facial actions than
task-specific feature measurements such as the increase of facial wrinkles in specific facial regions.
Several facial expression recognition systems have employed explicit physical models of the face [44, 59, 42,
26]. There are numerous factors that influence the motion of the skin following muscle contraction, and it
is difficult to accurately account for all of them in a deterministic model. Here, we take an image-based
approach in which facial action classes are learned directly from example image sequences of the actions,
bypassing the physical model. Image-based approaches have recently been advocated [11] and can successfully
accomplish tasks previously assumed to require mapping onto a physical model, such as expression synthesis,
face recognition across changes in pose, and synthesis across pose [12, 61].
2 Overview
This paper explores and compares approaches to face image representation. Section 3 presents the image
database used for the comparative study, and the image preprocessing techniques. We examined a number
of techniques that have been presented in the literature for processing images of faces, and compare their
performance on the task of facial action classification. These approaches were grouped into the following
classes: Analysis of facial motion, holistic spatial analysis, and local spatial analysis. Section 4 examines a
representation of facial motion based on optic flow. The technique is a correlation-based method with sub-pixel
accuracy [58]. Because local smoothing is commonly imposed on flow fields to clean up the signal, we
also examined the effects of local smoothing on classification of facial motion. Holistic spatial analysis is an
approach that employs image-dimensional graylevel texture filters. Many of these approaches employ data-driven
kernels learned from the statistics of the face image ensemble. These approaches include eigenfaces
[60, 17, 48, 40] and local feature analysis (LFA) [49], in which the kernels are learned through unsupervised
methods based on principal component analysis (PCA). Eigenface and LFA kernels are derived from the
second-order dependencies among the image pixels, whereas independent component analysis (ICA) learns
kernels from the high-order dependencies in addition to the second-order dependencies among the pixels
[5, 4, 2]. Another class of holistic kernel, Fisher's linear discriminants (FLD) [8], is learned through supervised
methods, and finds a class-specific linear projection of the images. Section 5 compares four representations
derived from holistic spatial analysis: Eigenfaces (PCA), LFA, ICA, and FLD. Local spatial analysis is an
approach in which spatially local kernels are employed to filter the images. These include predefined families
of kernels such as Gabor wavelets [20, 39, 66], and data-driven kernels learned from the statistics of small
image patches, such as local PCA [48]. Section 6 examines two representations based on the outputs of local
spatial filters: local PCA and a Gabor wavelet representation. The two local representations were further
compared via a hybrid representation, local PCA jets. Section 7 provides benchmarks for the performance
of the computer vision systems by measuring the ability of naive and expert human subjects to classify the
facial actions.
3 Image Database
We collected a database of image sequences of subjects performing specified facial actions. The full database
contains over 1100 sequences containing over 150 distinct actions, or action combinations, and 24 different
subjects. Each sequence contained six images, beginning with a neutral expression and ending with a high
magnitude muscle contraction. Trained FACS experts provided demonstrations and instructions to subjects
on how to perform each action. The selection of images was based on FACS coding of stop motion video. The
images were coded by three experienced FACS coders certified with high inter-coder reliability. The criterion
for acceptance of images was that the requested action and only the requested action was present. Sequences
containing rigid head motion detectable by a human observer were excluded. For this investigation, we used
data from 20 subjects and attempted to classify 12 actions: 6 upper face actions and 6 lower face actions.
See Figure 2 for a summary of the actions examined. There were a total of 111 action sequences, (9, 10, 18,
20, 5, 18) respectively of the six upper face actions, and (8, 4, 4, 5, 4, 6) of the six lower face actions. The
actions were divided into upper and lower-face categories because facial actions in the lower face have little
influence on facial motion in the upper face, and vice versa [23], which allowed us to treat them separately.
The face was located in the first frame in each sequence using the centers of the eyes and mouth.
Upper Face: 1 Inner brow raiser; 2 Outer brow raiser; 4 Brow lower; 5 Upper lid raiser; 6 Cheek raiser; 7 Lid tightener.
Lower Face: 9 Nose wrinkler; 10 Upper lip raiser; 16 Lower lip depressor; 20 Lip stretcher; (two further entries not legible in the source).
Figure 2: List of facial actions classified in this study. From left to right: example cropped image of the highest magnitude action, the δ image obtained by subtracting the neutral frame (the first image in the sequence), Action Unit number, and Action Unit name.
coordinates were obtained manually by a mouse click. Accurate image registration is critical to holistic
approaches such as principal component analysis. An alignment procedure similar to this one was found to
give the most accurate image registration during the FERET test [50]. The variance in the assigned feature
location using this procedure was 0.4 pixels in the 640 \Theta 480 pixel images. The coordinates from Frame 1
were used to register the subsequent frames in the sequence. We found in pilot investigations that rigid head
motion was smaller than the positional noise in the registration procedure. The three coordinates were used
to align the faces, rotate the eyes to horizontal, scale, and finally crop a window of 60 × 90 pixels containing
the region of interest (upper or lower face). The aspect ratios of the faces were warped so that the eye and
mouth centers coincided across all images. It has been found that identity recognition performance using
principal component based approaches is most successful when the images are warped to remove variations
in facial shape [11, 62].
To control the variation in lighting between frames of the same sequence and in different sequences, we applied
a logistic filter with parameters chosen to match the statistics of the grayscale levels of each sequence [46].
This procedure enhanced the contrast, performing a partial histogram equalization on the images.
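For concreteness, the following Python sketch shows one way such a sequence-matched logistic normalization could be implemented. The midpoint/slope parameterization from the sequence mean and standard deviation is our assumption; the exact parameter-fitting procedure of [46] is not reproduced here.

```python
import numpy as np

def logistic_contrast_normalize(frames):
    """Map grayscale values through a logistic function whose midpoint and
    slope are matched to the statistics of the whole sequence (assumed
    parameterization: centered on the mean, scaled by the std)."""
    stack = np.asarray(frames, dtype=np.float64)
    mu, sigma = stack.mean(), stack.std()
    # Logistic squashing enhances contrast around the sequence mean,
    # performing a partial histogram equalization.
    return 1.0 / (1.0 + np.exp(-(stack - mu) / sigma))

# Example: normalize a hypothetical sequence of six 90x60 frames.
frames = np.random.randint(0, 256, size=(6, 90, 60))
normalized = logistic_contrast_normalize(frames)
```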
4 Optic Flow Analysis
The majority of work on facial expression recognition has focused on facial motion analysis through optic
flow estimation. In an early exploration of facial expression recognition, Mase [44] used optic flow to estimate
the activity in a subset of the facial muscles. Essa and Pentland [26] extended this approach, using optic flow
to estimate activity in a detailed anatomical and physical model of the face. Motion estimates from optic
flow were refined by the physical model in a recursive estimation and control framework, and the estimated
forces were used to classify the facial expressions. Yacoob & Davis [64] bypassed the physical model, and
constructed a mid-level representation of facial motion, such as "right mouth corner raises," directly from
the optic flow. These mid-level representations were classified into one of six facial expressions using a
set of heuristic rules. Rosenblum, Yacoob & Davis [54] expanded this system to model the full temporal
profile of facial expressions with radial basis functions, from initiation, to apex, and relaxation. Cohn et
al. [15] are developing a system for automatic facial action classification based on feature-point tracking.
The displacements of 36 manually located feature points are estimated using optic flow, and classified using
discriminant functions.
Here, optic flow fields were estimated by employing a correlation-based technique developed by Singh [58].
This algorithm produces flow fields with sub-pixel accuracy and is comprised of two main components: 1)
local velocity extraction using luminance conservation constraints, and 2) local smoothing.
4.1 Local velocity extraction
We start with a sequence of three images at times t − 1, t, and t + 1, and use it to recover all the velocity
information available locally. For each pixel P(x, y) in the central image: 1) A small window W_p of
3 × 3 pixels is formed around P. 2) A search area W_s of 5 × 5 pixels is considered around location (x, y) in
the other two images. 3) The correlation between W_p and the corresponding window centered on each pixel
in W_s is computed, thus giving the matching strength, or response, at each pixel in the search window W_s.
At the end of this process W_s is covered by a response distribution R in which the response at each point
gives the frequency of occurrence, or likelihood, of the corresponding value of velocity. Employing a constant
temporal model, the response distributions for the two windows corresponding to t − 1 and t + 1 (R_{−1}
and R_{+1}) are combined by summation. Velocity is then estimated using the weighted least squares
estimate

    U_cc = ( Σ_{W_s} R(u, v) (u, v) ) / ( Σ_{W_s} R(u, v) ),                (1)

i.e., the mean of the candidate velocities weighted by the response distribution. Figure 3 shows an example
flow field obtained by this algorithm.
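A minimal Python sketch of the local correlation step follows. The conversion of a sum-of-squared-differences matching error into a response via an exponential, and the constant k, are our assumptions in the spirit of Singh's formulation rather than taken verbatim from [58].

```python
import numpy as np

def response_distribution(prev, cur, nxt, x, y, k=1.0):
    """Response distribution over the 5x5 search area around pixel (x, y)
    of the central frame; (x, y) must be at least 3 pixels from the border."""
    wp = cur[y-1:y+2, x-1:x+2].astype(float)            # 3x3 window around P
    R = np.zeros((5, 5))
    # Constant temporal model: a pixel moving with velocity (du, dv) sits at
    # (x-du, y-dv) in the previous frame and (x+du, y+dv) in the next; the
    # two response distributions are summed.
    for img, sign in ((prev, -1), (nxt, +1)):
        for dv in range(-2, 3):
            for du in range(-2, 3):
                yy, xx = y + sign * dv, x + sign * du
                ws = img[yy-1:yy+2, xx-1:xx+2].astype(float)
                ssd = np.sum((wp - ws) ** 2)
                R[dv + 2, du + 2] += np.exp(-k * ssd)   # matching strength
    return R

def weighted_velocity(R):
    """Response-weighted mean of the candidate velocities, as in (1)."""
    dv, du = np.mgrid[-2:3, -2:3]
    w = R / R.sum()
    return np.array([np.sum(w * du), np.sum(w * dv)])   # (u, v)
```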
4.2 Local smoothing
To refine the conservation-constraint estimate U_cc = (ū, v̄) obtained above, a local neighborhood estimate of
velocity, Ū, is defined as a weighted sum of the velocities in a neighborhood of P using a 5 × 5 Gaussian mask.
Figure 3: Optic flow for AU1, extracted from local velocity information computed by the correlation-based
technique, with no spatial smoothing.
An optimal estimate U of (u, v) should combine the two estimates U_cc and Ū, from the conservation and
local smoothness constraints respectively. Since Ū is a point in (u, v) space, its distance from U, weighted by
its covariance matrix S̄, represents the error in the smoothness constraint estimate. Similarly, the distance
between U and U_cc weighted by S_cc represents the error due to conservation constraints. Computing U then
amounts to simultaneously minimizing the two errors:

    E(U) = (U − U_cc)^T S_cc^{−1} (U − U_cc) + (U − Ū)^T S̄^{−1} (U − Ū).

Since we do not know the true velocity, this estimate must be computed iteratively. To update the field we
use the equations [58]:

    U^{k+1} = (S_cc^{−1} + (S̄^k)^{−1})^{−1} (S_cc^{−1} U_cc + (S̄^k)^{−1} Ū^k),   U^0 = U_cc,

where Ū^k is the estimate derived from smoothness constraints at step k. The iterations stop when the
update to U falls below a small threshold.
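The closed-form minimizer of the two Mahalanobis errors is the inverse-covariance-weighted combination of the two estimates, which the following sketch iterates. The `neighborhood_mean` callback (returning the 5 × 5 Gaussian-weighted neighborhood estimate and its covariance) is a hypothetical placeholder, not part of [58].

```python
import numpy as np

def fuse_estimates(U_cc, S_cc, U_bar, S_bar):
    """Minimize the sum of the two Mahalanobis errors; the minimizer is
    the inverse-covariance-weighted combination of the two estimates."""
    A = np.linalg.inv(S_cc) + np.linalg.inv(S_bar)
    b = np.linalg.inv(S_cc) @ U_cc + np.linalg.inv(S_bar) @ U_bar
    return np.linalg.solve(A, b)

def smooth_velocity(U_cc, S_cc, neighborhood_mean, tol=1e-4, max_iter=50):
    """Iterate: recompute the neighborhood estimate from the current value,
    fuse with the conservation estimate, stop when the update is small.
    `neighborhood_mean` is a placeholder returning (U_bar, S_bar)."""
    U = np.asarray(U_cc, dtype=float).copy()
    for _ in range(max_iter):
        U_bar, S_bar = neighborhood_mean(U)
        U_new = fuse_estimates(U_cc, S_cc, U_bar, S_bar)
        if np.linalg.norm(U_new - U) < tol:
            break
        U = U_new
    return U
```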
4.3 Classification procedure
The following classification procedures were used to test the efficacy of each representation in this comparison
for facial action recognition. Each image analysis algorithm produced a feature vector, f . We employed a
simple nearest neighbor classifier in which the similarity S of a training feature vector, f_t, and a novel feature
vector, f_n, was measured as the cosine of the angle between them: S = (f_t · f_n) / (‖f_t‖ ‖f_n‖).
Classification performances were also evaluated using Euclidean distance instead of cosine as the similarity
measure and template matching instead of nearest neighbor as the classifier, where the templates consisted
of the mean feature vector for the training images. The similarity measure and classifier that gave the best
performance is indicated for each technique.
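The classifier itself is a few lines of Python; this sketch assumes feature vectors stored as NumPy arrays.

```python
import numpy as np

def cosine_similarity(f_t, f_n):
    """S = cosine of the angle between a training and a novel feature vector."""
    return np.dot(f_t, f_n) / (np.linalg.norm(f_t) * np.linalg.norm(f_n))

def nearest_neighbor(train_feats, train_labels, f_n):
    """Assign the label of the most similar training vector."""
    sims = [cosine_similarity(f_t, f_n) for f_t in train_feats]
    return train_labels[int(np.argmax(sims))]
```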
The algorithms were trained and tested using leave-one-out cross-validation, also known as the jack-knife
procedure, which makes maximal use of the available data for training. In this procedure, the image representations
were calculated multiple times, each time using images from all but one subject for training,
and reserving one subject for testing. This procedure was repeated for each of the 20 subjects, and mean
classification accuracy was calculated across all of the test cases.
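A sketch of the leave-one-subject-out loop, reusing the nearest-neighbor classifier above; the `subjects` array holding one subject identifier per image is our assumed bookkeeping.

```python
import numpy as np

def leave_one_subject_out(features, labels, subjects, classify):
    """Train on all subjects but one, test on the held-out subject,
    and average accuracy across all held-out cases."""
    accs = []
    for s in np.unique(subjects):
        test = subjects == s
        train = ~test
        preds = [classify(features[train], labels[train], f)
                 for f in features[test]]
        accs.append(np.mean(np.array(preds) == labels[test]))
    return float(np.mean(accs))
```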
Table 1 presents classification performances for the medium magnitude facial actions, which occur in the
middle of each sequence. Performance was consistently highest for the medium magnitude actions. Flow
fields were calculated from frames 2, 3, and 4 of the image sequence, and the performance of the brightness-
based algorithms are presented for frame 4 of each sequence. A class assignment is considered "correct" if it
is consistent with the labels assigned by human experts during image collection. The consistency of human
experts with each other on this image set is indicated by the agreement rates also shown in Table 1.
4.4 Optic flow performance
Best performance for the optic flow approach was obtained using the cosine similarity measure and template
matching classifier. The correlation-based flow algorithm gave 85.6% correct classification performance.
Since optic flow is a noisy measure, many flow-based expression analysis systems employ regularization procedures
such as smoothing and quantizing. We found that spatial smoothing did not improve performance,
and instead degraded it to 53.1%. It appears that high spatial resolution optic flow is important for facial
action classification. In addition, the motion in facial expression sequences is nonrigid and can be highly discontinuous
due to the formation of wrinkles. Smoothing algorithms that are not sensitive to these boundaries
can be disadvantageous.
There are a variety of choices of flow algorithms, of which Singh's correlation-based algorithm is just one.
Also, it is possible that adding more data to the flow field estimate could improve performance. The results
obtained here, however, were comparable to the performance of other facial expression recognition systems
based on optic flow [64, 54]. Optic flow estimates can also be further refined, such as with a Kalman filter
in an estimation-and-control framework (e.g., [26]). The comparison here addresses direct, image-based
representations that do not incorporate a physical model. Sequences of flow fields can also be analyzed using
dynamical models such as HMMs or radial basis functions (e.g., [54]). Such dynamical models could
also be employed with texture-based representations. Here we compare all representations using the same
classifiers.
5 Holistic Analysis
A number of approaches to face image analysis employ data-driven kernels learned from the statistics of
the face image ensemble. Approaches such as Eigenfaces [60] employ principal component analysis, which is
an unsupervised learning method based on the second-order dependencies among the pixels. Second-order
dependencies are pixelwise covariances. Representations based on principal component analysis have been
applied successfully to recognizing facial identity [18, 60], classifying gender [17, 29], and recognizing facial
expressions [17, 48, 6].
Penev and Atick [49] recently developed a topographic representation based on second-order image dependencies
called local feature analysis (LFA). A representation based on LFA gave the highest performance
on the March 1995 FERET face recognition competition [51]. The LFA kernels are spatially local, but in
this paper we class this technique as holistic, since the image-dimensional kernels are derived from statistical
analysis over the whole image. Another holistic image representation that has recently been shown to be effective
for identity recognition is based on Fisher's Linear discriminants (FLD) [8]. FLD is a supervised learning
method that uses second-order statistics to find a class-specific linear projection of the images. Representations
such as PCA (eigenfaces), LFA, and FLD do not address high-order statistical dependencies in the
image. A representation based on independent component analysis (ICA) was recently developed which
is based on the high-order in addition to the second-order dependencies in the images [5, 4, 2]. The ICA
representation was found to be superior to the Eigenface (PCA) representation for classifying facial identity.
The holistic spatial analysis algorithms examined in this section each found a set of n-dimensional data-driven
image kernels, where n is the number of pixels in each image. The analysis was performed on the
difference (or δ) images (Figure 2), obtained by subtracting the first image in a sequence (neutral frame) from
all of the subsequent frames in each sequence. Advantages of difference images include robustness to changes
in illumination, removal of surface variations between subjects, and emphasis of the dynamic aspects of the
image sequence [46]. The kernels were derived from low, medium, and high magnitude actions. Holistic
kernels for the upper and lower-face subimages were calculated separately.
The methods in this section begin with a data matrix X in which the δ-images were stored as row vectors x_j,
and the columns had zero mean. In the following descriptions, n is the number of pixels in each image, N
is the number of training images and p is the number of principal components retained to build the final
representation.
5.1 Principal Component Analysis: "EigenActions"
This approach is based on [17] and [60], with the primary distinction being that we performed principal component
analysis on the dataset of difference images. The principal components were obtained by calculating
the eigenvectors of the pixelwise covariance matrix, S, of the δ-images, X. The eigenvectors were found
by decomposing S into the orthogonal matrix P and diagonal matrix D: S = P D P^T. Examples of the
eigenvectors are shown in Figure 4. The zero-mean δ-frames of each sequence were then projected onto the
first p eigenvectors in P, producing a vector of p coefficients for each image.

Figure 4: First 4 principal components of the difference images for the upper face actions (a), and lower face
actions (b). Components are ordered left to right, top to bottom.
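A compact NumPy sketch of this computation follows; a practical implementation would use the N × N trick to avoid the full pixelwise eigendecomposition, but the direct form is shown for clarity.

```python
import numpy as np

def eigenactions(X, p):
    """X: N x n matrix of zero-mean delta-images (rows). Returns the first
    p eigenvectors of the pixelwise covariance and the N x p projections."""
    S = np.cov(X, rowvar=False)          # n x n pixelwise covariance
    evals, P = np.linalg.eigh(S)         # S = P D P^T (ascending eigenvalues)
    order = np.argsort(evals)[::-1]      # sort by decreasing eigenvalue
    P = P[:, order[:p]]
    return P, X @ P                      # p coefficients per image
```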
Best performance with the holistic principal component representation, 79.3% correct, was obtained with the
leading principal components, using the Euclidean distance similarity measure and template matching
classifier. Previous studies (e.g., [8]) reported that discarding the first 1 to 3 components improved performance.
Here, discarding these components degraded performance.
5.2 Local Feature Analysis (LFA)
Local Feature Analysis (LFA) defines a set of topographic, local kernels that are optimally matched to the
second-order statistics of the input ensemble [49]. The kernels are derived from the principal component
axes, and consist of "sphering" the PCA coefficients to equalize their variance [1], followed by a rotation to
pixel space. We begin with the zero-mean matrix of δ-images, X, and calculate the principal component
eigenvectors P according to S = P D P^T. Penev and Atick [49] then define a set of kernels K as

    K = P V P^T, with V = D^{−1/2},

where the diagonal entries of D are the eigenvalues of S. The rows of K contain the kernels. The kernels
were found to have spatially local properties, and are "topographic" in the sense that they are indexed by
spatial location [49]. The kernel matrix K transforms X to the LFA output O = K X^T (see Figure 5). Note
that the matrix V is the inverse square root of the covariance matrix of the principal component coefficients.
This transform spheres the principal component coefficients (normalizes their output variance to unity) and
minimizes correlations in the LFA output. Another way to interpret the LFA output O is that it is the image
reconstruction using sphered PCA coefficients: O = P V (P^T X^T).
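The kernel construction reduces to a few matrix operations, sketched below; forming the full n × n kernel matrix is memory-hungry at real image sizes, so this is illustrative only.

```python
import numpy as np

def lfa_output(X, p):
    """LFA kernels K = P V P^T with V = D^(-1/2) (sphered PCA), applied to
    the zero-mean delta-images stored in the rows of X."""
    S = np.cov(X, rowvar=False)
    evals, P = np.linalg.eigh(S)
    order = np.argsort(evals)[::-1][:p]
    P, d = P[:, order], evals[order]
    V = np.diag(1.0 / np.sqrt(d))        # whitens the PCA coefficients
    K = P @ V @ P.T                      # rows are the topographic kernels
    return K, K @ X.T                    # LFA output O = K X^T
```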
5.2.1 Sparsification of LFA
LFA produces an n dimensional representation, where n is the number of pixels in the images. Since we have
outputs described by p ≪ n linearly independent variables, there are residual correlations in the output.
Figure 5: a. An original δ-image; b. its corresponding LFA output O(x); c. the first 155 filter locations
selected by the sparsification algorithm, superimposed on the mean upper face δ-image.
Penev & Atick presented an algorithm for reducing the dimensionality of the representation by choosing a
subset M of outputs that were as decorrelated as possible. The sparsification algorithm was an iterative
algorithm based on multiple linear regression. At each time step, the output point that was predicted most
poorly by multiple linear regression on the points in M was added to M. Due to the topographic property
of the kernels, selection of output points was equivalent to selection of kernels for the representation.
The methods in [49] addressed image representation but did not address recognition. The sparsification
algorithm in [49] selected a different set of kernels, M, for each image, which is problematic for recognition.
In order to make the representation amenable to recognition, we selected a single set M of kernels for all
images. At each time step, the kernel corresponding to the pixel with the largest mean reconstruction error
across all images was added to M.
At each step, the kernel added to M is chosen as the kernel corresponding to the location with the largest
mean reconstruction error across all images,

    arg max_x ⟨ ‖O(x) − O^rec(x)‖² ⟩,

where O^rec is a reconstruction of the complete output, O, using a linear predictor on the subset of the
outputs O generated from the kernels in M. The linear predictor is of the form O^rec = β^T O^M, where β
is the matrix of regression parameters and O^M is the subset of O corresponding to the points in M for all
N images. β is calculated from the normal equations:

    β = (O^M (O^M)^T)^{−1} O^M O^T.                                  (8)

Equation 8 can also be expressed in terms of the correlation matrix of the outputs, O, as in [49]. The
algorithm terminated once the desired number of kernels had been selected. Figure 5 shows the locations of
the points selected by the sparsification algorithm for the upper-face images. We evaluated classification
performance using the first i kernels selected by the sparsification algorithm.
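A sketch of the greedy selection loop follows. The initialization with the largest-variance output point is our assumption; [49] does not fix it here.

```python
import numpy as np

def sparsify(O, m):
    """Greedily pick m output points: at each step add the point whose
    output is predicted worst (mean squared error across images) by
    linear regression on the points already in M. O is n x N."""
    M = [int(np.argmax(np.mean(O ** 2, axis=1)))]   # assumed initialization
    while len(M) < m:
        OM = O[M, :]                                 # |M| x N
        beta, *_ = np.linalg.lstsq(OM.T, O.T, rcond=None)
        O_rec = (OM.T @ beta).T                      # reconstruction of O
        err = np.mean((O - O_rec) ** 2, axis=1)
        err[M] = -np.inf                             # don't re-pick members
        M.append(int(np.argmax(err)))
    return M
```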
The local feature analysis representation attained 81.1% correct classification performance. Best performance
was obtained using the first 155 kernels, the cosine similarity measure, and nearest neighbor classifier.
Classification performance using LFA was not significantly different from the performance using global PCA.
Although a face recognition algorithm related to LFA outperformed eigenfaces in the March 1995 FERET
competition [51], our results suggest that an aspect of the algorithm other than the LFA representation
accounts for the difference in performance. The exact algorithm used in the FERET test has not been
disclosed.
5.3 "FisherActions"
This approach is based on the original work by Belhumeur and others [8] that showed that a class-specific
linear projection of a principal components representation of faces improved identity recognition performance.
The method is based on Fisher's linear discriminant (FLD) [28], which projects the images into a subspace
in which the classes are maximally separated. FLD assumes linear separability of the classes. For identity
recognition, the approach relied on the assumption that images of the same face under different viewing
conditions lie in an approximately linear subspace of the image space, an assumption which holds true for
changes in lighting if the face is modeled by a Lambertian surface [56, 32]. In our dataset, the lighting
conditions are fairly constant and most of the variation is suppressed by the logistic filter. The linear
assumption for facial expression classification is that the δ-images of a facial action across different faces lie
in a linear subspace.
Fisher's Linear Discriminant is a projection into a subspace that maximizes the between-class scatter while
minimizing the within-class scatter of the projected data. Let χ = {χ_1, χ_2, …, χ_c} be the set of all
data, divided into c classes. Each class χ_i is composed of a variable number of images x_k ∈ R^n. The
between-class scatter matrix S_B and the within-class scatter matrix S_W are defined as

    S_B = Σ_{i=1}^{c} |χ_i| (μ_i − μ)(μ_i − μ)^T,
    S_W = Σ_{i=1}^{c} Σ_{x_k ∈ χ_i} (x_k − μ_i)(x_k − μ_i)^T,

where μ_i is the mean image of class χ_i and μ is the mean of all data. W_opt projects R^n → R^{c−1} and satisfies

    W_opt = arg max_W |W^T S_B W| / |W^T S_W W|.                    (11)

The {w_i} are the solutions to the generalized eigenvalue problem S_B w_i = λ_i S_W w_i.
Following [8], the calculations are greatly simplified by first performing PCA on the total scatter matrix
to project the feature space to R^p. Denoting the PCA projection matrix W_pca, we project S_W and S_B:

    S̃_B = W_pca^T S_B W_pca and S̃_W = W_pca^T S_W W_pca.

The original FLD problem is thus reformulated as

    W_fld = arg max_W |W^T S̃_B W| / |W^T S̃_W W|,                   (13)

and the overall projection is W_opt = W_pca W_fld.
Figure 6: Projections of three lower-face action classes onto two dimensions. FLD projections
are slightly offset for visibility. FLD projected each class to a single point.
From Equations 11 and 13, W_fld and the {w′_i} can now be calculated by solving the reduced generalized
eigenvalue problem S̃_B w′_i = λ_i S̃_W w′_i; S̃_W is full-rank for p ≤ N − c.
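The PCA-then-FLD pipeline can be sketched as below, using SciPy's generalized symmetric eigensolver; this is our illustration rather than the implementation of [8].

```python
import numpy as np
from scipy.linalg import eigh

def fisher_actions(X, y, p):
    """PCA to p dims, then solve the generalized eigenproblem
    S_B w = lambda S_W w in the reduced space; keep c-1 discriminants.
    Requires p <= N - c so that the reduced S_W is full rank."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W_pca = Vt[:p].T                      # top-p PCA projection
    Z = (X - mu) @ W_pca
    SB = np.zeros((p, p)); SW = np.zeros((p, p))
    for c in classes:
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        # Class means are measured about the global mean, which is zero
        # after centering.
        SB += len(Zc) * np.outer(mc, mc)
        SW += (Zc - mc).T @ (Zc - mc)
    evals, W = eigh(SB, SW)               # generalized eigenvectors
    W_fld = W[:, np.argsort(evals)[::-1][:len(classes) - 1]]
    return W_pca @ W_fld                  # n -> c-1 projection
```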
Best performance was obtained by first using PCA to reduce the dimensionality
of the data. The data was then projected down to 5 dimensions via the projection matrix W_fld. Best
performance of 75.7% correct was obtained with the Euclidean distance similarity measure and template
matching classifier.
Clustering with FLD is compared to PCA in Figure 6. As an example, three lower face actions were projected
down to c − 1 = 2 dimensions using FLD and PCA. The FLD projection virtually eliminated within-class
scatter of the training set, and the exemplars of each class were projected to a single point. The three
actions in this example were 17, 18, and 9+25.
Contrary to the results obtained in [8], Fisher's Linear Discriminants did not improve classification over basic
PCA (eigenfaces), despite providing a much more compact representation of the data that optimized linear
discrimination. This suggests that the linear subspace assumption was violated more catastrophically for
our dataset than for the dataset in [8] which consisted of faces under different lighting conditions. Another
reason for the difference in performance may be due to the problem of generalization to novel subjects. The
FLD method achieved the best performance on the training data (close to 100%) but generalized poorly
to new individuals. This is consistent with other reports of poor generalization to novel subjects [14] (also
H. Wechsler, personal communication). Good performance with FLD has only been obtained when other
images of the test subject were included in the training set. The low dimensionality may provide insufficient
degrees of freedom for linear discrimination between classes of face images [14]. Class discriminations that are
approximately linear in high dimensions may not be linear when projected down to as few as 5 dimensions.
5.4 Independent Component Analysis
Representations such as eigenfaces, LFA, and FLD are based on the second-order dependencies of the image
set, the pixelwise covariances, but are insensitive to the high-order dependencies of the image set. High-order
Figure 7: Image synthesis model for the ICA representation: unknown sources S pass through an unknown
mixing process A to produce the images; learned weights W recover the separated sources.
dependencies in an image include nonlinear relationships among the pixel grayvalues such as edges, in which
there is phase alignment across multiple spatial scales, and elements of shape and curvature. In a task such as
facial expression analysis, much of the relevant information may be contained in the high-order relationships
among the image pixels. Independent component analysis (ICA) is a generalization of PCA which learns
the high-order moments of the data in addition to the second-order moments. In a direct comparison, a face
representation based on ICA outperformed PCA for identity recognition. The methods in this section are
based on [5, 4, 2].
The independent component representation was obtained by performing "blind separation" on the set of face
images [5, 4, 2]. In the image synthesis model of Figure 7, the δ-images in the rows of X are assumed to
be a linear mixture of an unknown set of statistically independent source images S, where A is an unknown
mixing matrix. The sources are recovered by a learned unmixing matrix W , which approximates A \Gamma1 and
produces statistically independent outputs, U .
The ICA unmixing matrix W was found using an unsupervised learning algorithm derived from the principle
of optimal information transfer between neurons [9, 10]. The algorithm maximizes the mutual information
between the input and the output of a nonlinear transfer function g. A discussion of how information
maximization leads to independent outputs can be found in [47, 9, 10]. Let x be a column of
the image matrix X^T, u = W x, and y = g(u). The update rule for the weight matrix, W, is given by

    ΔW = (I + (1 − 2y) u^T) W.                                      (14)

We employed the logistic transfer function, g(u) = 1/(1 + e^{−u}). Convergence is greatly
speeded by including a "sphering" step prior to learning [10], in which the zero-mean dataset X is passed
through the whitening filter W_z = 2 ⟨x x^T⟩^{−1/2}. This removes both the first- and the second-order
dependencies from the data. The full transform was therefore W = W_I W_z, where W_I is the weight matrix
obtained by information maximization in Equation 14.

Figure 8: Sample ICA basis images.
The projection of the image set onto each weight vector in W produced an image of the statistical dependencies
that each weight vector learned. These images are the rows of the output matrix U , and examples
are shown in Figure 8. The rows of U are the independent components of the image set, and they provided
a basis set for the expression images. The ICA representation consisted of the coefficients, a, for the linear
combination of basis images in U that comprised each face image in X . These coefficients were obtained from
the rows of the estimated mixing matrix A = W^{−1}. The number of independent components extracted
by the ICA algorithm corresponds with the number of input images. Two hundred independent components
were extracted for the upper and 155 for the lower face image sets. Since there were more than 200 upper
face images, ICA was performed on 200 linear mixtures of the faces without affecting the image synthesis
model. The first 200 PCA eigenvectors were chosen for these linear mixtures since they give the combination
of images that accounts for the maximum variability among the pixels. The eigenvectors were normalized
to unit length. Details are available in [4, 2].
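A simplified sketch of the infomax learning loop is shown below; a real implementation anneals the learning rate and runs many passes over the data, details we omit here.

```python
import numpy as np

def infomax_ica(X, lr=0.001, n_iter=1000, batch=50, seed=0):
    """Natural-gradient infomax with a logistic nonlinearity:
    dW = (I + (1 - 2y) u^T) W, run on whitened, zero-mean data.
    X: mixtures in rows (e.g., the 200 PCA-mixed delta-images)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening ("sphering") step; assumes the mixtures are full rank.
    cov = np.cov(X)
    d, E = np.linalg.eigh(cov)
    Wz = E @ np.diag(d ** -0.5) @ E.T
    Xw = Wz @ X
    m = X.shape[0]
    W = np.eye(m)
    for _ in range(n_iter):
        cols = rng.choice(Xw.shape[1], size=batch, replace=False)
        u = W @ Xw[:, cols]
        y = 1.0 / (1.0 + np.exp(-u))          # logistic transfer function
        # Batch form of the natural-gradient update (14).
        W += lr * (batch * np.eye(m) + (1.0 - 2.0 * y) @ u.T) @ W
    return W @ Wz                             # full transform W_I W_z
```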
Unlike PCA, there is no inherent ordering to the independent components of the
dataset. We therefore selected as an ordering parameter the class discriminability of each component. Let
ā_k be the overall mean of coefficient a_k, and ā_jk be its mean for action j. The ratio of
between-class to within-class variability, r, for each coefficient is defined as

    r = σ_between / σ_within,

where σ_between = Σ_j (ā_jk − ā_k)² is the variance of the j class means, and σ_within = Σ_j Σ_i (a_ijk − ā_jk)²
is the sum of the variances within each class. The first p components selected by class discriminability
comprised the independent component representation.
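Computing the discriminability ratio per component is straightforward:

```python
import numpy as np

def class_discriminability(a, y):
    """r_k = sigma_between / sigma_within for each coefficient a[:, k].
    a: N x p matrix of ICA coefficients, y: action label per image."""
    overall = a.mean(axis=0)
    between = np.zeros(a.shape[1]); within = np.zeros(a.shape[1])
    for c in np.unique(y):
        ac = a[y == c]
        between += (ac.mean(axis=0) - overall) ** 2  # variance of class means
        within += ac.var(axis=0)                     # within-class variances
    return between / within

# Order components by decreasing discriminability, keep the first p:
# order = np.argsort(class_discriminability(A, labels))[::-1][:p]
```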
Best performance of 95.5% was obtained with the first 75 components selected by class discriminability,
using the cosine similarity measure, and nearest neighbor classifier. Independent component analysis gave
the best performance among all of the holistic classifiers. Note, however, that the independent component
images in Figure 8 were local in nature. As in LFA, the ICA algorithm analyzed the images as a whole, but
the basis images that the algorithm learned were local. Two factors contributed to the local property of
the ICA basis images: Most of the statistical dependencies were in spatially proximal image locations, and
secondly, the ICA algorithm produces sparse outputs [10].
6 Local Representations
In the approaches described in Section 5, the kernels for the representation were learned from the statistics
of the entire image. There is evidence from a number of sources that local spatial filters may be superior to
global spatial filters for facial expression classification. Padgett & Cottrell [48] found that "eigenfeatures",
consisting of the principal components of image subregions containing the mouth and eyes, were more effective
than global PCA (full-face eigenfaces) for facial expression recognition. Furthermore, they found that a set
of shift-invariant local basis functions derived from the principal components of small image patches were
more effective than both eigenfeatures and global PCA. This finding is supported by Gray, Movellan &
Sejnowski, who found that a similar local PCA representation gave better performance than global PCA
for lipreading from video. Principal component analysis of image patches sampled from random locations,
such that the image statistics are stationary over the patch, describes the amplitude spectrum [27, 53].
An alternative to adaptive local filters such as local PCA are pre-defined wavelet decompositions such as
families of Gabor filters. Gabor filters are obtained by modulating a 2-D sine wave with a Gaussian envelope.
Such filters remove most of the variability in images due to variation in lighting and contrast, and closely
model the response properties of visual cortical cells [52, 36, 21, 20]. Representations based on the outputs of
families of Gabor filters at multiple spatial scales, orientations, and spatial locations, have proven successful
for recognizing facial identity in images [39, 50]. In a direct comparison of face recognition algorithms, Gabor
filter representations gave better identity recognition performance than representations based on principal
component analysis [65]. A Gabor representation was also more effective than a representation based on the
geometric locations of facial features for expression recognition [66].
Section 6 explores local representations based on filters that act on small spatial regions within the images.
We examined three variations on local filters that employ PCA, and compared them to the biologically
inspired Gabor wavelet decomposition.
A simple benchmark for the local filters consisted of a single Gaussian kernel. The δ-images were convolved
with a 15 × 15 Gaussian kernel and the output was downsampled by a factor of 4. The dimensionality of the
final representation was n/4. The output of this basic local filter was classified at 70.3% accuracy using the
Euclidean distance similarity measure and template matching classifier.
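As a sketch (the Gaussian σ implied by the 15 × 15 kernel support is our assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_benchmark(delta_image):
    """15x15 Gaussian smoothing followed by 2x-per-axis downsampling,
    giving the stated n/4 output dimensionality."""
    smoothed = gaussian_filter(delta_image.astype(float), sigma=2.5)
    return smoothed[::2, ::2].ravel()
```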
6.1 Local PCA
This approach is based on the local PCA representation that was found to outperform global PCA for
expression recognition [48]. The shift-invariant local basis functions employed in [48] were derived from the
Figure 9: a. Shift-invariant local PCA kernels: first 9 components, ordered left to right, top to bottom. b.
Shift-variant local PCA kernels: the first principal component is shown for each image location.
principal components of small image patches from randomly sampled locations in the face image. A set of
more than 7000 patches of size 15 × 15 was taken from random locations in the δ-images and decomposed
using PCA. The first p principal components were then used as convolution kernels to filter the full images.
The outputs were subsequently downsampled by a factor of 4, such that the final dimensionality of the
representation was isomorphic to R^{p×n/4}. The local PCA filters obtained from the set of lower-face δ-images
are shown in Figure 9.
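The following sketch assembles the shift-invariant local PCA pipeline: patch sampling, PCA, convolution, and grid-wise downsampling. Downsampling by a factor of 4 in pixel count (2 per axis) matches the stated n/4 output dimensionality.

```python
import numpy as np
from scipy.signal import fftconvolve

def local_pca_representation(images, p=30, patch=15, n_patches=7000, seed=0):
    """Learn shift-invariant kernels as principal components of random
    15x15 patches, filter each image with them, downsample by 4."""
    rng = np.random.default_rng(seed)
    h, w = images[0].shape
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        y0, x0 = rng.integers(h - patch), rng.integers(w - patch)
        patches.append(img[y0:y0+patch, x0:x0+patch].ravel())
    P = np.asarray(patches, dtype=float)
    P -= P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    kernels = Vt[:p].reshape(p, patch, patch)
    feats = []
    for img in images:
        maps = [fftconvolve(img.astype(float), k, mode='same')[::2, ::2]
                for k in kernels]
        feats.append(np.concatenate([m.ravel() for m in maps]))
    return np.asarray(feats)               # N x (p * n/4)
```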
Performance improved by excluding the first principal component. Best performance of 73.4% was obtained
with principal components 2-30, using Euclidean distance and template matching. Unlike the findings in
[48], shift invariant basis functions obtained through local PCA were no more effective than global PCA
for facial action coding. Performance of this local PCA technique was not significantly higher than that
obtained using a single 15 × 15 Gaussian kernel.
Because the local PCA implementation differed from global PCA in two properties, spatial locality and image
alignment, we repeated the local PCA analysis at fixed spatial locations. PCA of location-independent
images captures amplitude information without phase, whereas alignment of the images provides implicit
phase information [27, 10]. Local PCA at fixed image locations is related to the eigenfeatures representation
addressed in [48]. The eigenfeature representation in [48] differed from shift-invariant local PCA in image
patch size. Here, we compare shift-invariant and shift-variant versions of local PCA while controlling for
patch size.
The images were divided into m fixed 15 × 15 regions, and the principal components of each region were
calculated separately. Each image was thus represented by p × m coefficients. The final representation
consisted of the first p principal components of each of the m regions.
Classification performance was tested using up to the first 30 components of each patch. Best performance of
78.3% was obtained with the first 10 principal components of each image patch, using Euclidean distance and
the nearest neighbor classifier. There is a trend for phase alignment to improve classification performance
using local PCA, but the difference is not statistically significant. Contrary to the findings in [48] neither
local PCA representation outperformed the global PCA representation. It has been proposed that local
representations reduce sensitivity to identity-specific aspects of the face image [48, 30]. The success of
global PCA here could be attributable to the use of ffi images, which reduced variance related to identity
specific aspects of the face image. Another reason for the difference in findings could be the method of
downsampling. Padgett and Cottrell selected filter outputs from 7 image locations at the eyes and mouth,
whereas here downsampling was performed in a grid-wise fashion from 48 image locations.
6.2 Gabor wavelet representation
Here we examine pre-defined local filters based on the Gabor wavelet decomposition. This representation
was based on the methods described in [39]. Given an image I(x) (where x = (x, y)), the transform J_i is
defined as the convolution

    J_i(x) = ∫ I(x′) ψ_i(x − x′) d²x′

with a family of Gabor kernels

    ψ_i(x) = (‖k_i‖²/σ²) exp(−‖k_i‖²‖x‖²/(2σ²)) [exp(i k_i · x) − exp(−σ²/2)].

Each ψ_i is a plane wave characterized by the wave vector k_i, enveloped by a Gaussian function, where the
parameter σ determines the ratio of window width to wavelength. The first term in the square brackets
determines the oscillatory part of the kernel, and the second term compensates for the DC value of the kernel
[39]. The vector k_i is defined as

    k_i = (k_ν cos φ_μ, k_ν sin φ_μ), with k_ν = 2^{−(ν+2)/2} π and φ_μ = μπ/8.

The parameters ν and μ define the frequency and orientation of the kernels. We used 5 frequencies
(ν = 0, …, 4) and 8 orientations (μ = 0, …, 7) in the final representation, following the methods in [39]. Example filters
are shown in Figure 10. The Gabor filters were applied to the δ-images. The outputs {J_i} of the 40 Gabor
filters were downsampled by a factor q to reduce the dimensionality to 40 × n/q, and normalized to unit
length, which performed a divisive contrast normalization. We tested the performance of the system at
several values of q and found that downsampling improved the generalization rate. Best performance was obtained with
the cosine similarity measure and nearest neighbor classifier.
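A sketch of the kernel family as reconstructed above (the 31 × 31 support and σ = 2π are our assumptions):

```python
import numpy as np

def gabor_kernel(nu, mu, sigma=2 * np.pi, size=31):
    """Gabor kernel psi_i: a plane wave with wave vector k_i, enveloped by
    a Gaussian, with the DC value subtracted."""
    k = (2.0 ** (-(nu + 2) / 2.0)) * np.pi          # k_nu
    phi = mu * np.pi / 8.0                          # phi_mu
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    half = size // 2
    y, x = np.mgrid[-half:half+1, -half:half+1].astype(float)
    r2 = x ** 2 + y ** 2
    envelope = (k ** 2 / sigma ** 2) * np.exp(-k ** 2 * r2 / (2 * sigma ** 2))
    wave = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * wave

# The family: 5 frequencies x 8 orientations = 40 complex kernels.
bank = [gabor_kernel(nu, mu) for nu in range(5) for mu in range(8)]
```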
Classification performance with the Gabor filter representation was 95.5%. This performance was significantly
higher than all other approaches in the comparison except independent component analysis, with
which it tied. This finding is supported by Zhang, Yan, & Lades [65] who found that face recognition with
the Gabor filter representation was superior to that with a holistic principal component based representation.
To determine which frequency ranges contained more information for action classification, we repeated the
tests using subsets of the Gabor filters containing only the higher or only the lower frequencies. Performance with
Figure 10: a. Original δ-image. b. Gabor kernels (low and high frequency) with the magnitude of the filtered
image to the right. c. Local PCA kernels (large and small scale) with the corresponding filtered image.
the high frequency subset was 92.8%, almost the same as with the full set of frequencies, whereas performance
with the low frequency subset was 83.8%. The finding that the higher spatial frequency bands of the Gabor filter
representation contain more information than the lower frequency bands is consistent with our analysis of
optic flow, above, in which reduction of the spatial resolution of the optic flow through smoothing had a
detrimental effect on classification performance. It appears that high spatial frequencies are important for
this task.
6.3 PCA jets
We next investigated whether the multiscale property of the Gabor wavelet representation accounts for the
difference in performance obtained using the Gabor representation and the local PCA representation. To
test this hypothesis, we developed a multiscale version of the local PCA representation, PCA jets. The
principal components of random subimage patches provide the amplitude spectrum of local image regions.
A multiscale local PCA representation was obtained by performing PCA on random image patches at five
different scales chosen to match the sizes of the Gaussian envelopes (see Figure 10). Patch sizes were chosen
as ±3σ, yielding the following set: 9 × 9, 15 × 15, 23 × 23, 35 × 35, and 49 × 49. The number of filters was
matched to the Gabor representation by retaining 16 principal components at each scale, for a total of 80
filters. The downsampling factor was also chosen to match the Gabor representation.
As for the Gabor representation, performance was tested using the cosine similarity measure and nearest
neighbor classifier. Best results were obtained using eigenvectors 2 to 17 for each patch size. Performance
was 64.9% for all five scales, 72.1% for the three smaller scales, and 62.2% for the three larger scales.
The multiscale principal component analysis (PCA jets) did not improve performance over the single scale
local PCA. It appears that the multiscale property of the Gabor representation does not account for the
improvement in performance obtained with this representation over local representations based on principal
component analysis.
7 Human Subjects
The performance of human subjects provided benchmarks for the performances of the automated systems.
Most other computer vision systems test performance on prototypical expressions of emotion, which naive
human subjects can classify with over 90% agreement (e.g. [45]). Facial action coding is a more detailed
analysis of facial behavior than discriminating prototypical expressions. The ability of naive human subjects
to classify the facial action images in this set gives a simple indication of the difficulty of the visual
classification task, and provides a basis for comparing the results presented here with other systems in the
literature. Since the long-term goal of this project is to replace human expert coders with an automated
system, a second benchmark was provided by the agreement rates of expert human coders on these images.
This benchmark indicated the extent to which the automated systems attained the goal of reaching the
consistency levels of the expert coders.
Naive subjects. Naive subjects were ten adult volunteers with no prior knowledge of facial expression
measurement. The upper and lower face actions were tested separately. Subjects were provided with a guide
sheet which contained an example image of each of the six upper or lower face actions along with a written
description of each action and a list of image cues for detecting and discriminating the actions from [23].
Each subject was given a training session in which the facial actions were described and demonstrated, and
the image cues listed on the guide sheet were reviewed and indicated on the example images. The subjects
kept the guide sheet as a reference during the task.
Face images were preprocessed exactly as they had been for the automated systems, as described
in Section 3, and printed using a high resolution HP Laserjet 4si printer at 600 dpi. Face images were
presented in pairs, with a neutral expression image and the test image presented side by side. Subjects were
instructed to compare the test image with the neutral image and decide which of the actions the subject
had performed in the test image. Ninety-three image pairs were presented in both the upper and lower face
tasks. Subjects were instructed to take as much time as they needed to perform the task; completion times
ranged up to one hour. Naive subjects classified these images at 77.9% correct. Presenting uncropped face
images did not improve performance.
Expert coders. Expert subjects were four certified FACS coders. The task was identical to the naive
subject task with the following exceptions: Expert subjects were not given a guide sheet or additional
training, and the complete face was visible, as it would normally be during FACS scoring. Although the
complete action was visible in the cropped images, the experts were experienced with full face images, and
the cropping may have biased their performance by removing contextual information. One hundred and fourteen
upper-face image pairs and ninety-three lower-face image pairs were presented. Time to complete the task
ranged from 20 minutes to 1 hour and 15 minutes. The rate of agreement of the expert coders with the
assigned labels was 94.1%.
Optic Flow            Correlation        85.6% ± 3.3
                      Smoothed           53.1% ± 4.7
Holistic              PCA                79.3% ± 3.9
Spatial Analysis      LFA                81.1% ± 3.7
                      FLD                75.7% ± 4.1
                      ICA                95.5% ± 2.0
Local                 Gaussian Kernel    70.3% ± 4.
Spatial Analysis      PCA Shift-inv      73.4%
                      PCA Shift-var      78.3% ± 3.9
                      PCA Jets           72.1% ± 4.2
                      Gabor Jets         95.5% ± 2.0
Human Subjects        Naive              77.9% ± 2.5
                      Expert             94.1% ± 2.1

Table 1: Best performance for each classifier. PCA: Principal component analysis. LFA: Local feature
analysis. FLD: Fisher's linear discriminant. ICA: Independent component analysis. Shift-inv: Shift-invariant.
Shift-var: Shift-variant.
8 Discussion
We have compared a number of different image analysis methods on a difficult classification problem, the
classification of facial actions. Several approaches to facial expression analysis have been presented in the
literature, but until now, there has been little direct comparison of these methods on a single dataset. These
approaches include analysis of facial motion [44, 64, 54, 26], holistic spatial pattern analysis using techniques
based on principal component analysis [17, 48, 40], and measurements of the shapes of facial features and
their spatial arrangements [40, 66]. This investigation compared facial action classification using optic flow,
holistic spatial analysis, and local spatial representations. We also included in our comparison a number of
representations that had been developed for facial identity recognition, and applied them for the first time
to facial expression analysis. These representations included Gabor filters [39], Linear Discriminant Analysis
[8], Local Feature Analysis [49], and Independent Component Analysis [4].
Best performances were obtained with the local Gabor filter representation and the Independent Component
representation, which both achieved 95.5% correct classification. The performance of these two methods
equaled the agreement level of expert human subjects on these images. Image representations derived from
the second-order statistics of the dataset (PCA and LFA) performed about as well as naive human subjects
on this image classification task, in the 80% accuracy range. Performances using LFA and FLD did not
significantly differ from PCA, nor did spatially local implementations of PCA. Correlation-based optic flow
performed at a level between naive and expert human subjects, at 86%. Classification accuracies obtained
here compared favorably with other systems developed for emotion classification, despite the additional
challenges of classifying facial actions over classifying prototypical expressions reviewed in [31].
We obtained converging evidence that local spatial filters are important for analysis of facial expressions.
The two representations that significantly outperformed the others, the Gabor representation [39] and the
Independent Component representation [4], were based on local filters. ICA was classified as a holistic
algorithm, since the analysis was performed over the images as a whole. The basis images that the algorithm
produced, however, were local. Our results also demonstrated that spatial locality of the image filters alone
is insufficient for good classification. Local principal component representations such as LFA and local PCA
performed no better than the global PCA representation (Eigenfaces).
We also obtained multiple sources of evidence that high spatial frequencies are important for classifying facial
actions. Spatial smoothing of optic flow degraded performance by more than 30%. Secondly, classification
with only the high frequencies of the Gabor representation was superior to classification using only the low
spatial frequencies. A similar result was obtained with the PCA jets. These findings are in contrast to a recent
report that the information for recognizing prototypical facial expressions was carried predominantly by the
low spatial frequencies [66]. This difference in findings highlights the difference in the task requirements of
classifying facial actions versus classifying prototypical expressions of emotion. Classifying facial actions is a
more detailed level of analysis. Our findings predict, for example, that high spatial frequencies would carry
important information for discriminating genuine expressions of happiness from posed ones, which differ in
the presence of AU 6 (the cheek raiser) [24].
The relevance of high spatial frequencies has implications for motion-based facial expression analysis. Since
optic flow is a noisy measure, many flow-based expression analysis systems employ regularization procedures
such as smoothing and quantizing to estimate a principal direction of motion within an image region. The
analysis presented here suggests that high spatial resolution optic flow is important for analysis of facial
behavior at the level of facial action coding.
In addition to spatial locality, the ICA representation and the Gabor filter representation share the property
of redundancy reduction, and have relationships to representations in the visual cortex. The response
properties of primary visual cortical cells are closely modeled by a bank of Gabor filters [52, 36, 21, 20].
Relationships have been demonstrated between Gabor filters and independent component analysis. Bell
and Sejnowski found, using ICA, that the filters that produced independent outputs from natural scenes
were spatially local, oriented edge filters, similar to a bank of Gabor filters. It has also been shown that
Gabor filter outputs of natural images are at least pairwise independent [57]. This holds when the responses
undergo divisive normalization, which neurophysiologists have proposed takes place in the visual cortex [33].
The length normalization in our Gabor representation is a form of divisive normalization.
The Gabor wavelets, PCA, and ICA each provide a way to represent face images as a linear superposition
of basis functions. Gabor wavelets employ a set of pre-defined basis functions, whereas PCA and ICA learn
basis functions that are adapted to the data ensemble. PCA models the data as a multivariate Gaussian,
and the basis functions are restricted to be orthogonal [41]. ICA allows the learning of non-orthogonal bases
and allows the data to be modeled with non-Gaussian distributions [16]. As noted above, there are a number
of relationships between Gabor wavelets and the basis functions obtained with ICA. The Gabor wavelets are
not specialized to the particular data ensemble, but would be advantageous when the amount of data is too
small to estimate data-driven filters reliably.
The ICA representation performed as well as the Gabor representation, despite having two orders of magnitude
fewer basis functions. A large number of basis functions does not appear to confer an advantage for
classification. The PCA-jet representation, which was matched to the Gabor representation for number of
basis functions as well as scale, performed at only 72% correct.
Each of the local representations underwent downsampling. The effect of downsampling on generalization
rate was examined in the Gabor representation, and we found that downsampling improved generalization
performance. The downsampling was done in a grid-wise fashion, and there was no manual selection of facial
features. Comparison to representations based on individual facial features (or fiducial points) has been addressed
in recent work by Zhengyou Zhang [66] which showed that multiresolution Gabor wavelet coefficients
give better information than the geometric positions of fiducial points for facial expression recognition.
9 Conclusions
The results of this comparison provided converging evidence for the importance of using local filters, high
spatial frequencies, and statistical independence for classifying facial actions. Best performances were obtained
with Gabor wavelet decomposition and independent component analysis. These two representations
are related to each other. They employ graylevel texture filters that share the properties of spatial locality
and statistical independence, and are related to the response properties of visual cortical neurons.
The majority of the approaches to facial expression recognition by computer have focused exclusively on
analysis of facial motion. Motion is an important aspect of facial expressions, but not the only cue. Although
experiments with point-light displays have shown that human subjects can recognize facial expressions from
motion signals alone [7], recognition rates are just above chance, and substantially lower than those reported
for recognizing a similar set of expressions from static graylevel images (e.g. [45]). In this comparison, best
performances were obtained with representations based on surface graylevels. A future direction of this
work is to combine the motion information with spatial texture information. Perhaps combining motion and
graylevel information will ultimately provide the best facial expression recognition performance, as it does
for the human visual system [7, 63].
Acknowledgements
This research was supported by NSF Grant No. BS-9120868, Lawrence Livermore National Laboratories
Intra-University Agreement B291436, Howard Hughes Medical Institute, and NIH Grant
01. We are indebted to FACS experts Linda Camras, Wil Irwin, Irene McNee, Harriet Oster, and Erica
Rosenberg for their time and assistance with this project. We thank Beatrice Golomb, Wil Irwin, and Jan
Larsen for contributions to project initiation, Claudia Hilburn Methvin for image collection, and Laurenz
Wiskott and Gary Cottrell for valuable discussions on earlier drafts of this paper.
References
What does the retina know about natural scenes?
Face Image Analysis by Unsupervised Learning and Redundancy Reduction.
Measuring facial expressions by computer image analysis.
Independent component representations for face recog- nition
Viewpoint invariant face recognition using independent component analysis and attractor networks.
Classifying facial action.
Emotion recognition: The role of facial movement and the relative importance of upper and lower areas of the face.
Eigenfaces vs. fisherfaces: Recognition using class specific linear projection.
An information-maximization approach to blind separation and blind deconvolution
The independent components of natural scenes are edge filters.
Image representations for visual learning.
Example based image analysis and synthesis.
Face recognition: Features versus templates.
Discriminant analysis for face recognition.
Automated face coding: A computer-vision based method of facial expression analysis
Independent component analysis - a new concept? Signal Processing
Face recognition using unsupervised feature extraction.
Complete discrete 2d gabor transform by neural networks for image analysis and compression.
Spatial Vision.
Telling Lies: Clues to Deceit in the Marketplace
Facial Action Coding System: A Technique for the Measurement of Facial Movement.
Smiles when lying.
What the Face Reveals: Basic and Applied Studies of Spontaneous Expression using the Facial Action Coding System (FACS).
What is the goal of sensory coding?
The use of multiple measures in taxonomic problems.
neural network identifies sex from human faces.
A comparison of local versus global image decomposition for visual speechreading.
The essential behavioral science of the face and gesture that computer scientists need to know.
A Deformable Model for Face Recognition Under Arbitrary Lighting Conditions.
Nonlinear model of neural responses in cat visual cortex.
The faces of suicidal depression (translation).
An evaluation of the two dimensional gabor filter model of simple receptive fields in cat striate cortex.
Automated coding of facial behavior in human-computer interactions with facs
Serious business
Automatic interpretation and coding of face images using flexible models.
Inferring sparse
Recognition of facial expression from optical flow.
Emotional expression in upside-down faces: Evidence for configurational and componential processing
Visual speech recognition with stochastic networks.
Representing face images for emotion classification.
Local feature analysis: a general statistical theory for object representation.
The feret database and evaluation procedure for face-recognition algorithms
Phase relationship between adjacent simple cells in the visual cortex.
Digital Image Processing.
Human expression recognition from motion using a radial basis function network architecture.
CANDIDE: A parametrized face.
Geometry and Photometry in 3D Visual Recognition.
Statistical models for images: Compression
Optic Flow Computation.
Analysis and synthesis of facial image sequences using physical and anatomical models.
Eigenfaces for recognition.
Linear object classes and image synthesis from a single example image.
Separation of texture and shape in images of faces for image coding and synthesis.
Effects of distortion of spatial and temporal resolution of video stimuli on emotion attri- butions
Recognizing human facial expressions from long image sequences using optical flow.
Face recognition: Eigenface
--TR
--CTR
Lijun Yin , Johnny Loi , Wei Xiong, Facial expression representation and recognition based on texture augmentation and topographic masking, Proceedings of the 12th annual ACM international conference on Multimedia, October 10-16, 2004, New York, NY, USA
B. Braathen , M. S. Bartlett , G. Littlewort , J. R. Movellan, First steps towards automatic recognition of spontaneous facial action units, Proceedings of the 2001 workshop on Perceptive user interfaces, November 15-16, 2001, Orlando, Florida
Ce Zhan , Wanqing Li , Philip Ogunbona , Farzad Safaei, Facial expression recognition for multiplayer online games, Procedings of the 3rd Australasian conference on Interactive entertainment, p.52-58, December 04-06, 2006, Perth, Australia
Masakazu Matsugu , Katsuhiko Mori , Yusuke Mitari , Yuji Kaneda, Subject independent facial expression recognition with robust face detection using a convolutional neural network, Neural Networks, v.16 n.5-6, p.555-559, June
Chao-Fa Chuang , Frank Y. Shih, Rapid and Brief Communication: Recognizing facial action units using independent component analysis and support vector machine, Pattern Recognition, v.39 n.9, p.1795-1798, September, 2006
Tianming Hu , Liyanage C. De Silva , Kuntal Sengupta, A hybrid approach of NN and HMM for facial emotion classification, Pattern Recognition Letters, v.23 n.11, p.1303-1310, September 2002
Shyi-Chyi Cheng , Ming-Yao Chen , Hong-Yi Chang , Tzu-Chuan Chou, Semantic-based facial expression recognition using analytical hierarchy process, Expert Systems with Applications: An International Journal, v.33 n.1, p.86-95, July, 2007
319259 | Alignment Using Distributions of Local Geometric Properties. | Abstract: We describe a framework for aligning images without needing to establish explicit feature correspondences. We assume that the geometry between the two images can be adequately described by an affine transformation and develop a framework that uses the statistical distribution of geometric properties of image contours to estimate the relevant transformation parameters. The estimates obtained using the proposed method are robust to illumination conditions, sensor characteristics, etc., since image contours are relatively invariant to these changes. Moreover, the distributional nature of our method alleviates some of the common problems due to contour fragmentation, occlusion, clutter, etc. We provide empirical evidence of the accuracy and robustness of our algorithm. Finally, we demonstrate our method on both real and synthetic images, including multisensor image pairs. | Introduction
Image alignment (variously known as registration, positioning etc.) amounts to establishing a common
frame of reference for a set of images. Alignment is an essential step for many tasks such
as data fusion, change detection, pose recovery etc. and has been widely investigated in various
contexts. A good survey of the existing literature is [4]. Traditionally, the transformation required
to achieve image alignment is computed using either feature matching [13], or a search strategy
that optimises some meaningful image similarity measure (eg. mutual information [19], normalised
cross-correlation [10]). While feature matching methods can give very accurate solutions, obtaining
correct matches of features is a hard problem especially in the case of images acquired using different
sensors or widely different viewing positions. Therefore most methods that use feature matching
for such scenarios either use some specific domain knowledge or assume that the features are well
preserved under the two different views. On the other hand, while the methods using similarity
maximisation are relatively insensitive to variations in sensor characteristics, they can be computationally
expensive and typically need good initial guesses to ensure correct convergence. Therefore
these methods may converge to local minima in the case of images with large transformations between
them.
Recently "direct methods" [2, 11, 17] have had a lot of success. These methods assume that the
images that are to be aligned have a small transformation between them. A multi-scale approach
is used and a measure of image intensity similarity is minimised using optimisation routines. The
multi-scale approach ensures convergence of the optimisation method and the transformation model
is gradually made more complex (from simple translation, to Euclidean to affine) so as to best
describe the transformation between images. However, such methods have the limitation that they
cannot be guaranteed to converge in the cases where there is a large transformation between the
two images (eg. if the scale change is large). Also if we have images that have been taken from very
different view-points and with different radiometric properties (eg. a pair of visible and infra-red
images, or images taken under different lighting conditions), the direct method is not applicable
(In [10], the case of multisensor images is handled by a nonlinear filtering of the images which
emphasises the high frequency components. This enhances the discontinuities in images. However
such an alignment scheme can only be applied in a limited multisensor context). Other methods
that are correspondenceless are restrictive: they either solve for only translation [1], assume that
the image consists of a single, unfragmented curve [12, 15], assume that the images have textured
patterns [16, 3, 21] or cannot handle noise and/or occlusion in a robust fashion [6]. In this paper
we propose an alignment method that overcomes some of these limitations. Our interest is in
developing a method that is robust to the above mentioned variations and can handle large changes
in viewing positions as well as small ones.
Figure 1: (a) Contour C; (b) contour ~C. The contour ~C is a rotated version of contour C. The tangent lines at corresponding points on the two curves are indicated.
While it is difficult to directly relate the intensity information across images, the geometric properties
of image discontinuities remain relatively stable, ie. the boundaries between regions do not
change under illumination changes etc. The geometric information contained in these stable image
contours is often sufficient to determine the transformation between images. This utilisation of the
geometric properties of image contours to estimate the relevant transformation between images is
the key idea developed in this paper. We assume that the scene is approximately planar and that
the relative geometry between images can be adequately captured by an affine transformation. Furthermore, we assume that the images contain at least one contour that can be extracted by standard
techniques. For our treatment of the alignment problem for discrete features like points and lines,
see [18].
To develop an intuitive understanding of our method, we will use a simple illustrative example.
Consider the contours shown in Fig. 1. The contour in (b) is a rotated version of the one in (a). Let the rotation angle between the two contours be θ. Aligning the contours is equivalent to estimating the rotation angle.

Now consider a point p on contour C and its corresponding point ~p on the second contour (indicated by the circles). The slope angles of the tangents at points p and ~p are denoted by Ψ and ~Ψ respectively. We can observe that ~Ψ = Ψ + θ. Now this relationship holds for every corresponding point pair on the contours, and hence the rotation angle can be easily recovered from such measurements on given point pairs. However, we do not need to establish point correspondences to extract
Figure 2: (a) Slope distribution of C; (b) slope distribution of ~C; (c) rotation estimate. (a) and (b) show the distributions of slope angles for the two contours; (c) shows the cross-correlation between the distributions. The peak occurs at the correct rotation angle.
the rotation angle. Instead, we note that since the above relation holds for every point pair, it also holds for the distribution of the measures on each image. In other words, if we denote by P(·) the distribution of slope angles of a contour, we have the simple relationship

  $P(\tilde\Psi) = P(\Psi + \theta)$,

and hence, given the two distributions of the slope angles in the two images, we can recover the rotation angle θ in a simple fashion. (This can be done by minimising the norm $\|P(\tilde\Psi) - P(\Psi + \theta)\|$, which is equivalent to maximising the cross-correlation between the two distributions. See Fig. 2.) Note that the probability distributions can be computed independently on each image without any need to establish feature correspondences.
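A minimal sketch of this rotation estimate (in Python, which we use for all code examples below; the synthetic contour, the bin count and the FFT-based circular correlation are our own illustrative choices, not details taken from the text):

import numpy as np

def slope_angle_hist(contour, bins=360):
    # histogram of tangent slope angles along a contour given as an (N, 2) array
    d = np.gradient(contour, axis=0)
    angles = np.arctan2(d[:, 1], d[:, 0])          # slope angle in [-pi, pi)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / hist.sum()

def estimate_rotation(c1, c2, bins=360):
    # circular shift maximising the cross-correlation of the two distributions
    p1, p2 = slope_angle_hist(c1, bins), slope_angle_hist(c2, bins)
    corr = np.fft.ifft(np.fft.fft(p2) * np.conj(np.fft.fft(p1))).real
    return np.argmax(corr) * 2 * np.pi / bins

# toy check: an asymmetric contour and a copy rotated by 0.7 radians
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
r = 1.0 + 0.4 * np.cos(t) + 0.2 * np.sin(3 * t)
c1 = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
th = 0.7
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
print(estimate_rotation(c1, c1 @ R.T))             # close to 0.7

Note that the two histograms are computed entirely independently; only the final one-dimensional correlation couples the two images.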
The above idea can be extended to other transformation parameters. Any transformation model T that consists of n parameters can be reparametrised as $T = T(g_1, g_2, \ldots, g_n)$, where the $g_i$ are independent parameters. This parametrisation is chosen in such a way that each parameter $g_i$ is computable from the observed images using the method we develop in this section. For example, under a Euclidean transformation, the parameters used would be the rotation angle θ and the translations in the x and y directions. Each of these new parameters can be estimated in a manner similar to that described above for the case of the rotation angle. These notions are formally developed below
and the different transformation models are described in Section 3. In the discussions that follow, we consider a point p on curve C in image I₁ and its corresponding point ~p on curve ~C in image I₂. The local geometric properties measured at points p and ~p are called geometric descriptors and are denoted by D and ~D respectively. D and ~D are chosen in a manner such that, for a pair of corresponding points (p, ~p),

  $\tilde D(\tilde p) = f(D(p); g)$,

where g is a parameter of the transformation T. In other words, the pair of operators (D, ~D) are chosen in such a manner that, for a given corresponding pair of points, the value of ~D can be related to that of D given the value of the parameter g.
Intuitively, the above notation implies that for the correct value of the parameter g, the functions D(·) and ~D(·) have identical values when computed on corresponding points p and ~p. This notion is key to the estimation process since it enables us to choose the parameter that satisfies this relationship. For an n parameter transformation T, we would need to establish n such relationships using the different transformation parameters $g_i$ to estimate the transformation T. We denote a parameter $g_i$ to be "observable" if its value can be recovered from the descriptor pair. In the case of rotation, we can choose $D(p) = \Psi$ and $\tilde D(\tilde p) = \tilde\Psi$. Hence $\tilde D = D + \theta$, which implies that $\theta = \tilde D - D$ is observable, since its value can be obtained from those of D and ~D. Thus for an n parameter transformation model, we can recover the transformation given n independent descriptors that make the parameters observable.
However, since we do not have explicit correspondences available, we can convert observability of parameters through descriptor pairs to observability through distributions of these descriptors. Given a descriptor D, we can easily determine its probability density function P(D) in an image by computing its value at points along the contours and computing the histograms of the computed values. We denote the probability functions thus obtained from images I and ~I as P(D) and P(~D) respectively. Now for the correct value of $g_i$, we have the relationship $P(D) = P(\tilde D \mid g_i)$. Hence, for an observation of the distributions under noise, we can estimate $g_i$ by maximising the similarity between P(D) and $P(\tilde D \mid g_i)$. In practice, the observed distributions would not be identical to P(~D) due to errors introduced by image noise, discretisation of the contours etc. Since the true distributions of these errors are not analytically tractable, we will assume that the errors are well described by a monotonically decreasing noise process with mode at zero, like Gaussian, Laplacian etc.

We can now describe the estimator for $g_i$ given the pdf's (ie. P(D) and P(~D)) as follows:
Theorem 1. Given an image pair I and ~I, and descriptors D(p) and ~D(~p; g), the Maximum Likelihood Estimator (MLE) for g is

  $\hat g = \arg\min_g \|P(D) - \bar P(\tilde D \mid g)\|_n$   (1)

where P and \bar P are the observed probability measures of D and ~D respectively, and $\|\cdot\|_n$ is the $L_n$ norm for n > 0. The probability distribution function (pdf) of the observation noise is assumed to be monotonically decreasing with a mode at zero.
Proof: For convenience, we shall henceforth denote g(T) by g. We denote the true probability distributions of D and ~D as P and ~P respectively.¹ The observed distribution of D is P, and that of ~D is modelled as:

  $\bar P(\tilde D) = \tilde P(\tilde D) + \eta$,

where η denotes the observation noise. Now we know that for every possible distribution pair {P, ~P} the relationship

  $P(D) = \tilde P(\tilde D \mid g)$   (2)

holds. For notational convenience, we denote the conditional probability measures $\tilde P(\tilde D \mid g)$ and $\bar P(\tilde D \mid g)$ as $\tilde P_g$ and $\bar P_g$ respectively. Therefore, using the observation model for estimating g, the MLE is defined as

  $\hat g = \arg\max_g \,\mathrm{IP}(P, \bar P_g)$,   (3)

where IP denotes the probability of the observations P and \bar P.² In other words, we maximise the probability of jointly observing the two probability distributions P and \bar P on the images I and ~I respectively. For the given observation model, we get:

  $\mathrm{IP}(P, \bar P_g) = \mathrm{IP}(\eta = \bar P_g - \tilde P_g).$

Due to the monotonicity of the noise model we observe that:

  $\arg\max_g \,\mathrm{IP}(P, \bar P_g) = \arg\min_g \,\|\bar P_g - \tilde P_g\|.$

By the triangle inequality and Eqn. (2), we know that:

  $\|P - \bar P_g\| \le \|P - \tilde P_g\| + \|\tilde P_g - \bar P_g\|.$   (4)

Since the right hand side of equation (4) is equal to $\|\tilde P_g - \bar P_g\|$ by the assumption of independence of noise, we have

  $\hat g = \arg\min_g \,\|P - \bar P_g\|.$

Hence the Maximum Likelihood Estimator for the parameter g is given by Eqn. (1).

¹ Note that here the true distributions P and ~P are not random variables since, for a given image pair, the distribution of the geometric properties is deterministic; hence the distributions should be treated as general functions.
² P and \bar P are the observations we extract from the images.
In effect, the above theorem states that we can determine the optimal estimate of a transformation
parameter by maximising the similarity of the two distributions of the descriptor sets. So in the
case of a Gaussian noise model, our estimation process amounts to picking that parameter which
minimises the least squares error between the two distributions on the two images.
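The theorem suggests a direct implementation: compute the two descriptor histograms and grid-search the parameter that minimises their distance. A hedged sketch, in which the descriptor samples, the action of the parameter on descriptor values, and the Gaussian toy data are placeholders rather than details from the text:

import numpy as np

def descriptor_pdf(values, bins, value_range):
    hist, _ = np.histogram(values, bins=bins, range=value_range)
    return hist / max(hist.sum(), 1)

def mle_parameter(d1, d2, apply_g, g_grid, bins=256, value_range=(-4.0, 4.0)):
    # d1, d2: descriptor samples from the two images; apply_g(values, g)
    # maps the image-2 descriptors so that, at the true g, they match image 1
    p1 = descriptor_pdf(d1, bins, value_range)
    errs = [np.linalg.norm(p1 - descriptor_pdf(apply_g(d2, g), bins, value_range))
            for g in g_grid]                        # L2 norm, as in Eqn. (1)
    return g_grid[int(np.argmin(errs))]

# toy check: recover an additive offset between two descriptor sets
rng = np.random.default_rng(0)
d1 = rng.normal(0.0, 1.0, 5000)
print(mle_parameter(d1, d1 + 1.3, lambda v, g: v - g,
                    np.linspace(-2.0, 2.0, 401)))  # close to 1.3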
3 Transformation models

In the following subsections, we examine specific transformation models and show how we can estimate the relevant parameters. We shall denote the points in the first image by $p = (x, y)$ and those in the second image by $\tilde p = (\tilde x, \tilde y)$. The transformation model adopted is

  $\tilde p = T p + t,$

where t denotes the 2-D translation vector and the matrix T denotes a 2×2 invertible matrix. Henceforth we shall refer to matrix T as the transformation matrix.
3.1 Euclidean

The Euclidean transformation is parametrised by three parameters, the rotation angle θ and the two translation parameters $t_x$ and $t_y$. Here we have

  $T = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.$

To compute rotation we choose the descriptors D(p) and ~D(~p) to be the slope angles of points on the curves in the two images. From the example shown in Fig. 1, the rotation value can be computed using the MLE defined above. Having compensated for the rotation between the images, we can compute the x-direction translation $t_x$ between the two images using $D(p) = x(p)$ and $\tilde D(\tilde p) = x(\tilde p) - t_x$, where x(p) is the x-coordinate of point p. The y-component of translation can also be computed in a similar fashion. Note that this estimation process is different from the traditional method of taking the difference of the centers of mass of the two sets of contours. This is so because our method of comparing the distributions is a robust estimator, unlike the mean (see Section 4 for an experimental comparison).
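A sketch of this translation estimate, assuming the contour x-coordinates of both images are available as arrays; the bin width (which fixes the resolution of the estimate) is an illustrative choice:

import numpy as np

def estimate_shift_1d(x1, x2, bin_width=1.0):
    # match the two coordinate distributions by the best 1-D shift
    lo = min(x1.min(), x2.min()) - 1.0
    hi = max(x1.max(), x2.max()) + 1.0
    edges = np.arange(lo, hi + bin_width, bin_width)
    h1, _ = np.histogram(x1, bins=edges)
    h2, _ = np.histogram(x2, bins=edges)
    lag = np.argmax(np.correlate(h2, h1, mode="full")) - (len(h1) - 1)
    return lag * bin_width

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 100.0, 4000)                   # x-coordinates of contour points
print(estimate_shift_1d(x, x + 10.0))               # close to 10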
3.2 Similarity

To compute the similarity transformation, we need to compute an additional scale parameter s. Since the radius of curvature (R) of a point is directly proportional to the scaling parameter, we have $R(\tilde p) = s\,R(p)$. From this we can deduce that $\log R(\tilde p) = \log R(p) + \log s$. Hence we have the simple additive relationship between the descriptors $D(p) = \log R(p)$ and $\tilde D(\tilde p) = \log R(\tilde p) - \log s$. Also, since the radius of curvature is independent of the slope angle Ψ, we can compute the scaling and rotation parameters independently.
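A sketch of this scale estimate, using simple finite-difference derivatives (Section 4 discusses more robust derivative filters) and a matched parametrisation; the test contour and bin count are illustrative:

import numpy as np

def log_radius_of_curvature(contour):
    # log R from the parametrisation-invariant curvature formula
    x, y = contour[:, 0], contour[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return -np.log(np.clip(kappa, 1e-9, None))      # log R = -log kappa

def estimate_scale(c1, c2, bins=512):
    # the shift between the two log R distributions equals log s
    r1, r2 = log_radius_of_curvature(c1), log_radius_of_curvature(c2)
    edges = np.linspace(min(r1.min(), r2.min()), max(r1.max(), r2.max()), bins + 1)
    h1, _ = np.histogram(r1, bins=edges)
    h2, _ = np.histogram(r2, bins=edges)
    lag = np.argmax(np.correlate(h2, h1, mode="full")) - (bins - 1)
    return np.exp(lag * (edges[1] - edges[0]))

t = np.linspace(0, 2 * np.pi, 3000, endpoint=False)
r = 1.0 + 0.4 * np.cos(t) + 0.2 * np.sin(3 * t)
c1 = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
print(estimate_scale(c1, 2.5 * c1))                 # close to 2.5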
3.3 Quasi-affine

We now consider the case of a "quasi-affine" transformation, which is defined to be of the form

  $\tilde p = \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix}\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} p + t.$   (8)

In this case, curvature and slope angles are no longer independent. However, we can reparametrise Eqn. (8) as

  $\tilde p = \rho \begin{pmatrix} 1 & 0 \\ 0 & \rho_2 \end{pmatrix}\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} p + t,$   (9)

where ρ = s_x is an overall scale and ρ₂ = s_y/s_x is the ratio of the scales.

To compute the determinant of the transformation (|T| = ρ²ρ₂), we consider the following. The curves C and ~C are parametrised by the indices s and ~s respectively. The 2×2 matrices P and ~P are defined as $P = [\dot p(s), \ddot p(s)]$ and $\tilde P = [\dot{\tilde p}(\tilde s), \ddot{\tilde p}(\tilde s)]$, where the dots denote the derivatives with respect to the appropriate parametrisation s or ~s of the curves. We have

  $\dot{\tilde p} = T \dot p \,\frac{ds}{d\tilde s}, \qquad \ddot{\tilde p} = T \ddot p \left(\frac{ds}{d\tilde s}\right)^2 + T \dot p \,\frac{d^2 s}{d\tilde s^2}.$

From the above we can note that

  $|\tilde P| = |T|\,|P| \left(\frac{ds}{d\tilde s}\right)^3$   (10)

(see [5] for details). If the parametrisation is chosen such that ds/d~s = 1, we have the simple relationship

  $\log |\tilde P| = \log |T| + \log |P|.$

Therefore we use $D(p) = \log |[\dot p, \ddot p]|$ and $\tilde D(\tilde p) = \log |[\dot{\tilde p}, \ddot{\tilde p}]|$ to compute the determinant of the transformation T.
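A sketch of this determinant estimate under the assumption ds/d~s = 1; the test transformation below is an illustrative choice:

import numpy as np

def log_abs_det(contour):
    # log |[p', p'']| computed with finite differences along the curve
    d1 = np.gradient(contour, axis=0)
    d2 = np.gradient(d1, axis=0)
    det = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
    return np.log(np.clip(np.abs(det), 1e-12, None))

def estimate_det(c1, c2, bins=512):
    # the shift between the two log |[p', p'']| distributions equals log |T|
    v1, v2 = log_abs_det(c1), log_abs_det(c2)
    edges = np.linspace(min(v1.min(), v2.min()), max(v1.max(), v2.max()), bins + 1)
    h1, _ = np.histogram(v1, bins=edges)
    h2, _ = np.histogram(v2, bins=edges)
    lag = np.argmax(np.correlate(h2, h1, mode="full")) - (bins - 1)
    return np.exp(lag * (edges[1] - edges[0]))

t = np.linspace(0, 2 * np.pi, 3000, endpoint=False)
r = 1.0 + 0.4 * np.cos(t) + 0.2 * np.sin(3 * t)
c1 = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
T = np.array([[1.5, 0.3], [0.2, 0.8]])
print(estimate_det(c1, c1 @ T.T))                   # close to |det T| = 1.14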
By scaling the curves we get the new relationship

  $\tilde p = \rho \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \bar p,$

where $\bar p$ denotes the rescaled curve. The rotation angle θ can be computed in a manner invariant to the ratio of the scales ρ₂. To do this we use the relationship

  $\tan\tilde\Psi = \rho_2 \tan(\Psi - \theta).$   (11)

The two sides of Eqn. (11) can be equated to D(p) and ~D(~p), and θ can be solved for. The parameter ρ₂ now satisfies the relationship

  $\log|\tan\tilde\Psi| = \log\rho_2 + \log|\tan(\Psi - \theta)|,$   (12)

which implies that ρ₂ is observable once θ is known. Thus we can also solve for the parameter ρ (using |T| = ρ²ρ₂), and hence we can recover the transformation matrix T.³ Translation can be recovered as before.

³ Note that in Eqn. (12), θ is known since it is estimated using Eqn. (11).
3.4 Affine

Here we consider the more general case of an affine transformation between two images, described by the equation

  $\tilde p = T p + t,$   (14)

where

  $T = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$   (15)

is a non-singular matrix. In this case, we need to compute four independent parameters of the transformation matrix T, which can be accomplished in the following manner.

The determinant |T| can be computed in the manner described in the previous subsection. Now we use the simple relationship

  $\frac{d\tilde x}{d\tilde s} = a\,\frac{dx}{ds} + b\,\frac{dy}{ds}.$   (16)

By a simple reparametrisation of the parameters $(a, b) = r(\sin\phi, \cos\phi)$, with $\phi = \tan^{-1}(a/b)$, in the above equation, we get

  $\frac{d\tilde x}{d\tilde s} = r\,|\dot p| \sin(\Psi + \phi),$

from which we can solve for the ratio (a/b), using the left-hand side of Eqn. (16) for D(p) and the right-hand side for ~D(~p).

By similar analysis, we can recover the ratio (c/d) using the corresponding relationship for $d\tilde y / d\tilde s = c\,(dx/ds) + d\,(dy/ds)$. Finally, since we now know the ratios (a/b) and (c/d), we can observe that

  $\frac{d\tilde x}{d\tilde y} = \frac{a\,\dot x + b\,\dot y}{c\,\dot x + d\,\dot y} = \frac{b}{d}\cdot\frac{(a/b)\,\dot x + \dot y}{(c/d)\,\dot x + \dot y},$

from which the ratio (b/d) can be computed by taking the logarithm on both sides of the above equation.

Now we have 4 unknowns and 4 different relationships between these parameters. We denote a/b = k₁, c/d = k₂ and b/d = k₃, which, when substituted into the relationship |T| = ad − bc, give

  $a^2 = \frac{|T|\,k_1^2\,k_3}{k_1 - k_2}.$

Using this relationship we can solve for the value of a and then, by using the other relationships, we can derive the value of the matrix T. The sign ambiguity can be easily resolved by considering the correctness of transformations applied to the image contours.
correctness of transformations applied to the image contours.
It may be noted that the reparametrisation of (a; b) and (c; d) results in a finite range of possible
estimate values of the new parameters (ie, range of OE is [\Gamma-]). It may also be noted that using
the above equations we could compute either of the ratios ie. a
or b
a
, or c
d
or d
c
(See Section 4 for
details on how this is used for robust estimation). The translational component can be recovered
in a manner identical to those in the previous cases. It is to be emphasised that in all of the above
cases we have chosen D and ~
D such that the computations on the two sides of the equality of a
relationship can be carried out independently on two different images. This choice enables us to
eliminate the need for correspondences.
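As a concrete illustration of estimating one of the affine ratios, the sketch below grid-searches α = a/b by exploiting that, from ~x' = b(αx' + y'), the distributions of log|αx' + y'| (image 1) and log|~x'| (image 2) differ only by the shift log|b| at the correct α. This is one way to realise the estimator, and not necessarily the authors' exact construction:

import numpy as np

def best_shift_score(v1, v2, bins=256):
    # peak cross-correlation of two normalised histograms over a shared range
    edges = np.linspace(min(v1.min(), v2.min()), max(v1.max(), v2.max()), bins + 1)
    h1, _ = np.histogram(v1, bins=edges)
    h2, _ = np.histogram(v2, bins=edges)
    h1 = h1 / max(h1.sum(), 1)
    h2 = h2 / max(h2.sum(), 1)
    return np.correlate(h2, h1, mode="full").max()

def estimate_ratio_ab(c1, c2, alphas):
    d1 = np.gradient(c1, axis=0)                    # (x', y') on image 1
    d2 = np.gradient(c2, axis=0)                    # (x~', y~') on image 2
    v2 = np.log(np.clip(np.abs(d2[:, 0]), 1e-6, None))
    scores = [best_shift_score(
        np.log(np.clip(np.abs(a * d1[:, 0] + d1[:, 1]), 1e-6, None)), v2)
        for a in alphas]
    return alphas[int(np.argmax(scores))]

t = np.linspace(0, 2 * np.pi, 3000, endpoint=False)
r = 1.0 + 0.4 * np.cos(t) + 0.2 * np.sin(3 * t)
c1 = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
T = np.array([[1.2, 0.6], [0.3, 0.9]])              # a/b = 2.0
print(estimate_ratio_ab(c1, c1 @ T.T, np.linspace(-4.0, 4.0, 161)))  # close to 2.0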
4 Implementation and Evaluation
In this section we describe how our method is implemented in practice, following which we consider
various issues that arise out of the implementation and practice of our method. We also present an
empirical evaluation of its performance.
4.1 Implementation
In this subsection we detail the practical issues in the implementations of our method. All the results
detailed in Section 5 are obtained by applying the same fully automatic techniques. A Canny-like
edge detector is used to extract curves. Segments of curves that are smaller than 20 pixels are
discarded. Subsequently, the relevant differential properties are estimated along the curves in both
the images and the distributions computed to estimate the transformation parameters.
• Derivative Computation: An important issue in any such implementation is the accurate
computation of the differential properties which are in general sensitive to noise. As shown in [20],
using a Gaussian smoothing kernel to fit curves for computing the derivative of a discrete contour
is incorrect since such an estimator is very biased. To overcome this problem, we use a method
for robust computation of derivatives detailed in [14]. This method involves least squares fitting
of a continuous function (using polynomial orthogonal bases) to a neighbourhood centered on the
discrete point of interest. For such computations, closed form solutions exist in the form of convolution
kernels and the derivative can be easily computed by convolving the curve with these kernels.
We have found the estimates of derivatives using this method to be stable even for the cases where
noise was added to the curves.
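The sketch below uses SciPy's Savitzky-Golay filter, which implements exactly this kind of closed-form least-squares polynomial convolution; we do not claim it is identical to the filters of [14], and the window length and polynomial order are illustrative choices:

import numpy as np
from scipy.signal import savgol_filter

def contour_derivatives(contour, window=15, order=3):
    # first and second derivatives of a closed contour from local
    # least-squares polynomial fits (closed-form convolution kernels)
    kw = dict(window_length=window, polyorder=order, mode="wrap")
    d1 = np.stack([savgol_filter(contour[:, i], deriv=1, **kw) for i in range(2)], axis=1)
    d2 = np.stack([savgol_filter(contour[:, i], deriv=2, **kw) for i in range(2)], axis=1)
    return d1, d2

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
c = np.stack([np.cos(t), np.sin(t)], axis=1)
c += np.random.default_rng(0).normal(0.0, 0.005, c.shape)   # noisy circle
d1, _ = contour_derivatives(c)
print(np.median(np.hypot(d1[:, 0], d1[:, 1])))      # approx. 2*pi/200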
• Discrete Distribution Representations: Since we use a finite number of bins to compute the distributions of the geometric properties, we need to choose the bin size to match the estimation accuracy that we are interested in. For example, if we want to compute the angle φ, which represents the estimate in the affine case for the values a/b or c/d, to within an accuracy of 0.5°, we need to use at least 720 bins for the appropriate distribution. Also, since the distributions are computed by assigning each descriptor measure to an appropriate bin, the computational complexity is directly related to the number of points on the curves for which the geometric descriptors are computed. In fact, if we have N points on the curves and we seek a resolution accuracy of Δ, the computational complexity is O(N/Δ). However, this computational load is substantially reduced by adopting a multi-resolution technique. For example, in the case of the estimation of angle, we can estimate the angle at a coarser resolution (say 2°) and then refine the estimate by computing the correct angle at a finer resolution around the current estimate. This is important in the case where the parameter is not separable (eg. a/b for the affine case), in which case the estimation of the MLE amounts to a search through the space of possible solutions. In our non-optimised implementation of the affine parameter estimation (in MATLAB), the computation time was typically on the order of tens of seconds on a Sparc Ultra 1. Faster implementations are easily conceivable. It may also be noted that in the case where the parameter is separable, ie. when it can be expressed as a function of the geometric descriptors alone (eg. rotation under the Euclidean model or scale under the Similarity model), the estimation of the parameter is a straightforward maximisation of correlation, which can
Figure 3: Two different parametrisations. The points shown on the two curves are the ones that would be the closest correspondences for different parametrisations of the curves.
be achieved using convolution. Hence the estimation is considerably faster in such cases and the
computational time is of the order of seconds.
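A sketch of the coarse-to-fine search described above; the grid size, number of levels and zoom factor are illustrative choices, and error_fn stands for whatever distribution distance is being minimised:

import numpy as np

def coarse_to_fine(error_fn, lo, hi, steps=32, levels=4, zoom=4.0):
    best = 0.5 * (lo + hi)
    for _ in range(levels):
        grid = np.linspace(lo, hi, steps)
        best = grid[int(np.argmin([error_fn(g) for g in grid]))]
        half = (hi - lo) / (2.0 * zoom)             # shrink the search window
        lo, hi = best - half, best + half
    return best

print(coarse_to_fine(lambda g: (g - 1.234) ** 2, -10.0, 10.0))  # close to 1.234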
With regard to accuracy, it may be noted that in the estimation of the fractions a/b or c/d we use the tangent function, which could lead to inaccurate solutions if the estimated angle φ is close to 90°. However, in such cases it may be noted that if φ = tan⁻¹(a/b) is close to 90°, the ratio b/a would be close to 0 and can be estimated accurately without any instabilities. This is the strategy we adopt in our implementation, where the angle tan⁻¹(a/b) is estimated and, if required, we switch the representation to the parametrisation that uses the ratio b/a instead of a/b. The same applies for the ratio of c and d.
• Parametrisation of curves: The estimation accuracy is also dependent on the manner in which the curves are parametrised. As noted in Section 3.3, if the curves are parametrised such that ds/d~s = 1, sampling the geometric properties at uniform intervals on the curve would be correct, since for every point we sample on a curve, we would ensure that we sample its corresponding point on its transformed version. However, for our method it is not necessary to parametrise the curves in a manner that satisfies the above criterion. Since we have a finite number of bins for representing any distribution, points that have values within a fraction of the bin size would all fall within the same bin.

To show that the above is true, consider the following. In Fig. 3, we show a curve with two different parametrisations. We also indicate a series of points on the curves according to two independent parametrisations, s and ~s. Let the step size in both cases be Δ. Obviously, if s and ~s satisfy the relationship ds/d~s = 1, then the sets of points would be in correspondence, and hence any measurement of the geometric properties would be equivalent up to the parameter that we want to estimate. Now consider the case when ds/d~s ≠ 1. Let the geometric property we measure be denoted F. Then the set of geometric descriptor values measured in the first instance, {F}, would be values computed at points according to the parametrisation s, and those in the second case, {~F}, would be values computed at points according to parametrisation ~s, which would be measured on a different set of points than those according to parametrisation s. Now every point in the first set would have a neighbour within distance Δ in the second set. Therefore, if we assume that dF/ds is bounded, we have

  $d_H(F, \tilde F) \le \max\left|\frac{dF}{ds}\right| \Delta,$

where $d_H(F, \tilde F)$ is the Hausdorff distance [9] between the sets F and ~F. Thus,

  $\lim_{\Delta \to 0} d_H(F, \tilde F) = 0.$

Therefore, if we sample the curves densely enough, we would ensure that for every point sampled in the first image, we would sample a point close enough to its corresponding point in the second image. This implies that we can effectively capture the statistical nature of the two distributions without needing the correct parametrisation. We have found this to be true in the course of our experimentation. A simple arc-length parametrisation followed by uniform sampling was found to be sufficient.
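A sketch of such an arc-length parametrisation, resampling a polygonal contour at uniform spacing along its length (uniformity is approximate up to the polyline discretisation):

import numpy as np

def resample_by_arclength(contour, n_samples):
    # resample a polygonal contour at uniform arc-length spacing
    seg = np.linalg.norm(np.diff(contour, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])     # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_samples)
    return np.stack([np.interp(targets, s, contour[:, 0]),
                     np.interp(targets, s, contour[:, 1])], axis=1)

# toy check: points bunched near u = 0 become evenly spaced
u = np.linspace(0.0, 1.0, 500) ** 2 * 2 * np.pi
c = np.stack([np.cos(u), np.sin(u)], axis=1)
cr = resample_by_arclength(c, 500)
print(np.std(np.linalg.norm(np.diff(cr, axis=0), axis=1)))  # near 0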
Figure 4: The images on the left have 25% overlap. The correctly registered images are shown in the right half of the image.
• Iterative Refinement: One of the underlying assumptions of our method is that the scenes being imaged are the same, ie. the contours in the two images arise from the same area, and hence minimising the metric would result in the correct estimate for the transformation parameter. However, in the case where the overlapping areas of the images are small, this assumption cannot be true and hence the parameter estimates may not be correct. However, by
applying the estimated transformation to the images, we can increase the percentage of the overlap
and hence better approximate the assumptions of a common scene. We have found that by extracting
the overlapping areas after applying this initial estimate and recomputing the transformation
using these areas we can get the correct transformation estimate. We illustrate this in the example
Fig. 4. The two images on the left were extracted from a larger image. The images are of size
300 \Theta 300 and the relative translation between the images is 150 pixels in both the x and y directions
(overlap is 25%). The initial estimate for the translation was (151,124) pixels. The estimated
translation was applied to the images and the overlapping areas of the two images were extracted
(The common area is now about 80%). Hence the new images are a better approximation of the
underlying assumption of a common scene. The required transformation is recomputed and the
new estimate is found to be correct, ie (150,150) pixels. The registered images are shown in the
right half of Fig. 4. We have found that in similar cases with little overlap, correct registration can
be achieved by applying one or two iterations of the above method.
In general, each parameter estimate assumes that the previous estimates are correct. Hence the
stagewise errors can accumulate. However at each stage, since the error is bounded, the maximum
possible error is also bounded. In practice, this problem has not been found to be significant.
Moreover, it can be easily corrected for by either increasing the resolution of our estimates at each
stage (by increasing the number of bins) or by recomputing the transformation using the idea of
iterative refinement as described above.
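A pseudocode-style sketch of this refinement loop; estimate_transform and overlap_regions are hypothetical stand-ins for the parameter estimation of Sections 2-3 and for cropping to the common area implied by the current estimate:

import numpy as np

def refine_alignment(img1, img2, estimate_transform, overlap_regions, n_iters=2):
    T = np.eye(3)                                   # homogeneous 2-D transform
    for _ in range(n_iters):
        r1, r2 = overlap_regions(img1, img2, T)     # crop to the common scene
        dT = estimate_transform(r1, r2)             # re-estimate on the crops
        T = dT @ T                                  # compose the refinements
    return T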
• Multiple peaks: In our method the estimation of any parameter amounts to the maximisation
of a given metric. Therefore we need to detect the largest peak of a given function. In the case
of the computation of rotation, we can see that there are multiple peaks (See Fig. 2) of which
the correct one is ideally the most prominent. However this uniqueness of the correct solution
may be violated in many practical scenarios, eg. when there is a lot of clutter, high amounts of
fragmentation or when there is a small amount of image overlap. In such cases, since the underlying
distributions no longer arise from curves that have a one-to-one correspondence across images, we
can get spurious peaks. A particular case is when we have rectangular buildings in aerial images.
Since buildings typically have strong edges that are oriented 90° apart, we would get peaks that are 90° away from the true solution. We tackle such situations by using a simple verification process to
eliminate spurious peaks. In the case of rotation, it is easy to see that any estimate can be verified
for its accuracy in registering the curves (edges) observed. Choosing the wrong peak would result in
totally incorrect alignment results. Thus in cases where there are multiple competing solutions for
a particular parameter, we maintain them as possible solutions and as we progress in our estimation
process, we eliminate the spurious solutions to arrive at a unique solution. In practice, we have
observed that there are at most 3 different estimates that we need to consider before choosing the
correct one.
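A sketch of this candidate-and-verify strategy; alignment_error is a hypothetical placeholder for the residual (e.g. a nearest-point distance between the aligned contours) used to check each candidate:

import numpy as np

def top_peaks(corr, k=3):
    # indices of local maxima of a circular 1-D curve, strongest first
    n = len(corr)
    idx = [i for i in range(n)
           if corr[i] >= corr[i - 1] and corr[i] >= corr[(i + 1) % n]]
    return sorted(idx, key=lambda i: -corr[i])[:k]

def verified_rotation(c1, c2, candidate_angles, alignment_error):
    # keep the candidate whose alignment residual on the contours is smallest
    errs = [alignment_error(c1, c2, th) for th in candidate_angles]
    return candidate_angles[int(np.argmin(errs))]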
4.2 Evaluation
In characterising the accuracy and robustness of any algorithm, typically additive white Gaussian
noise assumptions are made on the observed data. However in our method, it is not possible to
propagate such assumptions to arrive at appropriate noise models for the final representations used
in the estimation process. For example, even if we assume that the image noise is Gaussian, it is
extremely difficult and cumbersome to model the error in the resultant distribution of slope angle
or any other geometric property that we measure, since a number of processing steps are involved.
Moreover, a particularly strong assumption that many image alignment methods (eg. moment
based methods) make is that the curves are complete, ie. not fragmented. In practice this is seldom
the case and this fact has been one of the prime motivations for the development of our method
that uses a distributional representation to alleviate the problem of missing data due to curve frag-
mentation. Under such circumstances it is important to determine the robustness of the alignment
methods. However since it is not possible to accurately model the process of curve fragmentation,
we need to take recourse to an empirical evaluation.
To evaluate the accuracy and robustness of the proposed algorithm, we carried out the following
evaluations:

• Translation accuracy and comparison with mean estimation
• Rotation accuracy and comparison with moment based methods
• Robustness of affine estimation to noise
• Robustness of affine estimation to fragmentation
To standardise the evaluation, we used the same test pattern. This test pattern is shown in Fig. 5.
To test the accuracy of our translation method, we subjected the test pattern to a translation
of (10; 20) pixels to form the second curve. Subsequently both the test and translated patterns
were randomly fragmented to the extent that 10% (or 25%) of their total lengths was lost and
we estimated the translation using our method. To compare the accuracy of our method we also
computed the translation by the standard method of taking the difference of the centroids of the
two curves. This experiment was repeated a 1000 times and the results are tabulated in Table. 1 in
the form of the means and standard deviations of the two estimators. As can be easily noted, our
method is more accurate and more robust than the simple process of taking the mean.
              10% fragmentation       25% fragmentation
Method        Mean            Std     Mean            Std
True Value    (10.00, 20.00)  -       (10.00, 20.00)  -
Our Method
Mean based    (9.58, 21.25)   2.58    (8.37, 22.56)   7.86

Table 1: Comparison of translation estimation accuracy (1000 experiments each). The true translation is (10, 20) and the table shows the mean estimates in position and the standard deviation of our method and the mean-based method for two different fragmentation levels.
Method         Mean error   Std Dev of error
Our method     0.17
Moment based   1.24

Table 2: Comparison of rotation estimation accuracy. 1000 experiments were conducted with 25% fragmentation.
To compare the accuracy of rotation estimation, we used an ellipse and tested the recovery of its rotation. An ellipse was rotated by an angle chosen from the range [−π, π]. Both the original ellipse and the test ellipse were subjected to fragmentation of 25% of their total lengths. We then recovered the rotation angle using our method. For the sake of comparison, we also used a moment based method (solving for the rotation angle using the moment equations) to recover the angle of rotation. This experiment was repeated 1000 times for different rotation angles and the absolute error was computed for all the cases. Table 2 shows the results for the two cases. The mean and the standard deviation of the absolute error are shown. As can be noted, our method of rotation estimation is more accurate than a moment based one, which, like in the case of
translation is a non-robust estimator due to the integrative nature of the moment functions. In such
a case, once the moments are computed, we cannot "separate" out the contributions from spurious
curve segments etc. In this context we would like to point out that most traditional image alignment
techniques that use moments do so in a different manner. In such cases, moment invariants are used
for matching features and then algebraic equations are solved to calculate the required alignment
transformation ( [7]). This is quite different from using a moment based approach to compute the
transformation without using explicit matching. The few cases that do use the moments to calculate
the transformation (eg. [15]) make very strong assumptions that the scene has a single contour and
that there is no fragmentation. Such assumptions imply that there is explicit matching and do not
constitute a general correspondenceless technique.
Figure 5: The test pattern used for evaluation of the affine alignment accuracy.
Figure 6: The figure on the left (a) shows an example of noise contamination. The noise is white Gaussian with a standard deviation of 1 pixel. The graph on the right (b) shows the average registration error for different noise levels. Notice that the alignment is quite reasonable even for extremely high levels of noise!
To evaluate the estimation accuracy of our affine parameter estimation method, we test for the
following cases.
• Estimation accuracy under noise
• Estimation accuracy under fragmentation
For the case of estimation under noise, we apply a known affine transformation to the test pattern
shown in Fig. 5. Subsequently we add white Gaussian noise of different standard deviations (starting at 0.25 pixels) to every point on the test pattern. One such instance is shown in Fig. 6(a). To evaluate the accuracy
of the estimated transformation, we use the estimate to warp back the transformed contour onto
the original test pattern and we measure the root mean square error (RMSE) obtained between the
Figure 7: The circle in the figure on the left (a) is an outlier and does not appear in the second image. The dotted curve in the figure on the right (b) shows the registration achieved in the presence of the contaminating outlier. As can be noticed, the alignment is quite reasonable given the high levels of contamination due to the outlier. This can be easily refined to get the correct estimate.
Figure 8: The fragmented, transformed version of the test pattern.
test pattern and the estimated aligned pattern. The average RMSE errors for different noise levels
are shown in Fig. 6(b). As can be observed, the performance of our estimator degrades gracefully
and gives reliable estimates even under the severe amounts of noise shown in Fig. 6 a).
To study the effects of outliers on our estimation process, we consider the scenario in Fig. 7. The
large circle is not present in the second image, hence it will corrupt the distributions of different
geometric properties. In this case, the effect of the presence of the circle is severe since it contributes
large numbers of samples to the distributions. However as can be observed in Fig. 7, the estimate
of the transformation is reasonable considering the severe amounts of contamination. Given this
estimate, it is easy to reject the outliers and we can reestimate the transformation to get the correct
solution for the affine transformation between the two images.
To evaluate our method under fragmentation of contours, we used the same test pattern shown in
                 10% fragmentation   25% fragmentation
Measure          (in pixels)         (in pixels)
Median           0.45                0.97
Mean             0.58                1.11
Std. Dev         0.27                0.72

Table 3: RMSE for 1000 experiments with fragmentation.
Fig. 5 and subjected it to the same affine transformation used in the evaluation for robustness to
noise. Then we fragmented both the contours and estimated the transformation. We considered
fragmentations of 10% and 25% of the total length and repeated the experiments 1000 times in each case (one instance of the fragmented pattern is shown in Fig. 8). The results are tabulated in Table 3. As can be noted, our estimation process is fairly robust to even severe fragmentations of
the order of 25% of pixels in both contours.
5 Results

In this section we present the results obtained by applying our method to a variety of images. The
same technique is used for all the examples. The implementation is detailed in Section 4.
Figure 9: An image and its affine transformed version.
Fig. 9 shows a synthetic image and its affine transformed version. The resulting registration achieved
is shown in Fig. 10 and has sub-pixel accuracy (The root mean squared error (RMSE) is about 0.5
pixels). In Fig. 11 we show the registration achieved when about 25% of the image is occluded. As
can be observed from the result, the registration accuracy is not affected since the nature of the
Figure 10: Alignment result of the images in Fig. 9. The transformed contours of the second image are shown as dotted curves.
distributions are not altered to the extent that the MLE's are significantly perturbed.
Fig. 12 shows a mosaic constructed by aligning images from a video sequence in a common frame of
reference using a Euclidean model. The sequence was obtained by a hand-held camera and involved
translation and rotation. In this case a Euclidean model was sufficient for achieving accurate
alignment.
Fig. 13 shows the alignment of aerial images of the Mojave desert obtained from a balloon flying over
the area. The alignment achieved is accurate as is evident from the alignment of the image features
like the roads, rock outcrops etc. In Fig. 14 we show the results for aligning a pair of images from
the Landsat Thematic Mapper (TM). The images are from different bands of the electro-magnetic
spectrum and therefore have different radiometric properties. The images are correctly aligned as
can be observed from the continuity of the coastline and the alignment of features that run across
the two images. Fig. 15 shows the results obtained for another pair of Landsat TM images. We
used the quasi-affine transformation model to achieve the alignment for Figs. 13, 14, 15.
As an illustration of our algorithm for affine transformation estimation, we show two examples
from different application domains. In Fig. 16 we show the registration achieved for two MRI images of
different modalities. The image on the left in Fig. 16 is a proton density MRI image and the one in
the middle is the corresponding T2 weighted image that has been subjected to an arbitrary affine
transformation. The image on the right shows the alignment of a single contour from the different
modalities. It may be noted that the correct alignment is achieved in spite of the difference in the
photometric properties of the images. The alignment can be further refined if necessary using any
of the standard energy minimisation techniques.
Figure 11: Alignment of an occluded image (original image, occluded transformed image, registered image). Only the transformed version of the occluded image is shown. The registration achieved is to within subpixel accuracy.
Finally, we demonstrate the applicability of our method to alignment of multi-sensor images that
have large transformations between them. In Fig. 17, we show two satellite images of a river taken
under the SPOT and TM modalities. As can be observed, the image intensity patterns are quite
different and therefore any intensity based "direct method" is not directly applicable. Moreover,
the relative scaling between the images is large (about 1:4). Such large scalings are seldom dealt
with using optimisation schemes. Our alignment of the two images is shown in Fig. 18. We use a
checkerboard pattern to illustrate the alignment of the two images from different sensors. Minor
discrepancies do exist in the alignment (probably due to the deviation of the images from the affine
model), but the overall alignment is very good especially given the large scaling between the two
images. If desired, the final alignment can be easily refined using different techniques.
All the above examples illustrate the ability of our algorithm in achieving the correct alignment for a
variety of possible scenarios. We have also demonstrated that our method gives good performance
for a broad range of cases especially in the case where the transformations between images are
large and when there is a small amount of overlap between the images, something that energy
minimisation methods would fail to achieve.
6 Discussions
An advantage of our method is the fact that we use the same method for computing the alignment
for all cases. This is possible since we use the same distributional framework of local geometric
Figure 12: Mosaic created out of a video sequence. Some of the frames used are shown in (a); the mosaic is shown in (b).
properties to estimate the transformation parameters. As a result, the method works just as well
for images with a large transformation between them as with images with small transformations.
This is not always the case with energy minimisation methods which typically require the solution
to be started fairly close to the true solution to avoid being trapped in local minima. Also any
explicit need for domain dependent knowledge is avoided since the image primitives we use are
contours that are usually extractable using standard edge-detection techniques. We believe that
this is a significant advantage in the case of multi-sensor images since most methods that deal with
multi-sensor image alignment utilise specific knowledge about the imaging geometry and radiometry
to tune their algorithms to work on the specific domains of application.
The use of a distributional method has other significant advantages too. Fragmentation of contours
is easily handled since the local geometric properties can still be computed on the fragments without
any significant loss of information. This is possible since if we have a contour, say, of length 200
pixels, fragmented into many segments that add up to say 180 pixels in total length, then we still
have 180 points on which we can compute the geometric properties. The probability distribution
that we estimate is not significantly changed. Therefore, we can observe a graceful degradation of
the method with increasing loss of information due to occlusion or fragmentation. As demonstrated
in the evaluations, such robustness would not be possible with the use of traditional global techniques
like moments unless special care is taken to handle the change in shape due to occlusion or
fragmentation. It is easy to note that the results of a moment-based method would be perturbed
significantly due to small amounts of fragmentation of contours, while our method would still give
the correct estimates.
Figure 13: Alignment of a set of aerial images.

In most voting based schemes, there is a combinatorial explosion of possible solutions that need to be checked as we increase the dimensionality of the search space or, more significantly, as the number of observations is increased. This is so since most voting schemes use all possible combinations of "hypothesised" feature matches to populate the space of possible solutions and then search for a maximum in this space. In our case, we have two advantages in this regard. Firstly, all computations are carried out independently on each image and it is only in the final stage that the two distributions are compared. Secondly, each stage of parameter estimation is one-dimensional, since we parametrise each transformation into a set of independently estimated parameters. As a result, the computational load is reduced. Also, detection of peaks in one-dimensional functions is easier than in higher dimensions, especially under noisy scenarios.
As demonstrated by our examples, the theoretical framework we have developed works well for
many real-life examples. While the existence of clutter, occlusion or the fragmentation of image
contours does violate the underlying assumptions, our method is robust enough to be able to easily
handle these problems. However it would be advantageous to modify the metric that we minimise
so as to take into account knowledge about the scene. If we can determine which parts of the image
contain areas that are visible in both images, which parts are occluded etc. then we should modify
the metric to exploit this information to make our method more robust. One limitation of our
framework is that the scene is assumed to be approximately planar which may not be the case for
scenarios where the perspective effects in the images are dominant. However we can still use our
method to get a reasonable estimate of the true transformation.
Figure 14: Alignment of LANDSAT TM images from different bands.

Figure 15: Alignment of two LANDSAT TM images.
Figure 16: The image on the left is a proton density image, the one in the middle is T2 weighted, and the image on the right shows the registration of a single contour of the two modalities.
7 Conclusion
We have described a framework for image alignment that does not use explicit feature correspon-
dences. We have demonstrated the effectiveness and explained the advantages of using a distributional
framework for computation of the parameters required for image alignment. This framework
is robust and is more general than many existing methods.
Acknowledgments
We would like to acknowledge E. Rignot , B. S. Manjunath and maintainers of the UCSB Image
Registration web site for providing some of the data sets used in this paper. We would also like to
thank R. Chellappa, Z. Duric and J. Oliensis for comments on the paper.
References

[1] "A Robust, Correspondenceless, Translation-Determining Algorithm"
[2] "Hierarchical Model-Based Motion Estimation"
[3] "Shape from Texture: Estimation, Isotropy and Moments"
[4] "A Survey of Image Registration Techniques"
[5] "Invariant Signatures for Planar Shape Recognition Under Partial Occlusion"
[6] "Recovering 3D Rigid Motion Without Correspondence"
[7] "A Moment-Based Approach to Registration of Images with Affine Geometric Distortion"
[8] "Image Registration Without Explicit Point Correspondences"
[9] "Comparing Images Using the Hausdorff Distance"
[10] "Robust Multi-Sensor Image Alignment"
[11] "Recovery of Ego-motion Using Image Stabilisation"
[12] "Recovery of Global Nonrigid Motion - A Model Based Approach without Point Correspondences"
[13] "A Contour-Based Approach to Multisensor Image Registration"
[14] "Smooth Differentiation Filters For Images"
[15] "Using moments to acquire the motion parameters of a deformable object without correspondences"
[16] "Extracting the affine transformation from textured moments"
[17] "Compact Representations of Videos through Dominant and Multiple Motion Estimation"
[18] "Multisensor Image Registration Using Feature Consensus"
[19] "Alignment by Maximization of Mutual Information"
[20] "Noise Resistant Invariants of Curves"
[21] "Recovering surface shape and orientation from texture"
| geometric properties;statistical distributions;pose estimation;correspondenceless image alignment |
319267 | Line-Based Face Recognition under Varying Pose. | Abstract: Much research in human face recognition involves fronto-parallel face images, constrained rotations in and out of the plane, and operates under strict imaging conditions such as controlled illumination and limited facial expressions. Face recognition using multiple views in the viewing sphere is a more difficult task since face rotations out of the imaging plane can introduce occlusion of facial structures. In this paper, we propose a novel image-based face recognition algorithm that uses a set of random rectilinear line segments of 2D face image views as the underlying image representation, together with a nearest-neighbor classifier as the line matching scheme. The combination of 1D line segments exploits the inherent coherence in one or more 2D face image views in the viewing sphere. The algorithm achieves high generalization recognition rates for rotations both in and out of the plane, is robust to scaling, and is computationally efficient. Results show that the classification accuracy of the algorithm is superior compared with benchmark algorithms and is able to recognize test views in quasi-real-time. | Introduction
Automated face recognition (AFR) has attracted much interest over the past
few years. Such interest has been motivated by the growth in applications in
many areas including, face identification in law enforcement and forensics,
user authentication in building access or automatic transaction machines,
indexing of, and searching for, faces in video databases, intelligent user
interfaces, etc. AFR generally consists of different components namely, face
detection to determine the position and size of a human face in an image (see,
for example, Sung and Poggio [15]), face recognition to compare an input
face against models of faces that are stored in a database of known faces
and indicating if a match is found, and face verification for the purpose
of authentication and/or identification. In this paper we study the face recognition problem and assume that the face location in the image is known.
Unfortunately, face recognition is difficult for a variety of reasons. Firstly,
different faces may appear very similar, thereby necessitating an exacting
discriminant task. Secondly, different views of the same face may appear
quite different due to imaging constraints, such as changes in illumination
and variability in facial expressions and, due to the presence of accessories
such as glasses, beards, etc. Finally, when the face undergoes rotations out
of the imaging plane, a large amount of detailed facial structure may be oc-
cluded. Therefore, in many implementations of face recognition algorithms,
images are taken in a constrained environment with controlled illumination,
minimal occlusions of facial structures, uncluttered background, and so on.
The most popular approaches in the face recognition literature are mainly
identified by their differences in the input representation. Two major input
representations are used, namely the geometric, feature-based approach and
the example- or image-based approach. The matching procedure of input
and model faces used in the majority of the geometric or image-based approaches
utilises fairly standard distance metrics like the Euclidean distance
and correlation.
The feature-based technique extracts and normalises a vector of geometric
descriptors of biometric facial components such as the eyebrow thickness,
nose anchor points, chin shape, zygomatic breadth etc. The vector is then
compared with, or matched against, the stored model face vectors. This ap-
proach, however, requires the solution of the correspondence problem, that
is, the facial vector components must refer to the facial features in the im-
age. Also, model face generation can be time consuming, particularly for
large face databases, and the complexity of geometrical descriptors can be
restrictive. Model generation and matching with non-fronto-parallel poses
and varying illumination are more complex and can only be achieved at the
expense of increased computation times (for example, Brunelli [3]). The
technique was pioneered by Kanade [8] and, more recently, by workers such
as Brunelli et al [4].
The motivation for the image-based approach is its inherent simplicity
compared with the feature-based approach owing to the fact that it does
not use any detailed biometric knowledge of the human face. Image-based
techniques include any variation in face appearance due to changes in pose
and lighting by simply storing many different 2-D views of the face. These
techniques use either the pixel-based bidimensional array representation of
the entire face image or a set of transformed (e.g., gradient filtered) images
or template sub-images of facial features as the image representation. An
image-based metric, such as correlation, is then used to match the resulting
image with the set of model images. Two popular methods are used in
the context of image-based face recognition techniques, namely, template-based matching and neural networks. In the template-based approach, the face is
represented as a set of templates of the major facial features which are
then matched with the prototypical model face templates (see, for example,
Baron [2]). Extensions to this technique include low dimensional coding to
simplify the template representation and improve the performance of the
template matching process (see, for example, the "eigenfaces" of Turk and
Pentland [16]) or wavelets, stochastic modeling with Hidden Markov Models
(HMMs) ([14]), and elastic face transforms to model the deformation of the
face under a rotation in depth ([19]). Neural network-based image techniques
use an input image representation that is the grey-level pixel-based image
or transformed image which is used as an input to one of a variety of neural
network architectures including multi-layer, radial basis function and auto-associative
networks (see, for example, Edelman et al [7]).
Although geometric or image-based approaches are conceptually well-suited
to face recognition, many of the techniques developed to date have
been demonstrated on small, simplistic face databases with strict imaging
constraints requiring, in many cases, large processing times for training
and/or recognition. In this paper we propose a line-based face recognition
technique under varying pose that is computationally-efficient, has good
recognition rates, handles face rotations both in, and out of, the imaging
plane and is robust to variations in scale. The image representation scheme
used is a set of random one-dimensional rectilinear line segments of the grey-level
face image and the matching scheme is an efficient nearest-neighbour
classifier. The scheme exploits the uniqueness of a given face image resulting
from the combination of different subsets of line segments of the image. We
present an overview of the performance of current face recognition systems
in Section 2. We then describe our face recognition algorithm in Section 3
and present the face recognition experiments with results and comparisons
with other benchmark algorithms in Sections 4 and 5.2, respectively. Finally,
we conclude and give future work in Section 6.
2 Performance of Current Face Recognition Systems
We review the comparative performance of some face recognition systems
that use the geometric or image-based approaches. A few authors report
comparisons of two or more systems. In a later section we will make a reference
to the performance of some of these systems when we provide results
for our algorithm (see Section 5.2). Achermann and Bunke [1] compare
the eigenface classifier, a classifier based on hidden Markov models (HMM)
and a profile classifier on a 30-person database with moderate pose variation
amongst the ten views per person. The eigenface classifier performed
best (94.7%), followed by the HMM classifier (90.0%) and the profile based
classifier (85.0%). Ranganath and Arun [12] compared radial basis functions
and a nearest-neighbour classifier, using (i) an eigenface-based and
(ii) a wavelet-based image representation. They observed that the radial basis
function classifier performed better than the nearest-neighbour classifier,
and that the eigenface representation offered somewhat better discrimination
than did the wavelet based representation. Brunelli and Poggio [4]
compared a feature based approach with a template based approach on a
47-person database of frontal views. The template based technique achieved
100% correct classification while the feature based method achieved 90% but
was faster. Zhang et al [20] compared three face recognition methods: the
eigenface approach, a two-network based connectionist approach and a flexible
template approach. The two networks are an auto-associative network
for feature extraction and a classification network for recognition. With the
template approach, Gabor filters were used to pre-process the image and
elastic matching with an energy function was used for recognition. The tests
were performed on four individual databases and on the combined set of 113
persons. The eigenface classifier was found to perform well for the individual
data-bases where illumination conditions are constant, but performed
badly (66%) on the combined database because of different light conditions
amongst the databases. The flexible template approach performed well on
all data including the combined database (93%). The two neural networks
did not perform well. A recent survey on face recognition in general was
compiled by Chellappa et al [5]; Valentin et al [18] surveyed the use of connectionist
models in particular.
3 An Efficient Face Recognition Algorithm
Here, we briefly outline the face recognition algorithm which is based on a
more general object recognition algorithm given in [6]. We are interested in
classifying K faces, given V_k image views of
each unique face F_k, obtained by regular sampling in the viewing sphere.
The aim is to recognize one of the K faces from one or more test image
views.
A face image I is modeled as a regular lattice of w × h pixels, with each
pixel P having a fixed grey-level depth. We first classify the pixels
into two classes, C_{p,1} and C_{p,2}. Class C_{p,1} consists of the background
pixels in the face image I, and class C_{p,2} consists of all those pixels that
represent a face in I, such that C_{p,1} ∩ C_{p,2} = ∅. We are interested in those
pixels in C_{p,2} with neighbours in C_{p,1}, and call the set of those face boundary
pixels β.
Consider l pixel values extracted along a straight line between two points
in the image, comprising l samples of image data. The number of line pixels (or
line dimensionality) is small enough for efficient classification but of course
may not capture the information necessary for correct classification. However,
with some reduced probability (larger than random) the line predicts
the correct face class. The algorithm we propose is based on the observation
that the classification of many such lines from a face image I leads to an
overall probability of correct classification (PCC) which approaches 1. An
example set of face lines is shown in Figure 1.
Figure 1: Example set of random lines in a face view.
For any two boundary points B_1, B_2 ∈ β in an image view V_k, such
that the Euclidean distance between B_1 and B_2 is greater than a minimum
distance, let L = (L_1, L_2, ..., L_l) be a vector of length l, where the L_i are
the equi-spaced connected intensity values
along the image rectilinear segment from B_1 to B_2. The
line segment length l is a constant parameter determined a priori; larger
values of l result in better classification rates at the expense of increased
processing times. All lines are scaled to the value l by pixel interpolation.
We call L a lattice line. The exact endpoints of L
need not lie on a corner of the boundary pixels B_1 and B_2.
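As an illustration of this sampling step, the following C++ sketch extracts one lattice line of fixed dimensionality l between two boundary points; the Image type with its bilinear lookup is an assumption made for this sketch and is not part of the original system.

#include <algorithm>
#include <vector>

struct Point { double x, y; };

// Hypothetical grey-level image with bilinear lookup (illustrative only).
struct Image {
    int width, height;
    std::vector<unsigned char> pixels;             // row-major grey values
    double at(double x, double y) const {
        int x0 = std::min(std::max(int(x), 0), width - 2);
        int y0 = std::min(std::max(int(y), 0), height - 2);
        double fx = x - x0, fy = y - y0;
        auto p = [&](int xi, int yi) { return double(pixels[yi * width + xi]); };
        return (1 - fx) * (1 - fy) * p(x0, y0) + fx * (1 - fy) * p(x0 + 1, y0)
             + (1 - fx) * fy * p(x0, y0 + 1) + fx * fy * p(x0 + 1, y0 + 1);
    }
};

// Sample l equi-spaced intensity values along the segment from b1 to b2,
// scaling every line to the fixed dimensionality l by interpolation.
std::vector<double> latticeLine(const Image& img, Point b1, Point b2, int l) {
    std::vector<double> line(l);
    for (int i = 0; i < l; ++i) {
        double t = (l > 1) ? double(i) / (l - 1) : 0.0;
        line[i] = img.at(b1.x + t * (b2.x - b1.x), b1.y + t * (b2.y - b1.y));
    }
    return line;
}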
For each face class F_k in the training set of V_k image views, we randomly
generate N_k lattice lines (N_{V_k} lines per image view per face class),
denoted L_{i,k}, i = 1, ..., N_k. There are Σ_k N_k
lattice lines for K face classes. The set of lattice lines for all K face classes
is given by:
Ψ = { L_{i,k} : 1 ≤ i ≤ N_k, 1 ≤ k ≤ K }.
We define the distance D(L_{r,s}, L_{m,n}) between two lattice lines L_{r,s} and
L_{m,n} as a sum, over the l line positions, of the differences between the
Δ-shifted intensity values, where Δ = μ(L_{m,n}) − μ(L_{r,s}) and
μ(L_{r,s}) = (1/l) Σ_{i=1}^{l} L_{r,s,i}. The value of Δ has the effect of shifting the two
lines towards the same average value, making the distance measure invariant
to illumination intensity.
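A minimal C++ sketch of this distance measure follows; the sum-of-squared-differences form of the pointwise term is our assumption, while the shift Δ follows the definition above.

#include <cstddef>
#include <numeric>
#include <vector>

// Mean intensity of a lattice line.
double mean(const std::vector<double>& L) {
    return std::accumulate(L.begin(), L.end(), 0.0) / L.size();
}

// Distance between two lattice lines of equal dimensionality l. The shift
// delta moves both lines towards the same average value, which makes the
// measure invariant to illumination intensity; the squared pointwise term
// is an assumption of this sketch.
double distance(const std::vector<double>& a, const std::vector<double>& b) {
    double delta = mean(b) - mean(a);
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double diff = a[i] + delta - b[i];
        d += diff * diff;
    }
    return d;
}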
Given an unseen test lattice line L_j, where generally L_j ∉ Ψ, we define
L_{j,*} to be the nearest lattice line to L_j in Ψ under D. The Nearest-Neighbour
Classifier (NNC) maps L_j to the class F_k to which L_{j,*} belongs;
that is, NNC(L_j) = F_k. We write D_j for D(L_j, L_{j,*}).
We assume that there are N test lines L_j, j = 1, ..., N, and that for
each line we have obtained a nearest neighbour L_{j,*} and a distance D_j.
Let D_min and D_max denote the smallest and largest of these nearest-neighbour
distances. We further define a variance var_j for line L_j over its smallest
per-class distances, and the maximum variance var_max over all test lines.
The measure of confidence that NNC(L_j) is correct, conf_j, is a weighted
combination of a distance term, normalised by (D_max − D_min), and a
variance term, normalised by var_max. The variables p_1 and p_2 control the shape of
the confidence function, whereas w_1 and w_2 are the weight magnitudes of
the distance and variance components, respectively.
We now state the face recognition algorithm.
The Line-Based Face Recognition Algorithm:
To classify a face F_t for which we know its boundary pixel set β, we randomly
select N lattice lines L_j, j = 1, ..., N. For each face class F_k we
define the total confidence TC_k as the sum of the confidences conf_j of all
test lines L_j that the NNC assigns to F_k. We assign F_t
to the class F_g for which TC_g is maximum; that is,
if TC_g = max_{k=1,...,K} TC_k,
then F_t is assigned to class F_g.
Because F t is assigned to class F g based on the combination of many
assignments of individual lines, we may assess the likelihood that our decision
is correct by the agreement within the line assignments. Specifically,
we define the confidence measure factor as the ratio
CMF = (TC_g − TC_{g'}) / N,
where TC_{g'} is the second largest compounded confidence measure that a
class obtained. As our decision is based on the maximum score, the associated
confidence CMF is proportional to the difference with the second
largest score. The denominator normalises CMF for different numbers of
testing lines.
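The voting and confidence computation can be sketched in C++ as follows; LineVote is a hypothetical helper type holding, for one test line, the class chosen by the NNC and the associated confidence conf_j.

#include <algorithm>
#include <cstddef>
#include <vector>

struct LineVote { int cls; double conf; };   // NNC result for one test line

// Accumulate the compounded confidences TC_k and return the winning class
// g; cmf receives the confidence measure factor (TC_g - TC_g') / N.
int classify(const std::vector<LineVote>& votes, int K, double& cmf) {
    std::vector<double> tc(K, 0.0);
    for (std::size_t j = 0; j < votes.size(); ++j)
        tc[votes[j].cls] += votes[j].conf;
    int g = int(std::max_element(tc.begin(), tc.end()) - tc.begin());
    double second = 0.0;
    for (int k = 0; k < K; ++k)
        if (k != g) second = std::max(second, tc[k]);
    cmf = (tc[g] - second) / votes.size();   // normalised by N test lines
    return g;
}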
It is a considerable advantage if a classifier were to supply a confidence
measure factor with its decision, as the user is then given information about
which assignments are more likely to be wrong so that extra caution can
be exercised in those cases. Our implementation makes use of CMF by
means of several decision stages. Firstly, the number of testing lines is kept
small, an initial decision is arrived at quickly, and the confidence measure
factor is evaluated. Secondly, if the confidence measure factor is smaller
than twice the minimum confidence measure factor threshold CMF min then
the number of testing lines is doubled and a second decision is made at the
cost of extra time. Finally, if the second confidence measure factor is smaller
than CMF_min, the 'doubling process' is repeated one last time.
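These decision stages can be summarised in the following sketch, assuming a hypothetical helper classifyWithNLines that samples n random test lines and returns the decision together with its confidence measure factor.

// Staged decision: start with few test lines and double them, at most
// twice, while the confidence measure factor stays below its threshold.
int stagedDecision(int nInitial, double cmfMin,
                   int (*classifyWithNLines)(int n, double& cmf)) {
    double cmf = 0.0;
    int n = nInitial;
    int cls = classifyWithNLines(n, cmf);        // quick initial decision
    if (cmf < 2.0 * cmfMin) {                    // second decision, doubled
        n *= 2;
        cls = classifyWithNLines(n, cmf);
        if (cmf < cmfMin)                        // final doubling stage
            cls = classifyWithNLines(n * 2, cmf);
    }
    return cls;
}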
The above algorithm is surprisingly simple and, as we shall demonstrate,
is particularly efficient in both the recognition rate performance and computation
time. Moreover the algorithm has some inherent advantages. Firstly,
due to the randomised sampling of the image, the algorithm is robust to rotations
of the face in the plane. Secondly, we reason that multiple views are
even better suited to our 1D line-based algorithm and is better able to handle
head rotations out of the plane than 2D view-based algorithms. Thirdly,
since the lines run from one face-boundary to another and have fixed di-
mensionality, the algorithm is also scale-invariant. Fourthly, the choice of
distance measure ensures that it is tolerant to changes in illumination inten-
sity. Finally, because all lines are sampled from the entire head section of
the image, the algorithm is also robust to changes in facial expressions and
in the presence or absence of glasses or other accessories. Unfortunately,
the current algorithm is not robust to changes in illumination direction such
as found in outdoor settings or successful in cluttered scenes (such as in a
video sequence).
4 Face Databases and Experimental Methodology
In order to evaluate the performance of the algorithm we used two face
databases namely, the University of Bern (UB) [17] and the Olivetti & Oracle
Research Laboratory (ORL) [11] face databases. The UB face database
contains ten frontal face images for each of 30 persons, acquired under controlled
lighting conditions. The database is characterised by small changes in
facial expressions and intermediate changes (\Sigma30 degrees out of the plane)
in head pose, with two images for each of the poses right, left, up, down
and straight. The ORL face database consists of ten frontal face images for
each of 40 persons (4 females and 36 male subjects). There are intermediate
changes in facial expression and unstructured intermediate changes (\Sigma20
degrees) in head pose. Some persons wear glasses in some images and the
images were taken under different lighting conditions. In our experiments
we combined the two databases to form one larger one containing 700 images
of 70 persons. The data set was used to assess the classification rate of our
algorithm by cross-validation, using five images per person for training and
the other five for testing.
5 Experimental Results
Prior to evaluating the overall performance of the algorithm for the combined
face database, various parameters were optimised. These included
the number of training lines (N k ), the number of test lines (N ), the line
dimensionality (l), and the decision confidence measure factor (CMF ).
5.1 Evaluation of Parameters
In order to investigate the effect of various parameter settings on classification
time and correctness, we ran four experiments, varying one parameter
at a time. In each experiment, the parameters were resampled over the combined
face database. For example, for evaluating the probability of correct
classification (PCC) for different numbers of test lines, each set of test lines
was obtained by resampling the combined face database.
The results are presented in Figures 2 to 6. The vertical axis shows the
classification accuracy (probability of correct classification, PCC) as a per-
centage, the horizontal axis represents the computation time in seconds, per
view. Each figure shows two lines, the upper line indicates the percentage of
correctly classified persons based on the majority of the view classifications,
the lower line shows the percentage of correctly classified individual views.
Each point on both lines corresponds to one of the parameter settings (see
the figure caption for their values). Each graph also shows the standard
deviations obtained for the 10 repetitions undertaken for each experiment.
For each repetition, the training and test lines were resampled and the PCC
recorded.
In the first experiment, the number of test lines N was varied from 20 to
400 (17 values in total). The number of training lines, N k , was set to 200, the
line dimensionality was set to l = 32 and the minimum confidence factor was
set to zero. Figure 2 shows the results. As expected, both the computation
time and the classification accuracy increase almost monotonically with the
number of test lines. The increase in time is approximately linear while the
accuracy first increases rapidly and then levels out. A distinctive 'knee' in
the curve of the percentage of correctly classified persons occurs at a moderate
number of test lines, with a classification accuracy of approximately 95%. This
corresponds to a ratio of the number of test pixels to image size equal to
0.18, equivalent to an effective dimensionality reduction by a factor of
5.5.
Figure 2: PCC versus time for 17 values of the number of testing lines
N = 20, 30, ..., 110, 120, 150, 200, ..., 400. Here, the number of training lines
N_k = 200, the line dimensionality l = 32, and the confidence threshold
CMF_min = 0.
In the second experiment, the number of training lines was varied from
50 to 400 (9 values in total). The number of testing lines was fixed at 150,
the line dimensionality was set to 32 and the minimum confidence parameter
was set to 0. Figure 3 shows the results.
Figure 3: PCC versus time for 9 values of the number of training lines N_k.
Again, as expected, both the time and the classification accuracy increase
with increased number of training lines. However, the shapes of the curves
are flatter than those for the test lines (Figure 2), i.e. there is no distinctive
'knee' where the curve flattens out. This is as expected, as increasing the
number of training lines increases the PCC of a test line, while increasing
the number of test lines only decreases the variance in the procedure that
forms a decision from the classifications of all test lines. Once that variance
is reduced significantly, the inaccuracy due to a finite number of training
views dominates and the curve flattens out.
In the third experiment, the minimum confidence measure factor value
was varied from 0.0 to 0.4 (8 values in total). The number of training
lines was set to 300, the initial number of testing lines set to 50, and the
line dimensionality set to 32. The results are shown in Figure 4. The
experiment was repeated with the number of initial testing lines set to 100
(see Figure 5). Both Figures 4 and 5 look very similar, with the larger values
of the minimum confidence factor resulting in larger computation times.
However, an improved accuracy for the case when the initial number of test
lines is doubled is observed. On the other hand, both figures converge to a
similar classification accuracy. For example, for a time equal to 0.7 seconds,
both cases achieve near 100% accuracy for the persons and greater than 90%
accuracy for the views. This means we can decrease the number of initial
testing lines without ill effect so long as we increase the minimum confidence
measure factor accordingly, and vice versa.
Figure 4: PCC versus time for 8 values of the minimum confidence measure
factor CMF_min (initial number of testing lines set to 50).
Figure 5: PCC versus time for 7 values of the minimum confidence measure
factor CMF_min (initial number of testing lines set to 100).
In the last experiment, the line dimensionality l was varied upwards
from l = 8. The number of training lines was set to 300, the
initial number of testing lines was set to 100 and the minimum confidence
value was set to 0.1. The results are shown in Figure 6. The classification
accuracy first increases rapidly, then stabilizes, with the change occurring
between l = 16 and l = 24. The timings first decrease then increase. This is
because the confidence value is not set to zero, and small dimensionalities
triggered frequent increases in the number of test lines. However, it is perhaps
surprising that, for l = 16, the classification rate for persons is already
near 100%.
Figure 6: PCC versus time for values of the line dimensionality l = 8, 12, ...
The results indicate that, despite the very low PCC of individual lines (we
observed values around 0.1), combining a large number of such classifications
results in an overall PCC which is much higher. In fact, the graphs indicate
that as the number of training lines goes to infinity, the PCC approaches
1.0.
In the following section we present results for the combined face database
using optimal parameter values. The number of training and test lines per
face view were chosen to be equal to 200 and 80, respectively, together with
a fixed line dimensionality and a minimum confidence measure factor equal to 0.4.
5.2 Face and View Recognition Performance Results
We ran two sets of experiments using the optimal parameter values. In the
first set we selected the training set by inspection in order to provide a good
cover of the varying head positions and facial expressions. In the second set
of experiments we randomly selected the image views, repeating the process
three times. We expect to obtain an inferior recognition rate for the random
sampling of the training set compared with the selected sampling. Most real-world
applications would allow selecting good training views as with our first
approach. However, in some applications such as video sequences, random
sampling is more realistic. We also evaluated the algorithm for both face
recognition and view recognition. That is, in the former case, the algorithm
is presented with all the test image views for a given face whereas, in the
latter case, the algorithm is presented with just a single test view (as would
be found in, for example, an access control situation). Again, we expect
lower recognition performance results for the view recognition as compared
with the face recognition.
Table 1 shows the results obtained for both the regular (selected) and
random set of training and test views and, for both face and view recognition,
for the combined face database. Also shown are the maximum and minimum
recognition rates obtained in all experiments.
Training     Classification Accuracy (%)          Training Time    Test Time
Procedure    Face Recognition   View Recognition  per view (sec)   per view (sec)
Random       98.0               80.8              0.8 max          0.79 max
Selected     100.0              88.6              0.8 max          0.69 max
Selected     100.0              99.8              1.6 max          5.0 max

Table 1: Face and view recognition rates for random and selected sampling
of training poses for the combined face database.
With selective (non-random) view cover, we found that 100% of persons
are correctly classified if approximately 0.7 seconds or more are spent per
view to form the decision. Further experiments have shown that we can
reduce the test times at the expense of a reduced recognition rate - approximately
85% of persons were correctly classified if a maximum of 0.1 seconds
were allowed. On the other hand, a significant improvement in the view
recognition rate can be achieved if a higher test time is allowed - nearly
100% of all views are correctly classified if up to 5 seconds are used for test-
ing. For random sampling we observe slightly reduced recognition rates and
fractionally longer test times (per view). The results confirm that a relatively
small number of views is sufficient for good view-based classification
performance if the test views are sufficiently covered by the training views.
Figures 7 and 8 show examples of face view instances that were misclassified
by the algorithm in one of the experiments. The second and third
rows of faces are training examples of two persons (one person for the second
row and one for the third row). The first row shows the test image view
corresponding to the person in the third row that was misclassified as the
person in the second row. As can be seen, it is not always straightforward
to discriminate between the two persons in some poses.
Figure 7: Example misclassification of a test view for 5 training views of
two persons from the ORL database (see text for explanation).
Figure 8: Example misclassification of a test view for 5 training views of
two persons from the UB database (see text for explanation).
We also tested our method on the individual databases. Our method
achieved 100% correct recognition of views on the ORL database using an
average of 3.9 seconds per view for testing, and 100% recognition of views on
the Bern database using on average 1.5 seconds per view. More computation
time is spent classifying views from the ORL database because it contains
many more views which are difficult to recognise. The parameter settings
for these results were: 500 training lines, 120 initial testing lines, the line
dimensionality was 24 and the minimum confidence factor was set to 0.5. For
comparison, we include some benchmark results obtained by other workers
on the same face databases (see Section 2). Samaria [13] used the HMM
implementation, Zhang et al [20] implemented both the eigenface and elastic
matching algorithms, Lin et al [10] use intensity and edge information with
a neural network, Lawrence et al [9] tested local image sampling with two
neural networks as well as the eigenface classifier, and Achermann et al [1]
used a combination of eigenface, HMM and profile classifiers. Their results
are summarised in Table 2. We include results for the line-based algorithm
in the last row of the table.
Authors               Classifier Method   ORL     Bern
Zhang et al [20]      Eigenface           80.0    87.0
Zhang et al [20]      Elastic Matching    80.0    93.0
Lin et al [10]        Neural network      96.0    -
Lawrence et al [9]    Neural network      96.2    -
Lawrence et al [9]    Eigenface           89.5    -
Achermann et al [1]   HMM                 -       90.0
Achermann et al [1]   Eigenface           -       94.7
Achermann et al [1]   Combination         -       99.7
Aeberhard et al       Line segments       100.0   100.0

Table 2: Comparative recognition rates for the ORL and Bern face databases.
As can be observed, our results on the combined databases are better than
any of the benchmarks on the individual databases - even when compared
with combined classifiers. Furthermore, none of the benchmarks give results
for the execution times (which, we suspect, are larger than our results for
comparable classification accuracies). Our test time results are quite adequate
for real-time applications such as security access and video sequence
tracking.
6 Conclusions
We have described a computationally-efficient view-based face recognition
algorithm using line segments of face images. The algorithm is robust to
rotations in, and out of, the plane, robust to variations in scale, and is robust
to changes in illumination intensity and to changes in facial expressions.
Experiments have demonstrated that the algorithm is superior compared
with available benchmark algorithms and is able to recognise test views in
quasi real-time.
The main drawback of our technique lies in the assumption that the face
detection has been undertaken prior to the application of the line-based algorithm
and that the face boundaries are available. If the boundaries are
largely occluded or indistinguishable from the background then the performance
of the current algorithm will be reduced. We are currently investigating
modifications to the algorithm that will account for the absence of
face boundaries.
--R
Combination of face classifiers for person identification.
Mechanisms of human facial recognition.
Estimation of pose and illuminant direction for face processing.
Face recognition: Features versus templates.
Human and machine recognition of faces: A survey.
Learning to recognize faces from examples.
Picture processing by computer complex and recognition of human faces.
Face recognition: A convolutional neural network approach.
Face recognition/detection by probabilistic decision-based neural network.
Face recognition using transform features and neural networks.
Face Recognition Using Hidden Markov Models.
Parametrisation of a stochastic model for human face identification.
Eigenfaces for recognition.
University of Bern
Connectionist models of face processing: A survey.
Face recognition: Eigenface
--TR | face recognition;line-based algorithm;real-time performance;varying pose;classification accuracy |
319335 | Transparent distributed processing for rendering. | Rendering, in particular the computation of global illumination, uses computationally very demanding algorithms. As a consequence many researchers have looked into speeding up the computation by distributing it over a number of computational units. However, in almost all cases they completely redesigned the relevant algorithms in order to achieve high efficiency for the particular distributed or parallel environment. At the same time global illumination algorithms have become more and more sophisticated and complex. Often several basic algorithms are combined in multi-pass arrangements to achieve the desired lighting effects. As a result, it is becoming increasingly difficult to analyze and adapt the algorithms for optimal parallel execution at the lower levels. Furthermore, these bottom-up approaches destroy the basic design of an algorithm by polluting it with distribution logic and thus easily make it unmaintainable. In this paper we present a top-down approach for designing distributed applications based on their existing object-oriented decomposition. Distribution logic, in our case based on the CORBA middleware standard, is introduced transparently to the existing application logic. The design approach is demonstrated using several examples of multi-pass global illumination computation and ray-tracing. The results show that a good speedup can usually be obtained even with minimal intervention into existing applications. | 1 INTRODUCTION
Usually, distributed algorithms differ considerably from their non-distributed
versions. In order to achieve optimal performance, careful
attention has to be paid to issues such as load-balancing, communication
patterns, and data and task management. These issues can
easily dominate the core application logic of distributed algorithms.
This is in particular true for systems that allow for the flexible combination
of different distributed algorithms at run-time.
The loss of application logic in a sea of complex distribution issues
is a severe and growing problem for reasons, such as increased
application complexity, increased maintenance cost, or simply educational
purposes. In particular maintenance and portability to different
hardware architectures has always been a major issue with
distributed applications. Also, for development and debugging pur-
poses, it is often desirable to run the same code in a non-distributed
serial fashion. This is often impossible with code designed for distributed
applications where distribution logic is deeply embedded
into the code.
Finally, probably the most important reason for keeping distribution
issues transparent to the application programmer is the need
to add distributed computation to an existing application. Here we
need to add new features with as little impact on the existing application
as possible.
Creating a transparent distribution infrastructure avoids many
options for optimization and thus will very likely offer inferior performance
than distribution code that is deeply integrated with the
application. Thus, our work partly relies on the fact, that the increased
availability of cheap but high-performance computers allows
us to trade non-optimal efficiency for simpler, cleaner, and
more maintainable application code, of course within limits.
The object-oriented design of an application is the main starting
point for achieving transparent distribution. The basic idea of
object-orientation, the encapsulation of data and algorithms in units
that communicate via messages, carries over nicely to distributed
systems where objects now live in separate address spaces. All that
needs to be changed, is the way these objects communicate with
each other, so they do not need to be aware of the fact that a peer
object may actually be located on a different computational unit.
Object-oriented middleware like CORBA [OMG98a] already
provides much of the required distribution infrastructure, such as
location, communication, and network transparency. However,
from a programmer's perspective, CORBA is still highly visible
due to CORBA-specific types in interface definitions and the requirements
that distributed objects and their proxies derive from
CORBA-specific classes. Furthermore, interfaces that work well
with colocated objects can result in high communication costs if
these objects get separated across a network. This raises the need
to transparently adapt the interfaces for objects that may be distributed.
In the remainder of this paper we present several design patterns
for hiding the distribution infrastructure in distributed object-oriented
systems. These patterns emerged from our work on
speeding-up an existing large system for rendering and global illumination
[SS95] by distributing it across a network of computers.
For educational purposes, we required the distribution infrastructure
to be highly invisible to the normal programmer. For practical
reasons we could not afford to redesign the whole system around
some intrusive distribution framework.
Thus, we concentrated on encapsulating distributed and non-distributed
modules, and on providing interface adaptors that take
care of distribution issues. The result is a system with a highly configurable
distribution infrastructure that is mostly invisible to the
programmer and the user, but still achieves good parallel perfor-
mance. Although we concentrate on distributed processing across
a network of computers in this paper, the same design patterns are
also being used for parallel execution of modules within the same
address space on computers with multiple CPUs (see Section 4).
1.1 Previous Work
There have been a large number of papers on parallelization and
distribution of rendering and lighting simulation algorithms. Good
surveys are available in [RCJ98, CR98, Cro98]. Most of the papers
concentrate on low-level distribution for achieving high performance
(e.g. using such tools as PVM [GBD+94] or MPI [GLS94]).
One of the few exceptions is the paper by Heirich and Arvo [HA97]
describing an object-oriented approach based on the Actor model.
Although this system provides for location and communication
transparency, the distribution infrastructure is still highly visible to
the programmer.
Several object-oriented frameworks for supporting parallel or
distributed programming have been suggested (e.g. POET [MA] or
[Jez93]). POET is a C++ toolkit that separates the algorithms
from the details of distributed computing. User code is written as
callbacks that operate on data. This data is distributed transparently
and user code is called on the particular nodes on which the data is
available. Although POET as well as all other frameworks abstracts
from the underlying message passing details, it requires adapting the
algorithms to the given structure of the framework and is thus not
transparent to the programmer.
Other approaches view local resources only as a part of a
possibly world-wide, distributed system ("computational grids",
"world-wide virtual computer"), for instance Globus [FK97] or Legion
[GLFK98]. While these are certainly a vital contribution to
distributed computing, the demands on the code are significant and
by no means transparent to the programmer, which is the main goal
of our efforts.
2 DISTRIBUTION
In the following we present an integrated approach to parallelization
and distribution of application modules. It is based on the fact,
that object-oriented systems should be and usually are composed of
several quite independent subsystems. In contrast to addressing parallelization
at the level of individual objects, larger subsystems of
objects usually offer a better suited granularity for distributing computation
across computers. These subsystems are often accessed
through the interface of a single object using the "facade" design
pattern [GHJV95].
In an application based on this common design approach,
these few facade classes can easily be mapped to CORBA interfaces
[OMG97], providing the basis for distributing the applica-
tion. However, this initial step does not solve our problem, as the
CORBA-specific code would be introduced at the heart of our application
and we do not want the details of distribution to be visible
to a developer. Ideally developers should be able to concentrate
on their problem instead of being unnecessarily forced to consider
distribution-specific issues, like network latencies, CORBA-types,
request-bundling for optimized transport, marshaling and object se-
rialization, mapping of class creation requests to factory methods,
and the handling of communicating threads for asynchronous operations.
Figure 1: Wrapping existing implementations promotes code reuse
by enabling traditional classes to communicate with the distributed
system through the services provided by the wrapper. Because these
services emulate the traditional interfaces to the contained class,
and with the help of templates, this requires almost no manual coding.
We have chosen to build a new distribution interface that completely
hides the CORBA distribution infrastructure from the appli-
cation. This new interface provides the illusion of traditional, non-distributed
classes to the outside, while internally implementing optimized
distributed object invocations. It is based on asynchronous
communication with a multi-threaded request-callback scheme to
enable a maximum of parallelism. Additionally, the framework
performs load balancing and bundling of requests to avoid network
latencies. These are the key concepts that allow us to optimally
make use of CORBA and its current synchronous method invocation
paradigm (the new CORBA Messaging specification [OMG98b]
adds asynchronous method invocation, but it is only now becoming
available).
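The request-callback idea can be illustrated with the following C++ sketch; the types shown stand in for IDL-generated classes and are assumptions of this sketch, while the real base classes additionally manage thread pools and request FIFOs.

#include <vector>

struct Result { /* illumination values, fill levels, ... */ };

// Implemented by the calling side; invoked later from a server thread.
struct Callback {
    virtual void onResults(const std::vector<Result>& r) = 0;
    virtual ~Callback() {}
};

// The (synchronous) CORBA call only enqueues the bundled request and
// returns immediately; a worker thread computes the results and delivers
// them through cb->onResults(), so the client never blocks on the actual
// lighting computation.
struct LightOpServer {
    virtual void computeAsync(const std::vector<int>& requestIds,
                              Callback* cb) = 0;
    virtual ~LightOpServer() {}
};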
For encapsulating existing interfaces, our framework provides
base classes that provide management services for object creation,
communication transport control and synchronization and many
other services (see below). Our wrapper for the subsystems that
contain the rendering and illumination algorithms use and inherit
from these base classes.
For example, our main management class, which controls the
overall execution of the rendering task, must be able to define certain
synchronization points to ensure that all distributed objects
have the same view on the whole system. This occurs for example
when waiting for all distributed rendering objects to finish their
setup and scene parsing routines before invoking rendering com-
mands. Additionally, these management classes provide host machine
information, a scripting engine for configuring the distribution
of objects, resource locking, and access facades for the managed
subsystem while hiding the use of CORBA completely. In the
next three subsections, we address the basic patterns used to implement
this approach.
2.1 Wrapping for Distribution
In order to actually reuse the existing object implementations within
a distributed environment, our distribution framework provides
wrappers for entire subsystems. A wrapper actually consists of two
half-wrappers that encapsulate the subsystem as a CORBA client
(calling) and as a server (called). We assume that a subsystem is
represented by at least one abstract C++ facade class, that defines
the interface of the subsystem. We also assume that the subsystem
communicates with the outside through interfaces defined by
similar facade classes.
Figure 2: Specific method calls can be forwarded to the implementation
in a pseudo-polymorphic way, while general functions like
serialization of request packets are inherited from template base
classes which in turn implement the abstract interface declaration
(see also Figure 6).

We replicate each of these interfaces in CORBA IDL, using struc-
tures to pack relevant object data that needs to be transferred (the
object by value extension of CORBA has not been available until
very recently). Most often we also define new methods that allow
for the bundling of multiple requests on the calling side. We
then implement the server side by forwarding the requests to the
wrapped facade object in a pseudo-polymorphic way [CS98], serializing
any bundled messages that arrive, and managing asynchronous
calls (see Figure 1).
For the client role of a wrapped subsystem, we need to instantiate
C++ classes that derive from a distributed C++ proxy template.
They translate the calls from the old C++ interface to calls that
use the CORBA object references. This layer is also responsible
for bundling individual calls and using new asynchronous interface
methods for bundled requests within the CORBA interface.
Although this wrapping seems complicated and does require
a small amount of manual coding, most of the work can be delegated
to generalized template abstract base classes (see Figure 2).
When viewed from the outside, the encapsulated subsystem looks
just like a distributed CORBA object using the equivalent CORBA
IDL interface. To the contained object, the wrapper looks exactly
like any other part of the traditional system using the old C++ interfaces.
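A much simplified sketch of such a server-side half-wrapper is given below; the skeleton and request types stand in for IDL-generated code, and the traditional facade is reduced to a single method.

#include <cstddef>
#include <vector>

// Traditional, CORBA-free facade of a subsystem (simplified).
struct OldLightingFacade {
    double illuminate(int surfacePointId) { return 0.0; /* ... */ }
};

// Stand-ins for IDL-generated pieces (assumptions of this sketch).
struct Request { int surfacePointId; };
struct IlluminationSkeleton {
    virtual void computeIlluminations(const std::vector<Request>& reqs) = 0;
    virtual ~IlluminationSkeleton() {}
};

// Generic server-side half-wrapper: it unbundles incoming requests and
// delegates each one pseudo-polymorphically to the wrapped implementation,
// so the old class never sees any CORBA types.
template <class Impl>
class ServerWrapper : public IlluminationSkeleton {
public:
    explicit ServerWrapper(Impl& impl) : impl_(impl) {}
    virtual void computeIlluminations(const std::vector<Request>& reqs) {
        for (std::size_t i = 0; i < reqs.size(); ++i)
            impl_.illuminate(reqs[i].surfacePointId);  // old C++ interface
    }
private:
    Impl& impl_;    // the wrapped traditional object
};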
The biggest benefit of using this kind of wrapper is the possibility
of reusing existing code. While this does not take advantage
of parallelization within a subsystem, it enables the distribution and
parallelization of different subsystems. This can be of great value,
in particular when multiple memory-intensive algorithms have to
be separated across multiple machines. The interfaces, provided by
the wrappers, finally allow wrapped traditional objects to transparently
cooperate with other distributed objects as they are introduced
in Section 2.3.
2.2 Replication and Request-Multiplexing
In order for old code to use distributed subsystems, we need an
additional wrapper. Its interface is derived from the old C++ facade
interface, but it translates the messages to corresponding calls
to distributed CORBA objects, e.g. those from Section 2.1. As
mentioned before, this translation has several aspects. For one, it
translates between traditional and CORBA types where object data
needs to be copied into IDL structures. Second, small individual requests
may be accumulated and sent across the network in bundles,
thus avoiding network traffic overhead.
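A sketch of this client-side bundling follows; packet size, type names and the flush policy are illustrative assumptions.

#include <cstddef>
#include <vector>

// Fine-grained calls are buffered and shipped as one bundled request once
// the packet is full; 'remote' stands for the CORBA object reference of
// the wrapped server.
template <class Remote, class Request>
class BundlingProxy {
public:
    BundlingProxy(Remote& remote, std::size_t packetSize)
        : remote_(remote), packetSize_(packetSize) {}
    void submit(const Request& r) {
        buffer_.push_back(r);
        if (buffer_.size() >= packetSize_) flush();
    }
    void flush() {                  // one network round trip per packet
        if (buffer_.empty()) return;
        remote_.computeIlluminations(buffer_);
        buffer_.clear();
    }
private:
    Remote& remote_;
    std::size_t packetSize_;
    std::vector<Request> buffer_;
};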
In addition, we take the opportunity of the wrapper to perform
multiplexing and re-packeting of requests across a pool of functionally
identical CORBA servers (see Figure 3). This enables us to distribute
the computational load evenly using load balancing performed by
the wrapper.

Figure 3: Multiplexers distribute requests equally to functionally
equivalent objects either distributed across a network (data-parallel
ray-tracers) or running in different threads (colocated lighting objects).
Note that the multiplexers do not contain the computation
classes; rather, they supply the requests and manage the transport
of the responses. The embedded request managers use a
request/callback model and a thread pool to achieve asynchronous
communication.

However, because of the current synchronous nature
of CORBA method calls, multiplexing needs to use the request-
callback scheme [SV96] provided by our base classes.
Load balancing is performed by sending requests to the server
with the lowest load. To this end, the servers maintain FIFOs of requests
to balance network latencies. The fill-level of those FIFOs is
communicated back to the wrappers, piggy-backed on data returned
in the callbacks.
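A minimal sketch of this selection policy, as it might back the determineBestServer() call in the multiplexer code below:

#include <cstddef>
#include <vector>

// Pick the server whose request FIFO is emptiest; the fill levels are
// refreshed from values piggy-backed on the data returned in the callbacks.
std::size_t determineBestServer(const std::vector<int>& fifoFillLevels) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < fifoFillLevels.size(); ++i)
        if (fifoFillLevels[i] < fifoFillLevels[best]) best = i;
    return best;
}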
Using this scheme, the multiplexed classes look to the outside
like a single, more powerful instance of the same subsystem. The
benefit of this approach is that by using wrappers and multiplexers,
existing code can fairly easily be wrapped, replicated, and thereby
sped up. While multiplexers fan out requests, the wrappers in Section
2.1 automatically combine and concentrate asynchronous requests
from multiple clients. Note that both patterns perfectly meet
our goal of distribution transparency and do not alter the application
logic of the remaining system at all.
The following pseudo-code shows how a multiplexer for lighting
computations inherits the interface of the lighting base class and
overloads the computation request method by implementing some
scheduling strategy (see also Figure 6).
IDL:
typedef sequence<Request> RequestSeq;
interface LightOp {
  void computeIlluminations(in RequestSeq req);
};
interface Multiplexer : LightOp {
  void addLightOp(in LightOp op);
};

C++:
class Multiplexer : public IDLmultiplexerInterface {
public:
  virtual void addLightOp(LightOp op)
    { lightOpList_.push_back(op); }
  virtual void computeIlluminations(const RequestSeq& req) {
    // forward the whole request bundle to the least loaded server
    int idx = determineBestServer();
    lightOpList_[idx]->computeIlluminations(req);
  }
protected:
  vector<LightOp> lightOpList_;
};
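In this sketch the scheduling strategy is simply the lowest-load policy sketched above; because the Multiplexer implements the very LightOp interface it multiplexes, a client cannot tell whether it talks to a single server or to a whole pool.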
2.3 Transparent Services
Some subsystems are computational bottlenecks and promise to offer
substantial speed-up when they are completely re-implemented
to take advantage of distribution.

Figure 4: Distribution and parallelization services provide support
for implementing advanced computation algorithms.

Our framework provides distribution
and parallelization services within the wrapper classes that
go beyond plain data transportation and interface adaption, such
as thread-pool handling, mutexes, factories for one-to-many and
many-to-one operating threads and their synchronization, runtime
system state and type information.
This pattern is the most powerful form of creating a new computation
object for the distributed system. It does however require
knowledge about the design and behavior of the distribution ser-
vices. Because the wrapper classes provide the CORBA interface
to the other traditional subsystems of the framework, a distributed
or parallel implementation of a subsystem can easily access them
directly.
A good example is a class that performs distributed lighting computation
using the PhotonMap algorithms [Jen96] (see Figure 4
shows our implementation). We reuse existing code for tracing of
photons from the light sources and for reconstructing illumination
information. Both reused object implementations are wrapped with
the patterns described above. Because the algorithm is aware of its
distributed or parallel nature, it can steer and adapt to the computational
requirements, e.g. by adding new particle tracer threads on
a multi-processor machine or adding new instances of distributed
objects. This scheme allows the programmer to gradually make selected
subsystems aware of the distribution infrastructure without
compromising the remaining system on the way.
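As a sketch of what such distribution-aware code might look like (all names here are hypothetical), a PhotonMap LightOp could size its particle-tracing thread pool by the number of locally available CPUs:

// Illustrative only: a distribution-aware LightOp that parallelizes the
// initial particle shooting over the CPUs reported by the host manager.
class DistributedPhotonMapOp {
public:
    explicit DistributedPhotonMapOp(int cpus) : cpus_(cpus) {}
    void setup(long photonCount) {
        long perThread = photonCount / cpus_;
        for (int i = 0; i < cpus_; ++i)
            spawnTracerThread(perThread);    // shoot photons in parallel
    }
private:
    void spawnTracerThread(long photons) { /* enqueue on thread pool */ }
    int cpus_;
};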
The possibility of reusing existing classes simplifies the creation
of new distributed subsystems in a straightforward building-block
manner. However, a drawback of this approach is the dedication to
distributed computing, making the new subsystem more difficult to
use when running the application in a serial, single-threaded fashion.
2.4 Discussion
The patterns introduced above offer several benefits:
- New developments within the traditional framework are immediately
distributable through the wrapper pattern, which
offers speedup through replication and multiplexing.
- There is no need for developers of algorithms to bother with
distribution and parallelization issues because the distribution
framework does not alter or interfere with the application
logic.
- The distribution and parallelization services offered by the
framework provide the developer of advanced computation
classes with basic functionality that is guaranteed to conform
to the overall design.
Figure 5: Logical data flow within an example distributed lighting
network performing direct, indirect, and caustic illumination
through different LightOps, some of which are replicated and use a
multiplexer for speed-up.
- The learning effort for beginners can be reduced dramatically
by a transparent distribution infrastructure - in particular if
compared to other distribution frameworks and the large number
of new software concepts introduced by them.
- Our distribution framework transparently supports modularization
and helps to structure the framework into toolkits with
well defined interfaces. This can help to reduce the overall
programming effort, and promotes a better understanding of
the big picture.
For each of the above patterns, there is a typical case of applica-
tion. Like a modular object-oriented program can be viewed at various
levels of granularity, the patterns support this building-block
design strategy. Because the distribution infrastructure uses consistent
interfaces, the patterns can be combined with each other or
be applied to traditional class implementations by a configuration
script. Especially for research and development purposes, this offers
a tremendous flexibility. Note, that the multiplexer can be used
to easily handle a new parallel implementation of a computation
class, which in turn can be constructed using wrappers, other distributed
classes, or multiplexers.
3 IMPLEMENTATION
The Vision rendering architecture [SS95] is an object-oriented system
for physically-based realistic image synthesis. The Lighting
Network within the Vision framework
provides an object-oriented way of dealing with functional
decomposition for lighting calculations. It implements the lighting
subsystem for Vision by decomposing the global illumination
computations into a set of lighting operators that each perform a
partial lighting simulation. Conceptually, these "LightOps" take
a representation of the light distribution in the environment as input
and generate a new representation as output. By connecting
these LightOps in the right way, the lighting simulation can be configured
flexibly by simulating any light-paths in a multi-pass fashion
[CRMT91].
The Lighting Network acts as a data flow network much in the
spirit of AVS [UFK+89] or similar systems. Figure 5 shows an
example of a very simple distributed Lighting Network. It uses
two basic LightOps to perform direct lighting, adds their individual
contributions, and then performs indirect lighting computa-
tions. The result is the sum of the direct and the indirect illumination
(also see Figure 8). Direct illumination from light sources
is obtained through ray-tracing, the PhotonMap algorithm [Jen96]
computes caustic light paths, and indirect illumination is computed
with the irradiance gradients algorithm [WH92]. The whole lighting
network is managed by a special object called MultiLighting
that implements the lighting subsystem interface towards other Vi-
Implementations
CORBA
IDL
Interfaces
Skeleton
MultiLighting
Skeleton
Skeleton
LightingComputer
LightingComputerBase
LightOpBase
Skeleton
MultiLighting
Figure
Multiple layers of abstract interface declarations are complemented
by C++ definitions, to give consistent interfaces to all
components of the lighting subsystem.
sion subsystems and behaving according to the facade design pattern
[GHJV95].
The Renderer subsystem of the Vision framework encapsulates
various screen sampling techniques. It computes intersections with
visible objects of the scene and queries the lighting subsystem for
the incident illumination at that point. This illustrates the clear
separation of independent computation within the Vision rendering
framework.
We have applied the presented distribution framework to the
Rendering and Lighting Network subsystem in Vision in that we
allow individual Renderer and LightOp objects to be distributed
across a network or to be run in parallel through the use of threads.
Figure 6 shows the inheritance relations between the interfaces of
the LightOps and the MultiLighting facade. The asynchronous
communication patterns and services are implemented within the
base classes. Note that for wrapping traditional code, the C++
class on the lower left is a pseudo-polymorphic wrapper template 1 ,
which requires no manual coding.
Figure 7 shows a running distributed Vision system. Note that
hosts 1 and 2 contain multiple concurrent LightOps within a lighting
network. They should therefore have multiple processors to
enable functional parallelism.
The basic operating system functions are accessed via the
portable operating system adaptation layer interface of the ACE library
[Sch94]. The communication and remote object creation
is done using the CORBA implementation VisiBroker of Inprise.
To facilitate further development and maintenance,
the design of the base classes follows the guidelines of several design
patterns [GHJV95, CS98, LS96, SHP97, McK95].
1 The external polymorphism pattern [CS98] allows treating non-polymorphic
classes as if they had the proper inheritance relationship, by
providing all classes with a method that simply delegates the calls to a sufficiently
global template signature adapter (that's why it's called external),
which in turn calls the method that performs the task.
4 RESULTS
This section demonstrates the flexibility of the presented distribution
and parallelization framework as applied to the Vision rendering
system. Several distributed LightOps have been implemented
using the design patterns from Section 2 and we discuss some
of their typical configurations. In order to reuse the traditional
implementations efficiently, several multiplexer classes
are available along with different scheduling strategies. This allows
building distributed lighting networks, that functionally distribute
lighting calculations. The configuration of the distributed
objects is usually specified in a TCL configuration file using the
existing scripting engine of the traditional Vision system, avoiding
the introduction of a second tier of abstraction for configuring the
distributed system (compare [Phi99]).
4.1 Efficiency of Asynchronous Communication
In the first example, we show the benefits of the asynchronous communication
pattern used throughout the CORBA implementation of
the base classes at the heart of the distribution infrastructure. Table
1 compares the packeted data transfer within a small lighting
network using asynchronous requests with an equivalent network
using the original interface with fine granularity. Both cases use
wrapped traditional LightOps and the same host configuration:
             SGI Onyx      Onyx      O2
processors
R10k @ MHz   196           195       195
Renderer     ×
Lighting     Irr. Grad.    Direct    Combine
The main reason for the 33% speedup is the small number of
method calls needed to transfer the bundled requests over the 100 MBit/s
network in the case of asynchronous communication, compared to
the many individual synchronous invocations in the second case. Both networks
transfer an identical 22.7 MB of request data through CORBA
marshaling. It is the synchronous protocol of CORBA that blocks
the client until the server has completed the method call which is responsible
for the poor performance in the second case. This shows
clearly the important fact, that latency can be almost entirely hidden
using the asynchronous protocol provided by our distribution base
classes.
4.2 Distributed Rendering
To optimize rendering times in the case of calculating previews or
testing new computation class implementations, we pick up the example
from Section 2.2 (see Figure 3). The following configuration
of a distributed Vision system shows the best achievable speedup
we have found using our framework. It uses 4 hosts with a total
of 8 processors. There are 8 ray-tracers to work in data-parallel
mode and 6 lighting modules. Each group is controlled by a multi-
plexer. The distribution framework ensures that all communication
between the two multiplexers is done asynchronously.
             SGI Onyx      Onyx      O2       O2
processors
R10k @ MHz   196           195       195      195
Renderer
Lighting
The lighting hosts execute a traditional implementation of an Irradiance
Gradients [WH92] LightOp, which is wrapped for distribu-
tion. Additionally, the wrappers on the multiprocessing machines
also include a multiplexer that executes the incoming requests in
parallel using a thread pool.

Figure 7: Example of a running distributed Vision system. The master
renderer controls the data-parallel activity of the slave renderers on
hosts 3, 4 and 5. The MultiLighting on host 0 is the facade of the lighting
subsystem, which is a lighting network residing on hosts 1 and 2
(it can also be further distributed as shown in later examples). Its entry
point is the MasterLightOp, which controls the other LightOps. Note
that this functional parallelization also communicates asynchronously
in a pipeline fashion (indicated by the solid arrows), enabling parallel
execution if a host has multiple processors. A single NetManager and a
HostManager on each host are responsible for bootstrapping the
system onto the network by providing initial object factory methods
(dashed arrows).

Because there are multiple threads per
CPU, the multiplexer synchronizes them in order not to overload
the machine. Configuring this system required just naming the
hosts and the LightOp with its parameters in a configuration file.
The TCL scripts for system setup take care of distributing the objects
using the Net- and HostManager of Figure 7. This distributed
system is compared to the traditional Vision system with a single
thread of control, running on the fastest machine in a single address
space and calculating lighting with the very same LightOp implementation.
As Table 2 shows, the runtime obtained is near the theoretical
minimum of 12.5% of the serial time, i.e. a speedup close to the maximum
of 8 on the 8 processors. The overhead of 90 seconds consists of
the session setup, 5 seconds of additional parsing on the
CORBA startup client and another 5 seconds delay for allowing
the hosts to clean up the CORBA objects before the main CORBA
startup client shuts down all Vision instances. After subtracting this
overhead, we obtain a penalty of 13% during the rendering phase
for the distributed system. We believe this is a very good result,
given such a general and unintrusive distribution infrastructure.
4.3 Distributing Complex Lighting Computations
The functional decomposition of a lighting network offers the
biggest potential for distribution and parallelization, at the risk of
high communication costs. As shown in Section 4.1, the asynchronous
request-callback communication paradigm is able to provide
a partial solution for that problem. In the following example
we make heavy use of the patterns from Sections 2.3 and 2.1. This
configuration uses 3 hosts with a total of 7 processors:
          MHz   processors   Lighting
Onyx      196   4            Photon Map, Direct, Combine
Onyx      195   2            Photon Map, Irrad. Grad.
Octane          1            Photon Map
In this setup, the reconstruction method of the Photon Map
LightOp takes much more time to process a request, than any of
the other LightOps in the lighting network. Consequently, a multiplexer
is used to distribute this LightOp onto 3 hosts. In contrast,
the three other LightOps are executed on multi-processor machines,
because their reconstruction method is fast and the communication
between them can be optimized, if the CORBA implementation
supports object collocation. In order to drive this complex lighting
subsystem, two hosts execute rendering objects controlled by a
multiplexer in a data-parallel way.
As one can see from Table 3, the speedup obtained by this setup is not as good as in the first example. But even the advantage that the non-distributed version draws from running in a single address space does not outweigh the benefit of distribution, despite the communication overhead of the distributed system. Our profiling shows that the performance difference to the theoretical maximum of 14.3% is mainly due to process idle times.
This occurs, for example, if the calculation of one upstream LightOp
is sufficiently delayed. Since the underlying Lighting Network is
entirely pull-driven, the pipeline is blocked. We try to cope with
that problem to some extent by allowing the asynchronous interface
to drive three parallel streams at a time. Additionally, the resource
handling within the base classes allows running the rendering computation
concurrently with a lighting computation, resulting in a
kind of interleaved CPU-usage scheme when the lighting pipeline on the host is stalled.
This example shows that there are cases where the full transparency
of the distribution infrastructure cannot hide inherent limitations
due to coarse grained communication patterns of existing
subsystems. Note, however, that this behavior is mostly a problem
of the non-distribution aware algorithms of the lighting network
and not so much a general drawback of the distribution framework. Even with this limited success, we still get
some speed-up without any change to the application logic.
Apart from that, one also has to take into account that while a traditional system performs quite well in this case in terms of execution speed, it is severely limited by the host's memory resources.
Especially the PhotonMap LightOp needs to store many photons
that have been shot into the scene when working with large scene
descriptions. The distributed PhotonMap LightOps in this example
have the memory of three hosts at their disposal. Further-
more, the initial shooting of particles is done in parallel, reducing
the Lighting setup time needed to one seventh (there are 7 processors
on the three hosts), which is of great value when simulating
high quality caustics.
wallclock seconds for     asynchronous LightOps    wrapped-only LightOps
Parsing Scene                  5.80                     5.67
Lighting Setup                 1.56                     1.68
Renderer Setup                 0.30                     0.34
Render Frame               1,922.06                 2,916.95
Total                      1,977.36                 2,974.92

Table 1: Packeted asynchronous data transfer within a lighting network compared to LightOps using CORBA's synchronous request invocation.
wallclock seconds for     distributed System    traditional Vision
Parsing Scene                  5.61                    -
Lighting Setup                 0.14                    -
Renderer Setup                 0.36                    -
Render Frame                 317.03              2,359.20
Total                        387.41              2,380.15

Table 2: A distributed system using two multiplexers, controlling the data-parallel renderers and the lighting objects, is compared to the traditional single-threaded system.
Although there certainly is a price to pay for the flexibility of our
distribution strategy, we obtain high degrees of freedom in configuring
the distributed system and adapting it to the challenges of a
specific lighting network.
5 CONCLUSION
We presented a general approach to providing a transparent infrastructure
for distributing object-oriented systems and used this infrastructure
for the distribution and parallelization of rendering and
lighting computations. While we created several design patterns to
hide CORBA and the distribution infrastructure from the average
system programmer, our system provides distribution services to
the advanced application programmer and still offers access to all
basic distribution classes for sophisticated tuning if necessary.
The use of the CORBA middleware allowed us to abstract from
much of the underlying communication infrastructure. Contrary to
popular belief, the runtime overhead of using CORBA has been
minimal. However, the synchronous nature of CORBA messages
was a major problem that we had to work around using a non-trivial
request-callback scheme based on multi-threading. Here, the addition
of asynchronous messaging to CORBA should help tremendously.
The implementation of distribution functionality within a few
base classes makes distribution issues totally transparent to an application
programmer. We demonstrated the approach with examples
for the Vision rendering framework, to which it provides transparent
data-parallelism and distribution of the existing object struc-
ture. Developers of new computation classes are free to use the
distribution infrastructure to add distribution aware modules or to
wrap existing algorithms and distribute them across a network of
computers.
The distribution infrastructure has proven to be practical and sta-
ble. It offers well-defined interfaces without imposing any limitations
on the remaining parts of the Vision system. Distributed lighting
networks can simply be constructed and configured by scripts
that specify the location and parameterization of specific modules
in a network. Figure 8 gives an impression of how the flexible
structure allows the configuration of the whole distributed system
for different purposes, ranging from speeding up preview renderings
to experimenting with complex lighting networks consisting
of many different distributed lighting simulation algorithms.
Future work on the distribution infrastructure will concentrate
on recovering some of the efficiency that we lost in the process. In
particular, it would be useful if the system would take care of the
distribution of modules across a network automatically and perform
better load-balancing. However, due to the dynamic nature of our
application, this requires some knowledge about the computational
characteristics of the different modules. Making this available at
the wrapping level or during run-time would allow us to statically
allocate and maybe dynamically move modules across a network.
6 ACKNOWLEDGEMENTS
We would like to thank Thomas Peuker who developed the initial
idea of this distribution framework. Also Marc Stamminger provided
considerable support for integrating the new scheme into the
existing Vision system. We would like to thank the anonymous reviewers
who helped to improve the initial version of this document.
| distributed processing;lighting networks;design pattern;parallel processing;object-oriented design;global illumination |
319350 | Specificational functions. | Mathematics supplies us with various operators for creating functions from relations, sets, known functions, and so on. Function inversion is a simple example. These operations are useful in specifying programs. However, many of them have strong constraints on their arguments to ensure that the result is indeed a function. For example, only functions that are bijective may be inverted. This is a serious impediment to their use in specifications, because at best it limits the specifier's expressive power, and at worst it imposes strong proof obligations on the programmer. We propose to loosen the definition of functions so that the constraints on operations such as inversion can be greatly relaxed. The specificational functions that emerge generalize traditional functions in that their application to some arguments may yield no good outcome, while for other arguments their application may yield any of several outcomes unpredictably. While these functions are not in general algorithmic, they can serve as specifications of traditional functions as embodied in programming languages. The idea of specificational functions is not new, but accommodating them in all their generality without falling foul of a myriad of anomalies has proved elusive. We investigate the technical problems that have hindered their use, and propose solutions. In particular, we develop a formal axiomatization for reasoning about specificational functions, and we prove its consistency by constructing a model. | INTRODUCTION
The square function on the integers, defined by sqr -= (-z:Z ffl z \Theta z), is not traditionally
regarded as having a well-defined inverse, because it is neither injective
nor surjective. Suppose, however, we were to broaden our definition of functions
so that the inverse of sqr, call it sqrt, is indeed a function. We might define sqrt
thus:
sqrt -= -z:Z ffl (2x:Z j x \Theta x j z):
We have used above an instance of what we can call a prescriptive expression.
This has the general form (2x:T j P ) and the intuitive meaning some x of type T
satisfying predicate P . If there is no such x, we regard (2x:T jP ) as being equivalent
to the special expression ?, pronounced "bottom". If there are many such x, then
we have no information about which outcome is actually produced. For example,
sqrt 4 may yield 2 or \Gamma2, and we don't know or care which. Indeed, we cannot even
determine its behavior by experiment, because if sqrt 4 yields 2 in the morning, it
may well yield \Gamma2 in the afternoon. Both sqrt 7 and sqrt(\Gamma4) yield ?.
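To fix intuition, here is a small executable sketch (ours, not part of the paper's formal development) that models a specificational value over a flat type as the set of its possible proper outcomes, with the empty set standing in for ?; the names Spec, prescribe and sqrtS are our own, and the search space is bounded artificially so that the choice stays finite.

```haskell
import qualified Data.Set as Set

-- A specificational value over a flat type: the set of its possible
-- proper outcomes.  The empty set stands in for bottom.
type Spec a = Set.Set a

-- (2x | P), "some x satisfying P", restricted to a finite search space.
prescribe :: Ord a => [a] -> (a -> Bool) -> Spec a
prescribe space p = Set.fromList [x | x <- space, p x]

-- sqrt as the inverse of squaring: any x with x * x == z.
sqrtS :: Integer -> Spec Integer
sqrtS z = prescribe [negate (abs z) .. abs z] (\x -> x * x == z)

-- sqrtS 4    ==> fromList [-2,2]   (an unresolved choice)
-- sqrtS 7    ==> fromList []       (bottom)
-- sqrtS (-4) ==> fromList []       (bottom)
```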
Here is another example: Suppose the type PhoneBook is comprised of all relations
from type Name to type PhoneNumber. Then the following function looks up
a person's phone number in a phone book:
lookUp -= -b:PhoneBook ffl -n:Name ffl (2p:PhoneNumber j (n; p) 2 b):
Again, lookUp is not necessarily a function in the traditional sense (because some
people are not listed in phone books, and some have several entries) but we would
like to treat it as such.
For a more elaborate example, consider the function leastWRT, which takes as arguments a function f and a set s, and selects some element a of s such that f a is minimized. For example, instantiating T with Z, leastWRT (-x:Z ffl x \Theta x) f\Gamma1; 1; 2g yields either 1 or \Gamma1. We can define leastWRT for any type T thus:
leastWRT -= -f :T!Z ffl -s:set T ffl (2a:T j a 2 s ^ (8b:T ffl b 2 s ) f a - f b)):
To illustrate its use, we make a function to be used by two lovers each of whom
travels from city to city as a computer consultant. The function should yield a city
to which they should both travel if they want to be together as soon as possible. We
assume a type City whose elements are all cities, and a function time:City \Theta City!N
which yields the least traveling time (in minutes, say) between any two cities. The
function is
-his; hers:City ffl leastWRT (-c:City ffl time(his; c) max time(hers; c)) City:
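Continuing the outcome-set sketch introduced above (again our own illustration rather than the paper's calculus), leastWRT over a finite set can simply keep every minimizing element, so that the unresolved choice remains visible:

```haskell
import qualified Data.Set as Set

-- leastWRT f s: some element of s at which f is minimized; the empty
-- set (bottom) when s is empty.  Finite sets only in this sketch.
leastWRT :: (Ord a, Ord b) => (a -> b) -> Set.Set a -> Set.Set a
leastWRT f s
  | Set.null s = Set.empty
  | otherwise  = Set.filter (\a -> f a == m) s
  where m = minimum (map f (Set.toList s))

-- leastWRT (\x -> x * x) (Set.fromList [-1, 1, 2])
--   ==> fromList [-1,1]    -- either -1 or 1, unpredictably
```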
In (2x:T j P ), T need not be a so-called "flat" type such as the integers, but can be a more structured type such as a function type. For example,
(2f :N!N j f 0 j 1 ^ (8n:N ffl f (n + 1) j (n + 1) \Theta f n))
specifies the familiar factorial function. For a more elaborate example of the
usefulness of choice over function types, let us specify a program for playing a one-player game such as Rubik's Cube. We assume a context which provides a set Cube of all the legitimate states of the game, a set LegalMoves which is a subset of Cube \Theta Cube describing the set of legal moves, and a value goal 2 Cube which describes the goal of the game. A player will be modeled as a function f :Cube!Cube, where f b is the position to which the player moves when in position b:
Players -= ff :Cube!Cube j 8b:Cube ffl (b; f b) 2 LegalMovesg:
We eliminate players who get stuck:
GoodPlayers -
Each player has a cost, which is the total number of moves taken to play all possible
games:
cost -
The program we want is
GamePlayer -= leastWRT cost GoodPlayers:
All of sqrt, lookUp, leastWRT, and GamePlayer are examples of functions more
liberally defined; we call them specificational functions.
Although specificational functions are not in general computational, they do play
an important role in specifying computational functions. For example, sqrt might
be presented to a programmer as a specification of a program he should implement,
by which is meant that he should produce a computational function SQRT whose
behavior is consistent with that of sqrt. By "consistent" we mean that when SQRT
is applied to a perfect square x it yields a square root of x (either negative or non-
negative), and otherwise it behaves in any way that the programmer fancies. For
example, the programmer might well design SQRT such that its graph is
We say that SQRT is a refinement of sqrt and write sqrt v SQRT. Roughly
holds of expressions E and F if in all contexts the possible
outcomes of E is a superset of the possible outcomes of F , with the slight twist
that ? is refined by all terms (which is the mathematical way of saying that if
the customer asks for the impossible, he had better be willing to accept whatever
he is given!). Readers not familiar with refinement calculi may feel, with good
cause, that w would be a more appropriate symbol for refinement, but v is what is
traditionally used (it may help to think of v as suggesting increasing information-
content). For example, in the context of applying SQRT to 4, the outcome 2
is among the outcomes of applying sqrt to 4 (2 and \Gamma2), and in the context of
applying SQRT to \Gamma2, the outcome 49 is acceptable because sqrt applied to \Gamma2
yields ?.
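In the finite outcome-set model of our earlier sketches, this reading of v can be stated directly (our own helper, not the paper's notation): refinement shrinks the outcome set, and bottom, here the empty set, is refined by every term.

```haskell
import qualified Data.Set as Set

-- E is refined by F when every possible outcome of F is already a
-- possible outcome of E; bottom is refined by everything, and nothing
-- that is not bottom is refined by bottom.
refinedBy :: Ord a => Set.Set a -> Set.Set a -> Bool
refinedBy e f =
  Set.null e || (not (Set.null f) && f `Set.isSubsetOf` e)

-- sqrt 4 (outcomes {2,-2}) is refined by a SQRT yielding only 2:
--   refinedBy (Set.fromList [2, -2]) (Set.fromList [2])  ==> True
-- sqrt (-2) (bottom) is refined by anything, e.g. {49}:
--   refinedBy Set.empty (Set.fromList [49])              ==> True
```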
A refinement calculus is, in essence, a language with a transitive refinement relation
v, where some of the terms in the language are computational, and the
remainder are available for describing desired behavior. The process of making a
program consists in producing a sequence of language terms, t0 v t1 v ::: v tn,
where t0 is the customer's specification, tn is computational, and each term in the
sequence (except the first) is constructed incrementally from its predecessor. We
say that each term in the sequence is a refinement of its predecessors. Programming
by stepwise refinement [Wirth 1973] is an example of such a process, albeit
a somewhat informal one because the terms of the language are written partly in
informal pseudo-code. The motivation for studying specificational functions is a
desire to use them as part of a refinement calculus that supports the development
of functions, whether in functional or imperative programming.
Unlike stepwise refinement, we have in mind a fully formal mathematical system
in which each refinement is formally provable by deduction from given axioms.
ff (-x:T ffl E) j (-y:T ffl E[xny]), fresh y
fi (-x:T ffl E) F j E[xnF ]
j (-x:T ffl E x) j E, x not free in E
extensionality (8x:T ffl E x j F x) ) E j F
Skolem (8x:T ffl 9y:U ffl P ) ) (9f :T!U ffl 8x:T ffl P [ynf x])
Fig. 1. Axiomatic properties of functions
This requires an axiomatization of logic, an axiomatization of base types that the
functions operate on, and an axiomatization of functions themselves. Traditional
functions are typically axiomatized by postulating properties such as those in figure 1. There, E[xnF ] denotes E with the free occurrences of x replaced by F (subject to the usual caveat of renaming to avoid variable capture), and similarly
for predicates. In j and extensionality, E and F are of functional type T!U . (The
listed properties are not independent of one another, so no one axiomatization
would use all of them).
If we employ such axioms on specificational functions, we fall foul of a myriad of
anomalies, as we shall see shortly. The effect has been to inhibit their deployment
seriously. For example, it is common to restrict choice to flat types only, which rules
out, for example, the specification GamePlayer above. We investigate the technical
problems and propose solutions. In particular, we develop a formal axiomatization
for reasoning about them and show that it sees off the anomalies.
1.1 Outline of rest of paper
The next section will introduce the concepts and notations for equivalence, re-
finement, choice, and "proper" values. The third section explains in detail the
anomalies that occur when functions and choice meet, and suggests ways of avoiding
them. The fourth section axiomatizes these language constructs, and pairs.
The fifth section is the core of the paper. It presents an axiomatization of specificational
functions that avoids the discussed anomalies. The sixth section discusses
logic, and argues for our preferred logic. The seventh section gives a denotational
model of the calculus. Finally, we draw conclusions and review related literature.
1.2 Contributions
-Detailed exposition of six anomalies that occur when functions and choice are
combined.
-Suggestions how these anomalies can be avoided.
-An axiomatization of specificational functions.
-A denotational model.
2. MATHEMATICAL PRELIMINARIES
2.1 Equivalence, choice, and refinement
We presume the availability of a (strong) equality operator j on terms, reserving the symbol = for the weak or computational equality operator found in programming
languages. (Weak equality is further explained below.) We will use "equivalence"
as a synonym for strong equality. Equivalence is reflexive, symmetric, transitive,
and a congruence: if E j F then E may replace F in any term without changing
its meaning.
The choice inherent in (2x:T j P ) may be unbounded, as in (2x:Z j true) which
may yield any integer. On the other hand, there may be no x satisfying P , as in
(2x:T jfalse); we introduce the special term ? T as an abbreviation for (2x:T jfalse).
It may seem intuitively reasonable that ? v F should hold for no term F , but
refinement calculi commonly depart from intuition by postulating ? v F for all
terms F . We adopt the latter approach (known as excluding miracles). The calculus
it gives rise to is arguably easier to apply in practice, although the underlying theory
is slightly more complex. This choice is not central to the concerns of this paper,
and nothing of substance in what follows depends on it. Note that (2x:T j true)
differs from ? T in that ? T is refined even by a "non-terminating" expression such
as an application of the recursive function f where f -= -x ffl f x. There is a
bottom for each type, indicated by subscripting, but we nearly always omit the
type, either because it is not significant in the context, or it can be easily inferred.
In refinement calculi, partial operations such as 3=0 are commonly equated with
?, and similarly for nonterminating expressions. It is also customary to use ? as
a "don't care'' term by which the customer indicates that she has no interest in
the outcomes. Although it may be useful in other contexts to distinguish these
various roles for ?, in program derivation they are similar in that they represent
error situations in which the outcome is unpredictable and unconstrained.
When there is precisely one x of type T satisfying predicate P - call it k -
then (2x:T j P ) is equivalent to k. One consequence of this is that specificational
functions include traditional total functions. Any finite choice can be expressed
in terms of a binary choice E u F which specifies a choice among terms E and F
(we use the words "term" and "expression" interchangeably). For example, 2 u 3
specifies 2 or 3, without preference; it can be written equivalently as (2x:Z j x j 2 \vee x j 3).
It should be evident that u is commutative, associative, and idempotent.
It is a standard postulate in refinement calculi that refinement and equivalence are
related by E v F , (E u F j E). It follows that v is reflexive, transitive and
antisymmetric (with respect to j). It also follows that ? is a zero of u.
Readers coming from a background in formal logic might be tempted to view
rough equivalent of Hilbert's ffl operator [Leisenring 1969], but
that would be a mistake. There is little connection between the two. Roughly
speaking, ffl represents choices that have already been made for you and for everyone
now and for all time, whereas 2 represents choices that have yet to be made and
which may be made differently on different occasions. A Hilbert choice from 2 and
3 is always 2 or always 3 - we just don't know which one it is. On the other hand,
2 u 3 is sometimes 2 and sometimes 3. In formal terms, the ffl operator satisfies
the axiom (9x:T ffl P ) ) P [xn(fflx:T j P )], in words "if P is true of some x of type T , then it is true of (fflx:T j P )". This assertion does not hold when ffl is replaced with 2. For example, letting J abbreviate (fflx:Z j x j 2 \vee x j 3), we have the truth of J j 2 \vee J j 3, but with K standing for (2x:Z j x j 2 \vee x j 3), the assertion K j 2 \vee K j 3 is false. The ffl operator is not useful for program refinement, because
it requires that any "looseness" in a specification be resolved by programmers in the
same way at all times and in all places. For example, if a customer asked that some
error message be displayed if a file was unavailable, without stating any preference
as to the wording of the message, it would require every programmer to deliver
exactly the same message. More amusingly, if all choice was Hilbert's choice, then
the diners in a restaurant would each choose the same meal from the menu.
We postulate that the operators of the base types (like + and \Gamma on the integers)
are strict (i.e., ? is a zero for them) and distribute over choice. This design decision
properly reflects the fact that our brand of choice allows different resolutions on
different occasions. For example, although (2 u would be equivalent
to 0 according to Hilbert's brand of choice, we admit the possibility that the first
occurrence of (2u3) has outcome 2, while the second has outcome 3, and vice versa.
Hence behaves strictly and distributively,
just
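A strict and distributive lifting of a base-type operator is easy to state in the finite outcome-set model of our earlier sketches (our own illustration):

```haskell
import qualified Data.Set as Set

-- Lift a binary base-type operator so that it distributes over choice:
-- every combination of outcomes is a possible outcome.  Bottom (the
-- empty set) is automatically a zero, i.e. the lifting is strict.
lift2 :: (Ord a, Ord b, Ord c)
      => (a -> b -> c) -> Set.Set a -> Set.Set b -> Set.Set c
lift2 op xs ys =
  Set.fromList [op x y | x <- Set.toList xs, y <- Set.toList ys]

-- lift2 (-) (Set.fromList [2,3]) (Set.fromList [2,3])
--   ==> fromList [-1,0,1]      -- (2 u 3) - (2 u 3), not simply 0
```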
2.2 Propers
The instantiation rule for universal quantification asserts that from the truth of
any term E. But such instantiation rules can
easily lead to inconsistencies in the presence of bottom and choice. For example,
8x:Z ffl x \Gamma x j 0 is a theorem of arithmetic, and so we might be tempted to infer that (2 u 3) \Gamma (2 u 3) j 0, in contradiction of our earlier conclusion that (2 u 3) \Gamma (2 u 3) j \Gamma1 u 0 u 1. A similar anomaly arises if we instantiate with ? - we can infer ? \Gamma ? j 0.
There are two ways out of this dilemma: Either we modify the standard
laws of arithmetic to take into account bottom and choice, or we modify the rules
of instantiation. The easiest fix is to modify the instantiation rules by forbidding
instantiations with ? or terms involving (an unresolvable) choice; we call such terms
improper and all other terms proper. For example, ? and 2u3 are improper, whereas
2 and (2 u 3) ! 4 are proper (in the final example, we have assumed
that ! distributes over u). Our intention is that every expression should be either
proper or bottom or a choice among propers. The requirement that 8x:T ffl P (and
similarly, 9x:T ffl P ) only be instantiated with proper terms is a condition on the underlying logic. In the terminology of [Søndergaard and Sestoft 1992], we have
opted for strict singular semantics; they list alternate approaches.
Technically, an expression E of type T is proper iff 9x:T ffl E j x. Of course,
to ensure that this is a useful definition, we shall have to axiomatize each type so
that we can infer the truth or otherwise of 9x:T ffl E j x. We postulate that the
common base types Z; Char; Float; ::: are flat, i.e. 8x; y:T ffl x v y ) (x j ? \vee x j y).
From this we can deduce that for flat types, E u F is proper iff E is proper and
equivalent to F . We also postulate that the constants (such as, in the case of the
integers, 0, 1, \Gamma1, 2, \Gamma2, :::, etc.) are proper. We can now formally deduce that
2 u 3, say, is not proper because 2 is proper and differs from 3. We write proper E as an abbreviation for 9x:T ffl E j x.
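In the outcome-set sketch, properness over a flat type is just the question of whether exactly one outcome is possible (our own illustration):

```haskell
import qualified Data.Set as Set

-- Over a flat type an expression is proper iff it is not bottom and
-- involves no unresolved choice: exactly one possible outcome.
proper :: Set.Set a -> Bool
proper s = Set.size s == 1

-- proper (Set.fromList [2])     ==> True
-- proper (Set.fromList [2, 3])  ==> False   (2 u 3)
-- proper Set.empty              ==> False   (bottom)
```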
Non-flat types (such as function types) are considerably more complex than flat
types, and determining just which expressions should be proper need not be at
all obvious, as we shall see. Because most of us came to know functions through
studying functions on the integers and reals, it is easy to be seduced into accepting
properties of functions that may hold for functions on flat types, but which are not
true in general. This is even more so in the case of specificational functions, and
we advise the reader to think of non-flat types whenever he or she is looking for
intuitive understanding. We have ourselves sometimes been misguided by thinking
in terms of flat types only.
For the purpose of examples, we will denote by Two a type which has (at least)
two propers u and v such that u ! v, i.e. u v v and u 6j v. Whether such
a type exists is of course a consequence of our design decisions. However, usually functional types are non-flat: consider for example -x:Z ffl 0 u 1 and -x:Z ffl 0, which might both reasonably be considered proper.
For the remainder of the paper we use lower case letters for proper expressions, and upper case letters for arbitrary (possibly improper) expressions.
3. THE ANOMALIES
In this section, we expose some of the problems that are encountered when a
straightforward axiomatization of functions drawn from figure 1 is used in the presence
of bottom and choice.
3.1 Beta-equivalence and extensionality
For specificational functions, the combination of axioms fi and extensionality leads
to an inconsistency. Consider -x:Z ffl x\Gammax and -x:Z ffl 0. By extensionality, we
conclude that they are equivalent, since 8x:Z ffl x\Gammax j 0. But if now we apply each
function to ? in turn, we can deduce that ? j 0! Similarly, we can deduce that (2 u 3) \Gamma (2 u 3) j 0 by applying the two functions to 2 u 3 in turn.
It is not fruitful to attack the anomaly by restricting the extensionality axiom,
because the two functions in question are so simple that we expect them to be
equivalent by extensionality, however restricted. The remedy we shall adopt is to
restrict the fi axiom to proper arguments. Now the anomalies disappear, since
neither ? nor 2 u 3 are proper.
Of course, we are left with the question of assigning a meaning to applications
with improper arguments. We choose to make function application strict and distribute
over choice. In symbols, E G. Now both
example functions yield ? when applied to ?, and 0 when applied to 2 u 3.
We should not forget that function types also accommodate bottom and choice.
For reasons of symmetry, we decide that function application should be strict and
distributive in the function as well as in the argument, i.e.
G. These decisions aren't controversial, since there is no reasonable
alternative, but they do lead to a quandary, as we shall see.
The above is the first of several examples of inconsistencies that arise from naively
applying the traditional laws of functions; ridding ourselves of them requires com-
promise. We choose our compromises to make life as pleasant as possible when
we put our tools to their intended use. If our purposes change, then so might our
compromises. For example, not insisting on strictness would be appropriate if the
target program language is a lazy functional language like e.g. Haskell [Peterson
and Hammond 1997]; such a calculus is explored in [Bunkenburg 1997]. In [Hehner
1993; 1998], the decision as to whether function application distributes over choice
is left open in general; it must be decided individually for every abstraction the
developer writes.
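The design decisions of this subsection, application strict and distributive on both sides, can be mirrored in our finite model by representing a specificational function as a choice among proper functions, each of which returns an outcome set; the type SpecFun below is our own device, not the paper's.

```haskell
import qualified Data.Set as Set

-- A specificational function: a list of proper functions to choose
-- among (the empty list stands for bottom); each proper function maps
-- an argument to its set of possible outcomes.
type SpecFun a b = [a -> Set.Set b]

-- Application, strict and distributive in both function and argument.
apply :: Ord b => SpecFun a b -> Set.Set a -> Set.Set b
apply fs xs
  | null fs || Set.null xs = Set.empty
  | otherwise = Set.unions [f x | f <- fs, x <- Set.toList xs]

-- apply [\x -> Set.singleton (x - x)] Set.empty             ==> bottom
-- apply [\x -> Set.singleton (x - x)] (Set.fromList [2,3])  ==> fromList [0]
```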
3.2 Distribution and j
The following inconsistency was discussed but not solved satisfactorily in [Meertens 1986]. Consider F -= -x ffl (x u 3) and G -= (-x ffl x) u (-x ffl 3). Using j, we prove their equivalence by transforming one into the other (here and elsewhere, we omit type information to reduce syntactic clutter):
G
j j
-y ffl (G y)
j ap. distributes over u, fij
-y ffl (y u 3)
j ff
-x ffl (x u 3)
j definition of F
F:
Now we can show that 3 u 4 u 5 u 6 j 3 u 6 by applying the higher-order function -h ffl (h 1) + (h 2) to F and G in turn:
(-h ffl (h 1) + (h 2)) F
j fij, assuming F is proper
(F 1) + (F 2)
j fij, + distributes over u
(1 u 3) + (2 u 3)
j
3 u 4 u 5 u 6;
whereas for G we get:
(-h ffl (h 1) + (h 2)) G
j ap. distributes over u
((-h ffl (h 1) + (h 2)) (-x ffl x)) u ((-h ffl (h 1) + (h 2)) (-x ffl 3))
j fij
(1 + 2) u (3 + 3)
j
3 u 6:
We definitely do not want 3 u 4 u 5 u 6 j 3 u 6. (In the calculation, we have assumed that -x ffl x u 3 is proper. We shall see later on that this is reasonable.)
The j-axiom also leads us to conclude that ? T!U and -x:T ffl ?U are equivalent:
? T!U
j j
-x ffl (? T!U x)
j ap. strict
-x ffl ?U :
But this runs counter to practice in imperative and functional programming
languages. In imperative languages, for example, the body of a function f supplied
as an argument to a procedure proc is not inspected at the point of invoking proc,
but only at the point of invoking f within the body of proc.
We will resolve these dilemmas by restricting the j-axiom to proper functions
only, with ? T!U and such functions as (-x ffl x) u (-x ffl being improper. We will
have much more to say about properness of functions later on.
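The failed equivalence can be checked concretely in the model of the previous sketch; this is again our own illustration, with F and G transcribed as SpecFun values.

```haskell
import qualified Data.Set as Set

type SpecFun a b = [a -> Set.Set b]

-- F = -x . (x u 3): a single abstraction whose body contains the choice.
fF :: SpecFun Integer Integer
fF = [\x -> Set.fromList [x, 3]]

-- G = (-x . x) u (-x . 3): an unresolved choice of two proper functions.
fG :: SpecFun Integer Integer
fG = [Set.singleton, const (Set.singleton 3)]

-- h g = (g 1) + (g 2), with + distributed over the outcome sets and
-- application distributed over the choice of functions.
h :: SpecFun Integer Integer -> Set.Set Integer
h gs = Set.fromList
  [a + b | g <- gs, a <- Set.toList (g 1), b <- Set.toList (g 2)]

-- h fF ==> fromList [3,4,5,6]  -- the choice is re-made at each application
-- h fG ==> fromList [3,6]      -- each branch resolves the choice once
```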
3.3 Extensionality and distribution
Postulating distribution of application has the consequence that extensional equivalence
of functions breaks down in the presence of higher-order functions. Consider
as before F -= -x ffl (x u 3) and G -= (-x ffl x) u (-x ffl 3); any proof of their equivalence must lead to a contradiction because -h ffl (h 1) + (h 2) distinguishes them. Previously, we proved them equivalent by j and distribution; now we prove
them equivalent by extensionality:
F j G
( extensionality
8x ffl F x j G x
, fij and distribution of application over u
8x ffl (x u 3) j (x u 3)
, reflexivity of j
true:
We will choose to resolve the inconsistency by having function equivalence require
more than extensionality in general (it will turn out that in the case of proper
functions, extensionality is sufficient). On reflection, this restriction should not be
surprising because unrestricted extensionality identifies the functions ? T!U and
against our wishes.
3.4 Monotonicity and distribution
It is not too difficult to deduce that if function application distributes over u then
function application is monotone with respect to v, that is,
It follows that we can introduce an inconsistency if we can construct a non-monotone
function. Such a function is f : Two!U such that f u
(recall Two has two propers u and v such that u ! v). Assuming a conditional
expression whose first argument is a proposition is available, we can construct f
thus: -x:Two ffl if x j u then 1 else 2.
We shall avoid this anomaly by admitting only -abstractions -x:T ffl E whose body E is monotone in x, i.e. such that x v y ) E[xnx] v E[xny].
The restriction is not a hindrance in practice because it turns out that all operators
used in programming languages, and just about all those used in specifications
are indeed monotone. The function f above is non-monotone because j is a non-monotone
operator. Trivially, all functions on flat domains are monotone, even if
their bodies employ non-monotone constructs. It is possible to set down reasonable
syntactic rules for forming -abstractions that guarantee monotonicity (although
some acceptable -abstractions might be ruled out).
It is a little disappointing that when we refine the (monotone) body of a -
abstraction, we retain the obligation to show monotonicity - it is not necessarily
preserved by refinement. For example, if x j uthen 1u2 else 1, which is monotone,
is refined by if x j u then 2 else 1, which is not monotone! Fortunately, such
examples are rare (those we know of rely on the use of the strong j or v operators).
However, there are some theoretical surprises as we shall see later.
3.5 Monotonicity and Skolem
The Skolem axiom, (8x:T ffl9y:U fflP ) ) (9f :T !U ffl8x:T fflP [ynf x]), which promises
the existence of certain functions, seems intuitively reasonable, but in the presence
of the monotonicity requirement on functions, it may be promising the impossible.
Recall again type Two with propers u and v such that u ! v. Let P -= (x j u ^ y j 1) \vee (x j v ^ y j 2). Then 8x:Two ffl 9y:Z ffl P holds. But the
function f promised by the Skolem axiom would have to map u to 1 and v to 2,
and this is not monotone. The essence of the problem is that with the restriction
to monotone functions, the Skolem axiom promises the existence of a monotone
mapping for every possibly non-monotone mapping, and that promise cannot be
kept.
We shall not postulate Skolem as an axiom, but will be able to derive it for
"reasonable" P . For example, if the x in P (x; y) is drawn from a flat type, then
Skolem holds. We will return to this point later.
4. THE AXIOMATIZATION PROCESS
We axiomatize prescriptive and conditional expressions. We also explain our method
for axiomatizing types and their operations, using pair-types as an example. This
will serve as a model for the more complex function types to come.
4.1 Axiomatizing language constructs
Let -E abbreviate E 6j ?. The following postulate captures the intuitive idea of
refinement as described earlier:
E v F , (-E ) -F ) ^ (8x ffl F v x ) E v x) (1)
(The conjunct (-E ) -F ) is required to distinguish between ? T and (2x:T ffl true).) From this, and the antisymmetry of v, we can deduce:
E j F , (-E , -F ) ^ (8x ffl E v x , F v x) (2)
It follows that to determine whether a given expression E is refined by another
expression F , or whether E is equivalent to F , all we need to know is whether or
not E and F are bottom, and if not then what propers refine them. So for any
language construct FOO, we need to be able to determine -FOO and FOO v x
(fresh x), and in general our strategy will be to define these by axioms. For a simple
example, see the axiomatization of u in figure 2.
The defining axioms of prescriptive expressions are given in figure 3. (The axiomatization
of prescriptive expressions makes assumptions about the underlying logic.
u- -(E u F ) , -E ^ -F
uv (E u F ) v x , (E v x) \vee (F v x)
Fig. 2. Axioms for u
2- -(2x:T j P ) , (9x:T ffl P )
2v (2x:T j P ) v y , P [xny]
Fig. 3. Axioms for prescriptive expressions
Different assumptions might be made, but they would give rise to some similar
axiomatization.)
It is possible to axiomatize some language constructs FOO by defining FOO v E
for arbitrary expression E. Then we can determine -FOO by instantiating E with
?, and we can determine FOO v x by instantiating E with x. See, for example,
the axiomatizations of conditional and assertion expressions in figure 4.
The assertion expression has the form P ?\Gamma E; it is equivalent to E if P is true, and
otherwise it is ?. Assertion expressions (called "assumptions" in [Ward 1994], and
discussed e.g. in [M-oller 1989], and increasingly becoming part of specification and
program languages, e.g. Eiffel [Meyer 1992]) serve to annotate expressions with a
little knowledge that can be used in their further refinement. For example, finding a
zero of a function on the integers might be specified as -f :Z!Z ffl (2x:Z j f x j 0). If we now wish to inform the implementor that the function will only ever be invoked for monotonically increasing functions, then we can specify
-f :Z!Z ffl (8x; y:Z ffl x - y ) f x - f y) ?\Gamma (2x:Z j f x j 0):
4.2 Axiomatizing types
For each new type that is introduced, we must provide axioms describing its propers.
For the case of pair types, properness seems perfectly straightforward: all proper
pairs are of the form (x; y) for proper constituents x and y, and for every proper x
and y, (x; y) is a proper pair. This is captured in the first two axioms in figure 5.
Figure 5 also gives the axiomatization of operations on pairs. Observe the shape:
there are two axioms to describe each type constructor - here, just pair forma-
tion, and two axioms to describe each type destructor - here, the two projection
functions (we just describe the left projection fst, the right projection snd is simi-
lar). The third axiom states that pair formation is strict in both arguments. The
fourth states that refinement is carried out component-wise; the formulation of the
axiom is slightly cluttered by the need to state that refinement is guaranteed if
either component is ?. Because of the intimate relationship between refinement
and choice, the import of this axiom is that pair-formation distributes over choice
in both arguments. The axioms for fst are obvious. The final axiom in fact implies
that fst distributes over choice.
if (if P then E else F ) v G , (P ) E v G) ^ (:P ) F v G)
?\Gamma (P ?\Gamma E) v G , (P ) E v G)
Fig. 4. Axioms for conditional and assertion expressions
pair.prop proper (x; y)
pair.propers 9x; y ffl p j (x; y)
pair.- -(E; F ) , -E ^ -F
pair.v (E; F ) v p , :-E \vee :-F \vee (9x; y ffl p j (x; y) ^ E v x ^ F v y)
fst.fij fst (x; y) j x
fst.v fst E v x , :-E \vee (9y ffl E v (x; y))
Fig. 5. Axioms for pair types
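In the finite outcome-set model of our earlier sketches, the pair axioms amount to the following (our own sketch): pairing is automatically strict and distributive, and fst distributes over choice.

```haskell
import qualified Data.Set as Set

-- Pair formation: every combination of component outcomes is an
-- outcome; an empty (bottom) component makes the pair bottom.
pairS :: (Ord a, Ord b) => Set.Set a -> Set.Set b -> Set.Set (a, b)
pairS xs ys =
  Set.fromList [(x, y) | x <- Set.toList xs, y <- Set.toList ys]

-- Left projection, distributing over choice.
fstS :: (Ord a, Ord b) => Set.Set (a, b) -> Set.Set a
fstS = Set.map fst

-- fstS (pairS (Set.fromList [1,2]) (Set.fromList [7]))
--   ==> fromList [1,2]
```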
5. AXIOMATIZING FUNCTIONS
We axiomatize function types, following the same strategy as for pairs, and keeping
in mind the desired properties of functions collected earlier.
5.1 The core axioms
A partial axiomatization of functions is given in figure 6. Recall that we use lower
case letters for propers, and uppercase letters for arbitrary expressions. Conse-
quently, the lower case variables in figure 6 can be universally quantified over (we
have not done so to minimize syntactic clutter), but not the upper case ones. We
have also omitted types to avoid clutter. For example, the first axiom in full is 8f :T!U ffl f j (-x:T ffl f x). The first axiom is just familiar j-equivalence on proper functions and shows that
every proper function can be written as a -abstraction. Taking pair types as our
model, we should expect a companion axiom which determines which -abstractions
are proper. It turns out that a lot can be said without committing to that axiom,
and so we postpone further consideration of it for a moment.
The second group of axioms defines -abstraction. Axiom - ensures that our
desire to distinguish ? and -x:T ffl ? is met. Axiom -v implies our desired property
of extensional equivalence for proper functions.
The remaining axioms define the single function destructor, viz. application, in
effect asserting that function application is strict and distributive, and that application
can be reduced to substitution. The essential interplay between abstraction
and application is captured in axiom fij (there is nothing corresponding to axiom
fij in pair types). Axiom ap.- states the three ways by which function application
yields ?: either the function is ?, or the argument is ?, or the application
is normal but yields ?. Observe in both ap.- and ap.v that the application E F is determined by considering all applications e f where e and f are proper
refinements of E and F , respectively.
Even though we have not yet fixed what abstractions are proper, the axioms of
figure 6 imply a large body of desired theorems, including those listed in figure
7. In particular, function application is strict, distributive, and monotone on
either side. For proper functions, refinement and equivalence are extensional. A -
abstraction can be refined by refining its body (assuming monotonicity is preserved).
The bound variable in an abstraction can be renamed (this is assuming that the
logic gives renaming of universally quantified variables). Finally, moving choice out
of an abstraction is a refinement, not an equivalence as might be expected. To
see this, observe that -x:Z ffl if even x then 2 else 3 refines -x:Z ffl 2 u 3, whereas (-x:Z ffl 2) u (-x:Z ffl 3)
j f j (-x ffl f x)
- -(-x ffl E)
-v (-x ffl E) v f , (8x ffl E v f x)
fij (-x ffl E) y j E[xny]
ap.- -(E F ) , -E ^ -F ^ (8e; f ffl E v e ^ F v f ) -(e f ))
ap.v E F v x , :-(E F ) \vee (9e; f ffl E v e ^ F v f ^ e f v x)
Fig. 6. Core axiomatization of functions
ap.str. ? G j ?; E ? j ?
ap.dist. (E u F ) G j (E G) u (F G); E (F u G) j (E F ) u (E G)
ap.mon. E v F ) E G v F G
ap.mon. G v H ) E G v E H
ext. f v g , (8x ffl f x v g x)
-mon. (8x ffl E v F ) ) (-x ffl E) v (-x ffl F )
-ff (-x ffl E) j (-y ffl E[xny]), y not in the free variables of E
-=u -x ffl (E u F ) v (-x ffl E) u (-x ffl F )
Fig. 7. Some theorems which follow from the axioms of figure 6
has as refinements only itself, -x:Z ffl 2 and -x:Z ffl 3.
To illustrate the axioms, we prove -mon. from the axioms and then deduce -=u:
Assuming 8x ffl E v F :
(-x ffl E) v (-x ffl F )
( (1), axiom -
8f ffl (-x ffl F ) v f ) (-x ffl E) v f
, axioms -v
8f ffl (8x ffl F v f x) ) (8x ffl E v f x)
( assumption, v transitive
true:
We deduce:
-x ffl (E u F ) v (-x ffl E) u (-x ffl F )
, u is greatest lower bound with respect to v
(-x ffl (E u F ) v -x ffl E) ^ (-x ffl (E u F ) v -x ffl F )
( -mon.
(8x ffl E u F v E) ^ (8x ffl E u F v F );
which is a property of choice.
The axioms of figure 6 resolve all the anomalies of section 3. For example, recall
functions F -= -x ffl (x u 3) and G -= (-x ffl x) u (-x ffl 3). We used the putative
equivalence of these functions in Section 3 to construct difficulties with both j
and extensionality. Although they remain extensionally equal, they are no longer
equivalent:
F j G
, (2)
(-F , -G) ^ (8f ffl F v f , G v f )
, -F , -G, arbitrary f
F v f , G v f
, definition of G; u v
F v f , ((-x ffl x) v f \vee (-x ffl 3) v f )
- the final line does not in general hold, which can be seen by instantiating f
with -x ffl if even x then x else 3.
5.2 Proper functions
We must now address the question "What functions are proper?". First, we definitely
expect traditional functions to be proper, i.e. those abstractions -x:T ffl E
for which E is proper for all (proper) x.
Second, we have already ruled that ? T!U is not proper.
Third, consider those abstractions -x:T ffl E for which E is ? for some or all values
of x and proper for the remaining values of x, i.e. the traditional partial functions.
Assume for a moment that we deem these to be improper, and consider the extreme
example -x:T ffl ?. Because -x:T ffl ? is improper and not ?, it must be equivalent
to the choice over its proper refinements. But that is impossible, because by our
assumption all proper functions have proper outcomes. We conclude that we must
include the partial functions among the propers.
Finally, we are left with the question of whether functions such as -x ffl 2 u 3
whose bodies contain an unresolvable choice, are to be considered proper. There
seem to be two reasonable views. The liberal view is to accept all abstractions as
proper. The conservative one is to regard -x ffl 2 u 3 as improper, in which case it is
equivalent to the choice over the (infinite) collection of proper functions that refine
it, i.e. all functions whose application to any argument is equivalent to either 2 or
3 (but not 2 u 3), including -x ffl 2, -x ffl 3, -x ffl if even x then 2 else 3 and many
others. According to this view, an abstraction is proper iff its body is proper or
bottom.
The liberal view can be adopted by postulating the axiom
-prop: proper(-x:T ffl E)
(Incidentally, axiom - can be erased from figure 6 in the presence of this axiom
since it follows therefrom, assuming the underlying logic ensures 8x:T ffl -x.) It has
the disadvantage that the proper functions include rather exotic elements which we
will frequently have to exclude explicitly, cluttering up our specifications. On the
other hand, the liberal view is often convenient calculationally. In particular, we
can tell by appearance whether a function F is proper or not, and hence whether
F can be the subject of an instantiation, and whether fij applies when F is the
argument of a higher order function.
The conservative view is embodied in the axiom
proper(-x:T ffl E) , (8x:T ffl -E ) proper E)
It states, in effect, that the proper functions are the partial functions of mathe-
matics. However, in calculations it may be hard to prove that a given abstraction
is proper, or that there exists a proper function satisfying some property. For in-
stance, to show that -x ffl E has a proper refinement, i.e. 9f ffl (-x ffl E) v f , we have
to show 9f ffl 8x ffl E v f x. But it is not enough to exhibit a mapping from x to y x
such that 8x ffl E v y x - we must find a monotone mapping.
Although the liberal and conservative views seem roughly equally attractive on
the surface, the conservative view has to be rejected because of a subtle anomaly.
Consider function G -= -x:Two ffl if x j u then 1 u 2 else 1. This would seem to have two refinements (other than itself), i.e. G 1 -= -x:Two ffl if x j u then 1 else 1 and G 2 -= -x:Two ffl if x j u then 2 else 1. However, G 2 has to be rejected because
it is not monotone. We can now conclude with the help of law (2) that G j G 1
, in
contradiction of our assumption that G 1
is proper while G is not! It seems that the
only reasonable escape is to include G among the propers. Now (2) implies G 6j G 1
because G has G itself as a proper refinement, whereas G 1
does not. The root
cause of this anomaly is that certain seemingly natural refinements of functions are
lost to the monotonicity requirement. Although this is true of both the liberal and
conservative views, it is not significant in the former view because the "parent"
function is retained among the proper refinements and so no information is lost.
In conclusion, specificational functions are axiomatized by figure 6, with axiom
- replaced by -prop.
5.3 Total functions
Function types bring a new subtlety to the notion of properness. Up to this point,
we have had the property that if E is proper then it has no refinements other than
itself, but this no longer holds once function types are introduced. For example,
(-x:Z ffl 1 u 2) v (-x:Z ffl 1), although both functions are proper. Let us say
that an expression E (of type T , say) is total and satisfies total E, iff it has no
refinements other than itself, i.e. iff 8x:T ffl E v x ) E j x. For example, -x:Z ffl 2
is total, but not -x:Z ffl x-0 ?\Gamma 2. On flat types (and pair types derived from flat
types), totality and properness are identical. On function types, however, totality
is stronger than properness, and in fact specializes to the usual sense of the word.
Totality is obviously an important concept. We can use it to rescue the Skolem
axiom which we have shown cannot hold for specificational functions in general.
We can prove
(8x:T ffl 9y:U ffl P ) ) (9f :T!U ffl 8x:T ffl total x ) P [ynf x]):
There still remains an irritation, however: although Skolem promises the existence
of a function f satisfying a property P , it is not guaranteed that all the
refinements of f also satisfy P . For that, we should restrict ourselves to predicates
that are monotone in y in the sense that y v y 0 and P together imply P [yny 0 ], for every x. This condition is automatically satisfied when the type of y is flat.
5.4 Recursive functions
Recursively defined functions have the form -f :T !U ffl -x:T ffl E where E is of type
U and may contain free occurrence of f . This notation allows us to write a recursive
function without naming it. More usually, recursive functions are simultaneously
unfold (-f ffl -x ffl E) j -x ffl E[fn-f ffl -x ffl E]
prefix (-x ffl E[fnF ]) v F ) (-f ffl -x ffl E) v F
Fig. 8. Axioms for recursive functions
described and named by writing f -= -x:T ffl E, where f may occur in E. The
body of the recursive function must be monotone in f to ensure that
the fixpoint exists (more details of this are given in the section on denotational
semantics below). We axiomatize recursion in the standard way as a fixpoint, and
the least prefixpoint with respect to refinement; see figure 8 (again, types have been
omitted).
In program development, -abstractions are refined to recursive functions by recursive
refinement. Suppose we set out to implement the specification -x ffl E via
recursion. Our modus operandi is to proceed through a series of refinements beginning
with E and ending with a term of the form F [-x ffl E] where -x ffl E appears
within F. Along the path, we will have transmuted non-algorithmic notation into
algorithmic substitutes. We can now almost conclude that -f ffl -x ffl F [f ] is the
recursive function we were after, i.e. (-x ffl E) v (-f ffl -x ffl F [f ]). We say "almost"
because we have not ensured that the recursion is well-founded, or in programming
terms, that invocations of the recursive function terminate. As every programmer
knows (although not necessarily in formal terms), to ensure termination of a recursive
function, the argument of each recursive call must be less that the "incoming"
arguments with respect to some well-ordering. This added requirement is captured
in the formal statement of the recursion introduction theorem below:
Theorem -fun. If E v F [fn-y ffl y!x ?\Gamma E[xny]] for all x then (-x ffl E) v
(-f ffl -x ffl F ), where ! is a well-order on the source type of f .
Proof. To begin:
(-x ffl E) v (-f ffl -x ffl F )
, axm unfold
(-x ffl E) v (-x ffl F [fn-f ffl -x ffl F ])
( thm -mon.
8x:T ffl E v F [fn-f ffl -x ffl F ]:
We prove 8x:T ffl E v F [fn-f ffl -x ffl F ] by well-founded induction over !, where here P [x] -= E v F [fn-f ffl -x ffl F ]. For arbitrary x of type T :
P [x]
( assumption of the theorem, and v transitive
F [fn-y ffl y!x ?\Gamma E[xny]] v F [fn-f ffl -x ffl F ]
( mon. in expression variable f
(-y ffl y!x ?\Gamma E[xny]) v (-f ffl -x ffl F )
, axm unfold
(-y ffl y!x ?\Gamma E[xny]) v (-x ffl F [fn-f ffl -x ffl F ])
, thm -ff
(-y ffl y!x ?\Gamma E[xny]) v (-y ffl F [fn-f ffl -x ffl F ][xny])
( thm -mon., a property of ?\Gamma, and the induction hypothesis
8y ffl y!x ) P [y]:
5.5 Example
We construct a small program to decompose a natural number into the sum of two
squares. The starting specification is
ss -= -n:N ffl (2i; j:N j i 2 + j 2 j n):
We proceed in the standard way for such problems by choosing a finite search
space containing an i and j such that i 2 + j 2 j n (assuming there is such an i and j), and then repeatedly testing the pairs it contains. If the candidate (i; j) we test satisfies i 2 + j 2 j n we are done, and if not we reduce the search space by removing (i; j) (and any other pairs we can eliminate at the same time). We choose as our search space all the (i; j)'s such that lo - i - j - hi, where lo and hi are some naturals, initially 0 and b p nc, respectively. In summary,
ss v -n:N ffl ss 1 (0; b p nc)
where
ss 1 -= -(lo; hi):N \Theta N ffl (2i; j:N j lo - i - j - hi ^ i 2 + j 2 j n):
Let E denote the body of ss 1 . It is not difficult to conclude the following facts about E:
lo 2 + hi 2 j n ) E j (lo; hi)
lo 2 + hi 2 ! n ) E j ss 1 (lo + 1; hi)
n ! lo 2 + hi 2 ) E j ss 1 (lo; hi \Gamma 1)
Using elementary properties of if ::: then ::: else ::: we infer
E v if lo 2 + hi 2 j n then (lo; hi)
else if lo 2 + hi 2 ! n then ss 1 (lo + 1; hi)
else ss 1 (lo; hi \Gamma 1):
For a termination argument, observe first that -E implies lo - hi. Second,
observe that in comparison with E, which is ss 1 (lo; hi), the absolute difference of the two arguments of ss 1 (lo + 1; hi) is less by 1, and similarly for ss 1 (lo; hi \Gamma 1). Practitioners will recognize the ingredients for the well-founded ordering to show termination when -E holds; we omit the details. When -E does not hold, termination is not an issue. Hence, we have shown ss 1 v f where
f -= -(lo; hi) ffl if lo 2 + hi 2 j n then (lo; hi)
else if lo 2 + hi 2 ! n then f (lo + 1; hi)
else f (lo; hi \Gamma 1):
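For reference, the derived program transcribes directly into a runnable function; this version is our own, and it returns Nothing where the specification is bottom (when n is not a sum of two squares), which is one acceptable refinement of "any behaviour at all".

```haskell
-- Search for lo <= hi with lo^2 + hi^2 == n, narrowing from both ends.
ss1 :: Integer -> (Integer, Integer) -> Maybe (Integer, Integer)
ss1 n (lo, hi)
  | lo > hi                = Nothing
  | lo * lo + hi * hi == n = Just (lo, hi)
  | lo * lo + hi * hi <  n = ss1 n (lo + 1, hi)
  | otherwise              = ss1 n (lo, hi - 1)

ss :: Integer -> Maybe (Integer, Integer)
ss n = ss1 n (0, isqrt n)
  where isqrt = floor . sqrt . (fromIntegral :: Integer -> Double)

-- ss 25 ==> Just (0,5);  ss 50 ==> Just (1,7);  ss 3 ==> Nothing
```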
6. BOOLEANS AND PROPOSITIONS
Naturally, every refinement calculus will have the booleans as a given basic type
(which will be a flat type). Although it is common to define the boolean connectives
as strict and distributive (see, for example [Larsen and Hansen 1996]), or as left-to-
right evaluating (see, for example, [Partsch 1990]), these turn out to be seriously
unattractive. Many of the familiar laws of boolean algebra break down, with the
result that the booleans become awkward to handle calculationally. It turns out,
however, that the boolean connectives can be extended to accommodate bottom
and choice in quite a different way such that almost all the familiar laws of boolean
algebra continue to hold (the main loss is the law of the excluded middle). This is
described in [Morris and Bunkenburg 1998a; 1998b].
Actually, because the language of a refinement calculus is used to make specifications
(which are a superset of programs), the operations in every type will be about
as rich as mathematics can provide. In particular, in specificational languages the
boolean type includes arbitrary universal and existential quantifications, even over
infinite domains. Again, it turns out that the quantifiers can be suitably extended
to allow for bottom and choice while retaining just about all the laws of predicate
calculus (the only loss is that instantiation can only be carried out with proper
terms). See [Morris and Bunkenburg 1998b] for the details, including a Hilbert-style
axiomatic presentation.
We have made few assumptions about the logic with which we reason about
specifications, and so our theory is pretty much independent of the choice. Our
own choice is unusual. Because we have had to construct a comprehensive axiomatization
of the booleans including quantifiers, we have elected to avoid duplicated
effort by adopting the same logic for reasoning about specifications. In short, we
do not distinguish between propositions and booleans. This means, for example,
that even something as exotic as E v F or E j F is a boolean expression (and
hence a specification). There are advantages and disadvantages. One disadvantage
is that refinement and equivalence are the primary sources of non-monotonicity in
-abstractions. An advantage is that it facilitates the smooth transition of specifications
to programs, because propositions can migrate effortlessly into specifica-
tions/programs, as often they want to when we formally extract programs from
specifications. See [Morris and Bunkenburg 1998b] for an example of this.
7. MODEL THEORY
In this section, we provide a denotational semantics for the language constructs we
have given. The purpose is to ensure consistency of the calculus and to underpin
our understanding of it, particularly recursion. The model is parameterized by
models of base types and a logic.
Z [Z], a given set ordered by equality
T \Theta U [T ] \Theta [U ], ordered componentwise
T!U ff 2 [T ]!P 1 [U ] j f monotoneg
Fig. 9. Interpretations of the types
The grammar of the types is type ::= B i j type \Theta type j type!type, for given base types B i . Each type T is interpreted by a complete partial order ([T ]; -T ), and ? is an element that none of the [T ] contain. For each T , we extend the partial order -T to [T ] ? -= [T ] [ f?g by putting ? -T t for every t.
The usual way of giving semantics to a functional programming language is to
interpret every expression of type T as an element of [T ] ? . However, this approach
does not accommodate choice.
We can accommodate choice by interpreting every expression E as the subset
of [T ] ? that intuitively contains interpretations of all the possible outcomes of E.
To model recursion and refinement, we need an order on these sets. The order
appropriate for a 'total correctness' calculus (i.e. one in which E u? j ?) such as
this one is the "Smyth" order (v 0 ) (see [Plotkin 1976; Smyth 1978]), defined by
where A and B are subsets of [T ] ? . Inconveniently, (v 0
) is not antisymmetric. We
make it antisymmetric by restricting it to upclosed sets only.
The upclosure of a subset S of a partial order (P; -) is written S" - , defined by S" - -= fp 2 P j 9s 2 S ffl s - pg. To avoid double subscripts, we abbreviate S" -T by S"T . Conveniently, for upclosed sets, (v 0 ) is identical with ('). Therefore, we will interpret every expression of type T as an upclosed subset of [T ] ? , giving meaning to recursion and refinement using (').
For every type T , we denote by P [T ] the set fS ' [T ] ? j S"T = Sg, that is, the upclosed subsets of [T ] ? . We abbreviate P [T ] n f;g by P 1 [T ].
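For finite posets, upclosure and the Smyth order can be computed directly; the sketch below is ours, taking the order as a predicate and representing subsets as lists.

```haskell
import Data.List (nub)

-- Upclosure of s within a finite universe, for a partial order leq.
upclose :: Eq a => (a -> a -> Bool) -> [a] -> [a] -> [a]
upclose leq universe s = nub [p | p <- universe, any (`leq` p) s]

-- Smyth order: a <=' b iff every element of b dominates some element
-- of a.  On upclosed sets this coincides with reverse inclusion.
smyth :: (a -> a -> Bool) -> [a] -> [a] -> Bool
smyth leq a b = all (\y -> any (`leq` y) a) b
```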
For each base type B i , a set [B i ] is given, and it will be ordered simply by equality.
Pair types are interpreted as the product of the constituent types. The function
type T!U is interpreted by the functions f from [T ] to P 1
[U ] that are monotone
in the sense that t -T u implies f t ' f u. The interpretation of base type Z, and the interpretations of pair- and function-types, are collected in figure 9, where - 0 and - 1 denote the left and right projections from pairs to their constituents.
Construction of a universal domain D, which is a superset of [T ] for every type
T , is clerical, using the definitions in figure 9. Readers familiar with models of the
untyped -calculus will note that we do not require D to be isomorphic to D!D since we are dealing
with a typed language.
An environment ae is a mapping from each variable of type T to an element of
[T ]. In addition, it maps each recursion-dummy f of type T!U to an element of P [T!U ].
Expressions are interpreted by induction on their structure. Every expression E
of type T is interpreted in environment ae by the set [E]ae, which is an element of P [T ].
proposition interpretation in ae
E v F tt if [E]ae ' [F ]ae, and ff otherwise
E j F tt if [E]ae = [F ]ae, and ff otherwise
Fig. 10. Interpretation of refinement and equivalence
expression X of type T its interpretation [X]ae
c f[c]g
x fae xg"T
? T [T ] ?
E u F [E]ae [ [F ]ae
(2x:T j P ) the upclosure of ft 2 [T ] j [P ]ae[x 7!t] = ttg, or [T ] ? if that set is empty
if P then E else F if [P ]ae = tt then [E]ae else [F ]ae
-x:T ffl E ff 2 [T!U ] j 8t 2 [T ] ffl f t ' [E]ae[x 7!t]g
E F the union of all ap(e; f ) with e 2 [E]ae and f 2 [F ]ae,
where ap(e; f )-= if e = ? or f = ? then [U ] ? else e f
rec. dummy f ae f
-f ffl -x ffl E the least fixpoint of F ,
where F S -= [-x ffl E]ae[f 7!S]
Fig. 11. Interpretation of the expressions
For each constant c of base type T , we are given an element [c] of [T ]. For every
operator symbol f of the base types we are given a "matching" function [f ]. By
"matching" we mean that if the argument types of f are T its result
type is U , then [f ] is a function in [T 0
We don't give a model for the logic. Rather we assume that every proposition
P is interpreted in environment ae as some element [P ]ae. We assume that tt is the
interpretation of some theorem of the logic, and that ff is the interpretation of some
anti-theorem of the logic. With two-valued logic, the domain of propositions would
be ftt; ffg, the interpretations of true and false. We give interpretations for the
propositions
Figure
11 gives the semantics of the typed expressions. It can easily be verified
that for every expression E of type T , its interpretation [E]ae is a set upclosed with
respect to - T .
The recursive function -f ffl -x ffl E is interpreted by the fixpoint of the functional
F as defined. F acts on P [T !U ] which is a complete lattice under the order '. By
the language restriction that λx • E must be refinement-monotone in f, we ensure
that F is ⊇-monotone, and therefore the generalized limit theorem ([Hitchcock and
Park 1972; Nelson 1989]) ensures that a least fixpoint exists. The mathematics of
this construction do not exclude cases where μF is the empty set, or where μF is
a set containing functions that map some arguments to the empty set. One should
choose the specification language in such a way that this cannot happen, and in
fact μF is an element of P₁[T→U].
Finally, we give some examples. The three expressions 2, 2 ⊓ 3, and ⊥_Z are
interpreted as {2}, {2, 3} and {⊥, 0, ±1, ±2, …} respectively. The abstraction λx:Z • 2 is
interpreted as the singleton set containing the function that maps every element
of [Z] to {2}. The abstraction λx:Z • 2 ⊓ 3 is interpreted as the set containing all
those functions mapping each element of [Z] to a non-empty subset of {2, 3}.
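To make this set-valued reading concrete, the following minimal C++ sketch (our illustration, not the paper's model; UpSet and its members are invented names) represents an upclosed subset of [Z]⊥ under the flat order, with choice as union and refinement as reverse containment:

#include <algorithm>
#include <iostream>
#include <set>

// Sketch (assumed representation): under the flat order on [Z]_bot, an
// upclosed set either contains bottom (and then, by upclosure, every
// integer, i.e. it is all of [Z]_bot) or is just a set of proper integers.
struct UpSet {
    bool hasBottom;        // true => denotes all of [Z]_bot
    std::set<int> props;   // proper elements (ignored if hasBottom)

    static UpSet value(int n) { return UpSet{false, {n}}; }
    static UpSet bottom()     { return UpSet{true, {}}; }

    // Demonic choice E ⊓ F: union of the possible outcomes.
    UpSet choice(const UpSet& o) const {
        if (hasBottom || o.hasBottom) return bottom();  // E ⊓ ⊥ ≡ ⊥
        UpSet r = *this;
        r.props.insert(o.props.begin(), o.props.end());
        return r;
    }
    // Refinement E ⊑ F: for upclosed sets, F's outcomes ⊆ E's outcomes.
    bool refinedBy(const UpSet& o) const {
        if (hasBottom) return true;          // bottom is refined by anything
        if (o.hasBottom) return false;
        return std::includes(props.begin(), props.end(),
                             o.props.begin(), o.props.end());
    }
};

int main() {
    UpSet two = UpSet::value(2), three = UpSet::value(3);
    std::cout << two.choice(three).refinedBy(two) << '\n';  // 1: 2 ⊓ 3 ⊑ 2
    std::cout << UpSet::bottom().refinedBy(two) << '\n';    // 1: ⊥ ⊑ 2
}

Both lines print 1, matching the laws 2 ⊓ 3 ⊑ 2 and ⊥ ⊑ E used in the calculus.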
8. CONCLUSION AND RELATED WORK
Specificational functions are made by combining regular functions with choice and
bottom, taking special care to accommodate higher-order functions. We have shown
examples of their usefulness both in making specifications, and in extracting computational
functions from those specifications. The technical difficulties that stand
in the way of establishing a consistent set of laws for reasoning about them are well
known, and have led to severe restrictions on their use in the past (see the commentary
on related work below). We have picked our way around these difficulties
to arrive at an axiomatic treatment which does not fall foul of the anomalies. The
key elements in our approach are:
(1) a richer notion of equivalence than extensionality, based on refinement;
(2) a restriction on quantifications to range over proper elements only, where properness
is carefully defined for each type; and
(3) the imposition of a monotonicity requirement on functions.
Clearly, some compromises have to be made, but we believe that practitioners could
comfortably live with the compromises we have arrived at.
The two standard ways of combining functions and choice are to move to set-valued
functions or to generalize functions to binary relations. Here we reject
moving to set-valued functions, because in calculations it leads to frequent packing
and unpacking of values into and out of sets even though often the set is just a
singleton set. The relational approach lends itself well to calculation (see e.g. [Bird
and de Moor 1997; Brink and Schmidt 1996]). However, we would like to stick
with functions because they capture better the directional nature of programs as
input-to-output mappings. This decision means we don't lose the direct connection
to current programming languages, which have functions, not relations.
Munich CIP is a development method based on a wide-spectrum language CIP-L;
a comprehensive account of it is given in [Partsch 1990]. CIP-L has similar concepts
to our choice, equivalence ("strong equality"), refinement ("descendancy"), bottom
and properness. Functions are defined as (possibly recursive) abstractions. However,
there are severe limitations placed on functions in CIP-L. Most importantly,
choice and quantification over functions is forbidden. It is not clear to us how this
restriction is enforced. In this paper, bodies of λ-abstractions must be monotone
in the bound variable. In CIP-L that is automatic, since every language construct
is monotone anyway. All λ-abstractions are considered proper. The main transformation
rules are α, βᵥ (called "unfold"), and our theorem relating (λx • E).F and E[x\F],
but no concise axiomatization is given in [Partsch 1990] and so it is not clear
which "rules" are axioms and which theorems.
Calculation with functions is dealt with in the thesis [Hoogerwoord 1989]. The
language has underdetermined "where" clauses that specify a value by asserting a
property it enjoys. This is closely related to Hilbert's ε, and so refinement is pretty
much impossible.
Norvell and Hehner present a refinement calculus for expressions in [Norvell and
Hehner 1993], by giving an incomplete list of axioms for a language similar to the
one discussed here. The main differences lie in the treatment of termination and the
properness of functions. Their calculus does not have a term bottom representing
non-termination, and it is therefore a partial correctness calculus. For termination
of recursive programs they annotate the expressions with abstract execution times.
It is not clear how refinement and timing relate. An abstraction λx • E is a proper
function (in their terminology "an element") iff E is proper for all x. Since the
language has no bottom, that implies that only traditional functions are elements.
It seems to be their intention to have flat types only, which would avoid the anomalies of
our subsections 3.4 and 5.2. However, we believe their system is inconsistent, since
there are λ-abstractions that are non-miraculous refinements of proper functions,
which implies that function types are non-flat.
In closely related work ([Hehner 1993; 1998]), a half-and-half approach to distributivity
of functions over choice in their arguments is described. They distinguish
between those occurrences of the parameter in the body of the function for which
distribution is appropriate, and those for which direct substitution of the argument
is appropriate. These occurrences can be syntactically decided, and of course are
fixed for each function. Therefore, some λ-abstractions do distribute over choice in
their arguments, and others don't. However, this approach seems to impose a host
of trivial concerns on the programmer. It is not clear whether there are sufficient
compensating gains.
Ward's thesis ([Ward 1994]) presents a functional specification and programming
language, defined by a semantics that models demonic and angelic choice. However,
the language is not given a proof theory, and there is no suggestion that the given
refinement laws are sufficient in practice. New refinement laws can be generated
by proving candidates sound using the semantics. All abstractions are proper;
therefore function types are non-flat, and the inconsistency described in subsection
3.4 occurs, but is not addressed.
The language of VDM includes "loose let-expressions" of the form
let x:T be s.t. P in E.
Its intended meaning is: E with x bound to an arbitrary value satisfying P. However,
its axiomatization has proved elusive, and [Bicarregui et al. 1994] suggest the
approach taken by [Larsen and Hansen 1996].
Larsen and Hansen [Larsen and Hansen 1996] present a denotational semantics
for a functional language with under-determinism. The type language is extended
by comprehension-types of the form fE j x:T ffl Pg, and the expression language
is extended by choice T , under-deterministically selecting an element of the type
T . However, which element is chosen depends on the whole environment, even on
those variables that don't occur in T . The proof system is based on generalized
type inference, with propositions of the form E : T rather than equivalence and
refinement relations. Indeed, it is hard to see how equivalence and refinement would
Specificational Functions \Delta 23
fit in. Larsen and Hansen consider those λ-abstractions proper that map
propers to bottom or to propers. Therefore the anomalies of subsections 3.4 and 5.2 could
be reproduced, if strong operators such as ≡ were added to the language.
Previous work by one of the present authors ([Morris 1997]) gives weakest-
condition semantics to an expression language with choice and bottom. The style
of semantics fits the weakest-precondition semantics of the imperative refinement
calculus. However, no axioms or logic are given, and issues pertaining to functions
are not addressed.
ACKNOWLEDGMENT
We thank David Watt and Richard Botting for reviewing drafts of this paper.
--R
Proof in VDM: A practitioner's guide.
Algebra of Programming.
Relational Methods in Computer Science.
Supplemental Volume
Expression Refinement.
Avoiding the undefined by underspecification.
A practical theory of programming.
Unified algebra.
Induction Rules and Termination Proofs.
The design of functional programs: a calculational approach.
Semantics of under-determined expressions
Mathematical Logic and Hilbert's ε-symbol
Eiffel: The Language.
E3: A logic for reasoning equationally in the presence of partiality.
Undefinedness and nondeterminacy in program proofs.
A generalization of Dijkstra's calculus.
Specification and Transformation of Programs.
A Powerdomain Construction.
Power domains.
A refinement calculus for nondeterministic expressions.
Systematic Programming
--TR
The Munich Project CIP
A generalization of Dijkstra's calculus
Specification and transformation of programs: a formal approach to software development
Eiffel: the language
Non-determinism in functional languages
A practical theory of programming
Proof in VDM
Algebra of programming
Non-deterministic expressions and predicate transformers
Relational methods in computer science
Systematic Programming
Applicative Assertions
Logical Specifications for Functional Programs
--CTR
J. M. Morris , A. Bunkenburg, A source of inconsistency in theories of nondeterministic functions, Science of Computer Programming, v.43 n.1, p.77-89, April 2002
Joseph M. Morris , Malcolm Tyrrell, Terms with unbounded demonic and angelic nondeterminacy, Science of Computer Programming, v.65 n.2, p.159-172, March, 2007 | function;refinement calculus;nondeterminacy;logic;expression |
319359 | Efficient compression of non-manifold polygonal meshes. | We present a method for compressing non-manifold polygonal meshes, i.e. polygonal meshes with singularities, which occur very frequently in the real-world. Most efficient polygonal compression methods currently available are restricted to a manifold mesh: they require a conversion process, and fail to retrieve the original model connectivity after decompression. The present method works by converting the original model to a manifold model, encoding the manifold model using an existing mesh compression technique, and clustering, or stitching together during the decompression process vertices that were duplicated earlier to faithfully recover the original connectivity. This paper focuses on efficiently encoding and decoding the stitching information. By separating connectivity from geometry and properties, the method avoids encoding vertices (and properties bound to vertices) multiple times; thus a reduction of the size of the bit-stream of about 10% is obtained compared with encoding the model as a manifold. | Table
1. To encode the information necessary to cluster some
vertices and recover the polygon-vertices incidence
relationship of the original model, one simple approach
(which we are not using, but explain for comparison)
would be to transmit a table such as this, with one row
per cluster. This table relates to the example of Figs. 1-3
Number of vertices in cluster | Vertex references

Fig. 4. Overall compressed syntax for a non-manifold mesh.
Another approach consists of devising special codes to indicate which vertices are copies of previously
transmitted vertices, specifying which vertex they should be clustered to. This method avoids duplicate
transmission of vertex coordinates and properties, but incurs a cost of log n bits for each vertex that was
previously transmitted.
A refined version of this approach would transmit a table with one row per cluster, as in Table 1. We
denote by m the number of clusters. (If each non-manifold vertex must be replicated during the non-
manifold-to-manifold mesh conversion process as discussed in Section 3, m is the same as the number
of non-manifold vertices in the original mesh.) For instance, m = 3 in Fig. 2. Each entry of the table
would indicate the number of vertices in the cluster and their references: this is shown in Table 1 for
the example that we are using. If the total number of vertex replications is r (r = 9 in Fig. 2), and the
maximum number of vertices in one cluster is c, the total cost of this approach is m log c + r log n.
In this paper, we improve considerably upon this second method, to incur a worst-case cost of at most
log m bits for each vertex that was previously transmitted, where m is the number of clusters, which is
typically much smaller than n. Moreover, this is a worst-case cost, as we will show in Section 5 that for
many vertices, the clustering information is implicit, and requires zero bits to encode. This is achieved
using the concept of stitches, which are defined and fully described in Section 5.
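As a worked comparison with hypothetical numbers (the actual cluster sizes of Table 1 are not reproduced here): for Fig. 2 we have m = 3, r = 9 and n = 10; assuming a largest cluster of c = 4 vertices, the table-based approach costs about m⌈log₂ c⌉ + r⌈log₂ n⌉ = 3·2 + 9·4 = 42 bits, whereas a worst case of ⌈log₂ m⌉ = 2 bits per replicated vertex gives at most 9·2 = 18 bits, even before counting the vertices whose clustering is implicit.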
1.2. Representation for compression
The mesh is compressed as indicated in Fig. 4. For each manifold connected component, the
connectivity is encoded, followed with optional stitches, and geometry 6 and properties. Stitches are
used to recover the vertex clustering within the current component and between vertices of the current
6 The vertex coordinates.
component and previous components. In this way, for each cluster, geometry and properties are only
encoded and decoded for the vertex of the cluster that is encountered first (in decoder order of traversal).
1.3.
Overview
In Section 2 we review related work. We review in Section 3 an algorithm for producing manifold
meshes starting with a non-manifold mesh. Section 4 describes a method for compressing manifold
meshes. Sections 5-8 develop methods for representing a vertex clustering using stitches. In Section 9 we
give details on the encoding process and describe a proposed bit-stream syntax for stitches. In Section 10
we study the decoding process. Our algorithms are analyzed in Section 11. In Section 12 we provide
compression results on a variety of non-manifold meshes.
2. Related work
2.1. Non-manifold mesh compression
Connectivity-preserving non-manifold mesh compression algorithms were proposed by Popovic and
Hoppe [13] and Bajaj et al. [1]. Hoppe's Progressive Meshes [10] use a base mesh and a series of
vertex insertions (specifically, inverted edge contractions) to represent a manifold mesh. While the main
functionality is progressive transmission, the encoding is fairly compact, using 30-50 bits per vertex with
coding [10]. This method was extended in [13] to represent arbitrary simplicial complexes,
manifold or not, using about 50 bits per vertex (asymptotically the cost of this method is O.nlogn/ bits, n
being the number of vertices). This method of [13] works by devising special codes to encode all possible
manifold or non-manifold attachments of a new vertex (and sustaining edges and triangles) to an existing
mesh. A code must be supplied for each vertex that is encoded. Our present method improves upon [13]
by achieving significantly smaller bit-rates (about 10 bits per vertex or so) and reducing encoding time
(admittedly, an off-line process) by more than four orders of magnitude (without levels-of-detail).
Bajaj et al.'s DAG of rings mesh compression approach [1] partitions meshes in vertex and triangle
layers that can represent a non-manifold mesh. A vertex layer is a set of vertices with the same topological
distance to an origin vertex. A vertex layer is a graph that may contain non-manifold vertices, which
correspond to branching points. Encoding the various branches requires indices that are local to the
vertex layer. In this paper, we encode the same information by indexing among a subset of the m non-manifold
vertices (those present in a stack). With the variable-length method described in Section 7, we
obtain additional savings by exploiting the adjacency between non-manifold vertices.
Another advantage of our approach is that we define a compressed syntax that can be used to
cluster vertices. This syntax can be used for encoding some changes of topology (such as aggregating
components) in addition to representing singularities.
Abadjev et al. [23] use a technique related to [10,13], and introduce a hierarchical block structure in
the file format for parallel streaming of texture and geometry.
2.2. Manifold-mesh compression
For completeness, we now discuss previous work on compressing manifold meshes, which is related
to our approach of Section 4.
Deering [5] introduced geometry compression methods, originally to alleviate 3D graphics rendering
limitations due to a bottleneck in the transmission of information to the graphics hardware (in the bus).
His method uses vertex and normal quantization, and exploits a mesh buffer to reuse a number of vertices
recently visited and avoid re-sending them. Deering's work fostered research on 3D mesh compression
for other applications. Chow [3] extended [5] with efficient generalized-triangle-strip building strategies.
The Topological Surgery single-resolution mesh compression method of Taubin, Rossignac et al. [19,
20] represents a connected component of a manifold mesh as a tree of polygons (which are each
temporarily decomposed into triangles during encoding and recovered after decoding). The tree is
decomposed into runs, whose connectivity can be encoded at a very low cost. To recover the connectivity
and topology, this tree is completed with a vertex tree, providing information to merge triangle edges.
The method of [19] also encodes the vertex coordinates (geometry) and all property bindings defined in VRML.
Touma and Gotsman [22] traverse a triangular (or polygonal) mesh and remove one triangle at a time,
recording vertex valences 7 as they go and recording triangles for which a boundary is split in two as a
separate case.
Gumhold and Strasser [9] and Rossignac [14] concentrate on encoding the mesh connectivity. They
use mesh traversal techniques similar to [22], but instead of recording vertex valences, consider more
cases depending on whether triangles adjacent to the triangle that is being removed have already been
visited. Another relevant work for connectivity compression is by Denny and Sohler [6].
Li and Kuo's [12] dual graph approach traverses polygons of a mesh in a breadth-first fashion, and
uses special codes to merge nearby (topologically close) polygons (serving the same purpose as the
vertex graph in the approach of [19]) and special commands to merge topologically distant polygons (to
represent a general connectivity, not only a disk).
3. Cutting a non-manifold mesh to produce manifold meshes
We briefly recall here the method of Guziec et al. [8] that we are using. For each edge of the polygonal
mesh, we determine whether the edge is singular (has three or more incident faces) or regular (with two
incident faces). Edges for which incident faces are inconsistently oriented are also considered to be
singular for the purpose of this process of converting a non-manifold to a manifold. For each singular
vertex of the polygonal mesh, the number of connected fans of polygons incident to it is determined. 8
For each connected fan of polygons, a copy of the singular vertex is created (thereby duplicating singular
vertices). The resulting mesh is a manifold mesh. The correspondences between the new set of vertices
comprising the new vertex copies and the old set of vertices comprising the singular vertices is recorded
in a vertex clustering array. This process is illustrated in Fig. 2.
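As an illustration of the first step, here is a minimal C++ sketch (ours, not the code of [8]; it counts incident faces per undirected edge of a triangle mesh and ignores the orientation-consistency test that [8] also performs):

#include <array>
#include <map>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>;  // endpoints in canonical (sorted) order

// Count the number of faces incident to each undirected edge.
std::map<Edge, int> countIncidences(const std::vector<std::array<int, 3>>& tris) {
    std::map<Edge, int> count;
    for (const auto& t : tris)
        for (int i = 0; i < 3; ++i) {
            int a = t[i], b = t[(i + 1) % 3];
            if (a > b) std::swap(a, b);
            ++count[{a, b}];
        }
    return count;
}

// An edge is singular when three or more faces are incident to it.
bool isSingular(const std::map<Edge, int>& count, int a, int b) {
    if (a > b) std::swap(a, b);
    auto it = count.find({a, b});
    return it != count.end() && it->second >= 3;
}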
This method admits a number of variations that moderately alter the original mesh connectivity
(without recovering it after decoding) in order to achieve a decreased size of the bit-stream: polygonal
faces with repeated indices may be removed. Repeated faces (albeit with potentially different properties
attached) may be removed. Finally, the number of singular edges may be reduced by first attempting to
7 Number of incident polygons.
8 A fan of polygons at a vertex is a set of polygons incident to a vertex and connected with regular edges. A singular vertex
is simply a vertex with more than one incident fan.
invert the orientation of some faces in order to reduce the number of edges whose two incident faces are
inconsistently oriented.
An interesting alternative for converting a non-manifold mesh to a manifold mesh by vertex replication
was recently introduced by Rossignac and Cardoze [16]. Rossignac and Cardoze minimize the number
of vertex replications when converting non-manifold solids to manifold solid representations. In the
Rossignac and Cardoze method, an edge cannot be uniquely identified with a pair of vertices: for instance
two edges (and four faces) can share the same two endpoints. In the method of Section 4 however, we
have used the assumption that an edge could be uniquely identified using two vertices, which allows
a simple representation and encoding for a vertex graph, and does not require to output a list of edges
when decoding the compressed representation of the mesh: the edges are implicitly represented by the
polygon-vertices incidence relation.
4. Compressing manifold meshes
The method described in this section extends the Topological Surgery method [19], and is explained
in detail in [11]. In [19] the connectivity of the mesh is represented by a tree spanning the set of vertices,
a simple polygon, and optionally a set of jump edges. To derive these data structures a vertex spanning
tree is first constructed in the graph of the mesh and the mesh is cut through the edges of the tree. If
the mesh has a simple topology, the result is a simple polygon. However, if the mesh has boundaries
or a higher genus, additional cuts along jump edges are needed to obtain the simple polygon. This
simple polygon is then represented by a triangle spanning tree and a marching pattern that indicates
how neighboring triangles are connected to each other. The connectivity is then encoded as a vertex
tree, a simple polygon and jump edges. In this paper the approach is slightly different. First, a triangle
spanning tree is constructed. Then the set of all edges that are not cut by the triangle tree are gathered
into a graph. This graph, called Vertex Graph, spans the set of vertices, and may have cycles. Cycles are
caused by boundaries or handles (for higher genus models). The vertex graph, triangle tree, and marching
pattern are sufficient to represent the connectivity of the mesh.
In [19], geometry and properties are coded differentially with respect to a prediction. This prediction
is obtained by a linear combination of ancestors in the vertex tree. The weighting coefficients are
chosen to globally minimize the residues, i.e., the difference between the prediction and the actual
values. In this paper the principle of linear combination is preserved but the triangle tree is used
instead of the vertex tree for determining the ancestors. Note that the parallelogram prediction [22] 9
is a special case of this scheme, and is achieved through the appropriate selection of the weighting
coefficients.
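For illustration, the parallelogram rule amounts to the following (a standard formulation of [22], not code from [11]; Vec3 is an invented helper):

struct Vec3 { float x, y, z; };

// Parallelogram prediction: the free vertex of the new triangle is
// predicted as a + b - c, where (a, b) is the edge shared with the
// already-decoded triangle and c is that triangle's opposite vertex.
// The weights (1, 1, -1) are one instance of the linear-combination
// scheme described above; the decoder adds the decoded residue.
Vec3 predictParallelogram(const Vec3& a, const Vec3& b, const Vec3& c) {
    return { a.x + b.x - c.x, a.y + b.y - c.y, a.z + b.z - c.z };
}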
Coding efficiency is further improved by the use of an efficient adaptive arithmetic coder [18].
Arithmetic coding is applied to all data, namely connectivity, geometry and properties.
Finally, the data is ordered so as to permit efficient decoding and on-the-fly rendering. The vertex
graph and triangle tree are put first into the bit stream. The remaining data, i.e., marching pattern,
geometry, and properties, is referred to as triangle data and is put next into the bit stream. It is organized
on a per-triangle basis, following a depth-first traversal of the triangle tree. Therefore a new triangle
9 Which extends a current triangle to form a parallelogram, with the new parallelogram vertex being used as a predictor.
may be rendered every time a few more bits, corresponding to the data attached to the triangle, are
received.
5. Representing the vertex clustering using stitches
5.1. Decoding order and father-child relationship: v_father
The methods developed in the present paper rely on the availability of two main elements: (1) a
decoding order for the mesh vertices, and (2) for the variable-length method described in Section 7, a
father-child relationship between vertices allowing to define paths of vertices. We next suppose that this
father-child relationship is recorded in an array called v_father, representing a function
{1, …, n} → {1, …, n}, where n is the number of vertices.
All of the manifold mesh compression methods reviewed in Section 2 can provide these two elements,
which makes the methods of this paper widely applicable: an order in which vertices are decoded is
always available, and a father-child relationship can by default be realized by each vertex pointing to the
vertex decoded just before as its father (the first vertex being its own father).
Our assumption is that the information contained in v_father is implicit (provided for free by the
manifold mesh compression method), and requires no specific encoding. In the following we assume
without loss of generality that vertices are enumerated in the decoder order of traversal. (If this is not the
case, we can perform a permutation of the vertices.)
Preferably, the v_father array will contain additional information, that can be exploited by our
algorithms. For instance, Fig. 5 shows v_father for the example of Fig. 2, obtained using the Topological
Surgery method [11]. In the particular case of Topological Surgery, v_father represents a digraph whose
nodes are mesh vertices, edges are mesh edges, and such that each node has out-degree one: v_father is a
forest that also admits self-loops.
Fig. 5. (a) v_father (father-child) relationship for the example of Fig. 2, as generated by the Topological Surgery
method. Since we are only concerned with topology, in this figure and the following figures, we are representing
the mesh of Fig. 2 in wireframe mode, where dashed lines represent edges shared by two back-facing triangles.
(b) In the particular case of Topological Surgery, v_father is a forest that also admits self-loops. In the following,
we will omit to draw self-loops.
5.2. Stitches
A stitch is an operation that clusters a given number of vertices along two specified (directed) paths
of vertices, wherein a path of vertices is defined by following a father-child relationship. There is no
ambiguity in following this path, which corresponds to going towards the root of a tree. A stitch as
defined above is also called a forward stitch. A stitch is specified by providing a pair of vertices and a
length. The vertex clustering is accomplished by performing a series of stitches.
We may sometimes want to go down the father-child tree as opposed to up the tree (towards the root).
This is a priori ambiguous, but the following definition removes the ambiguity: a reverse stitch works
by starting with two vertices, following the path defined by the father-child relationship for the second
vertex and storing all vertices along the path in a temporary structure, and clustering vertices along the
path associated with the first vertex together with the stored vertices visited in reverse order.
We introduce two methods, called Stack-Based and Variable-Length, for representing (or decompos-
ing) the vertex clustering in a series of stitches. These are two alternate methods that the encoder should
choose from. The latter method presents more challenges but allows a much more compact encoding,
as reported in Section 12. We define a bit-stream syntax that supports both possibilities, and it is not
required that the encoder use the more advanced feature.
5.3. Vertex clustering array: v_cluster
Both the stack-based and variable-length methods take as input a vertex clustering array, which for
convenience we denote by v_cluster ({1, …, n} → {1, …, n}).
To access vertices through v_cluster, we propose the convention that v_cluster always indicate the
vertex with the lowest decoder order: supposing that vertices 1 and 256 belong to different components
but cluster to the same vertex, it is better to write v_cluster[1] = v_cluster[256] = 1 than v_cluster[1] =
v_cluster[256] = 256. As the encoder and decoder build components gradually, at some point Vertex 1
will be a physical vertex of an existing component, while Vertex 256 will be in a yet-to-be-encoded
component. Accessing Vertex 1 through Vertex 256 would increase code complexity.
The stack-based and variable-length methods are developed in the following sections. The variable-length
method exploits the information in the v_father forest while the stack-based method does not.
The stack-based method can be explained and implemented without requiring the notion of stitches: the
stitches used for the stack-based method all have zero length, and associate a vertex of higher decoder
order with a vertex of lower decoder order. However, stitches are at the core of the variable-length method,
which is more efficient. It is useful to combine the two methods in a single framework and using a single
compressed syntax.
6. Stack-based method
We wish to encode efficiently m clusters that affect r replicated vertices, while the rest of the n − r
vertices are not affected. (We can also say that their cluster size is 1.) The main idea behind this method
is that provided that we can keep a stack of cluster representatives, only log m bits will be necessary for
each of the r vertices to indicate which cluster they belong to. We can compress this information even
Fig. 6. Stack-based method applied to the example of Fig. 1. To represent stitches pictorially, we are using lassos
and wiggles.
further if only a portion of the m cluster representatives are present in a stack at a given time, by using as
many indices as elements present in the stack.
For this purpose, we use a stack-buffer, similarly to Deering [5] and other manifold mesh compression
modules (see [11]). A stack would only support push and pop operations. We denote by stack-
buffer a data structure that supports the get operation as well as direct indexing into the stack for get
and pop operations.
We push, get and pop in the stack-buffer the vertices that cluster together. Connected components can
be computed for the vertex clustering, such that two vertices belong to the same component if they cluster
to the same vertex. In the decoding order, we associate a stitching command to each vertex. If the size
of the vertex's component is one, the command is NONE. (This is one of the n − r unaffected vertices.)
Otherwise, the command is either PUSH, or GET(v), or POP(v) depending on the decoding order of the
vertices in a given component, where v is an index local to the stack-buffer. The vertex that is decoded
first is associated with a PUSH; all subsequently decoded vertices are associated with a GET(v) except
for the vertex decoded last, that is associated with a POP(v), whereby v is removed from the stack-buffer.
For the example of Fig. 1 we illustrate the association of commands to vertices in Fig. 6. Each vertex is
labeled using its decoder order, and the corresponding command is displayed in the vicinity of the vertex.
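A possible encoder-side assignment of these commands is sketched below in C++ (our illustration; the names are invented, and the computation of the stack-buffer index v is omitted):

#include <vector>

enum class Cmd { NONE, PUSH, GET, POP };

// clusterOf[v]   : representative of the cluster of vertex v,
//                  with vertices numbered in decoder order.
// clusterSize[r] : number of vertices in the cluster represented by r.
std::vector<Cmd> assignCommands(const std::vector<int>& clusterOf,
                                const std::vector<int>& clusterSize) {
    const int n = static_cast<int>(clusterOf.size());
    std::vector<Cmd> cmd(n, Cmd::NONE);
    std::vector<int> seen(n, 0);               // visits per cluster so far
    for (int v = 0; v < n; ++v) {
        const int rep = clusterOf[v], sz = clusterSize[rep];
        if (sz <= 1) continue;                 // trivial cluster: NONE
        const int k = ++seen[rep];
        if (k == 1)       cmd[v] = Cmd::PUSH;  // first vertex decoded
        else if (k == sz) cmd[v] = Cmd::POP;   // last one frees the slot
        else              cmd[v] = Cmd::GET;
    }
    return cmd;
}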
In order to achieve this local indexing to the set of clusters and thus avoid incurring a log n cost when
encoding vertex repetitions, we need to provide a code (stitching command) for each vertex: NONE,
PUSH, GET or POP. As discussed in Section 9, the NONE command uses one bit, which may be further
compressed using arithmetic coding [18] (when considering the sequence of commands). An alternative
to this one-bit-per-vertex cost would be to provide a list of cluster representatives, each requiring a log n
index to encode. We have not selected this approach.
In addition to providing lower bit rates, another advantage of our approach is that the command
that is associated with each vertex may be used to decide whether or not to encode its coordinates
and properties: geometry and properties may only be encoded for NONE and PUSH vertices. We can
thus easily interleave connectivity and geometry in the bitstream, allowing incremental decoding and
rendering (see Section 4). This can be done without the overhead of encoding a table of clusters.
One drawback of the stack-based method is that it requires to send one stitching command different
from NONE (either PUSH, GET or POP) for each of the r vertices that are repeated (that cluster to a
singular vertex). In the next section, we explain how the variable-length method exploits the situation
when cluster members are adjacent in the v_father forest in order to replace as many PUSH, GET and
POP commands as possible with a NONE command (that requires only one bit).
7. Variable-length method
7.1. Principle of the method
If we take a closer look at Fig. 5 we realize that the clusters (2,5) and (1,3) are such that 1 is the father
of 2 and 3 is the father of 5. This is an example of a situation where we can add length to a stitch as
defined in Section 5: stitch vertex 5 to vertex 2, with a stitch length of 1. A pictorial representation of this
stitch is provided in Fig. 7(a). We will thus push 2 in the stack; Vertex 5 will be associated with a GET(2)
command and a stitch length of 1. This will have the effect of fetching the fathers of 2 and 5, 1 and 3,
and clustering them. (If the length was 2, we would fetch the fathers of 1 and 3 and cluster them as well,
and so on.) The advantage of this is that vertex 1 and vertex 3 do not require a specific PUSH, GET, or
POP command: we can associate them with a NONE command, which requires significantly fewer bits.
With a suitable v_father father-child relationship, we expect the above situation to occur frequently. In
particular, in Topological Surgery, all edges of a mesh boundary except one belong to v_father.Sincewe
expect a lot of stitching to occur along boundaries, the paths defined by v_father will be very valuable.
Fig. 7. Three stitches of variable length and direction encode the vertex clustering of Fig. 3: (a) length 1,
(b) length 0, (c) length 2 and reverse direction.
Using the example of Fig. 3, we illustrate in Fig. 7 how variable length stitches can be used to represent
the complete vertex clustering: three stitches are applied to represent v_cluster: one (forward) stitch of
length 1 that was discussed above, one stitch of length zero (6, 0), and one reverse stitch of length 2
(10, 6). According to the definition provided in Section 5, the reverse stitch is performed as follows: we
retrieve the father of 6, 5, and the father of 5, 3. We cluster 10 with 3, then the father of 10, 9, with 5 and,
finally, the father of 9, 7, with 6.
In the remainder of this section, we explain how to discover such stitches from the knowledge of the
v_cluster and v_father arrays.
7.2. Discovering the variable-length stitches
A good working hypothesis states that: the longer the stitches, the fewer the commands, and the smaller
the bit-stream size. We propose a greedy method that operates as follows. We generate the longest stitch
starting at each vertex, and we perform the stitches in order of decreasing length. The justification of
this is that, assuming all stitches have the same direction, redundant encodings can be avoided. This is
illustrated in Fig. 8: when a stitch is not extended to its full possible length (stitch 1), another stitch
(stitch 2) could encode redundant information, unless it is broken up into smaller stitches. However, if all
stitches are always extended to their full possible length, subsequent stitches may simply be shortened if
necessary to avoid redundancy (instead of broken up).
We can thus safely apply all the (forward) stitches one after the other in order of decreasing
length; for each stitch, we simply recompute its length appropriately (for instance, stitch 2 in Fig. 8 should be of
length 1, and not 3). (When reverse stitches are introduced, however, the situation is more complex, as
illustrated in Fig. 12 and discussed in Section 8.)
The method first computes for each vertex that clusters to a singular vertex the longest possible
forward stitch starting at that vertex: a length and one or several candidate vertices to be stitched with are
determined. As illustrated in Fig. 9(a), starting with a vertex v0, v0 2f1;:::;ng, all other vertices in the
same cluster are identified, and v_father is followed for all these vertices. From the vertices thus obtained,
the method retains only those belonging to the same cluster as v_fatherTv0U. This process is iterated until
(a) (b) (c)
Fig. 8. Justification of the longest-stitch-first strategy: Supposing stitch 2 is performed after stitch 1 (a), the stitching
information in light gray (b) will be encoded twice. (c) With our strategy this cannot happen, since stitch 1 can,
and will, be prolonged to a stitch of length 4, and stitch 2 will be shortened to a length of 1.
Fig. 9. Computing the longest possible stitch starting at a vertex v0. Ovals indicate clusters. (a) Forward stitch of
length 3 with v1. (b) Backward stitch of length 4 with v2.
the cluster contains a single vertex. The ancestors of vertices remaining in the previous iteration (vf is
the successor of v0 ending the stitch in Fig. 9(a)) are candidates for stitching (v1 in Fig. 9(a)). Special
care must be taken with self-loops in v_father in order for the process to finish and the stitch length to
be meaningful. Also, in our implementation we have assumed that v_father did not have loops (except
self-loops). In case v_father has loops we should make sure that the process finishes.
Starting with vf, the method then attempts to find a reverse stitch that would potentially be longer. This
is illustrated in Fig. 9(b), by examining vertices that cluster with v_father[vf], such as v2. The stitch can
be extended in this way several times. However, since nothing prevents a vertex v and its v_father[v] from
belonging to the same cluster, we must avoid stitching v0 with itself.
All potential stitches are inserted in a priority queue, indexed with the length of the stitch. The method
then empties the priority queue and applies the stitches in order of decreasing length until the vertex
clustering is completely represented by stitches.
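The search for the longest forward stitch can be sketched as follows in C++ (our simplification: the walk stops at self-loops and the reverse-stitch extension of Fig. 9(b) is omitted):

#include <cstddef>
#include <vector>

// father[v] : v_father; rep[v] : cluster representative of v;
// members[r]: the vertices of the cluster represented by r.
// Returns the stitch length and, via 'partner', a vertex v1 such that
// the forward stitch (v0, v1) of that length reproduces the clustering
// along the two father paths.
int longestForwardStitch(int v0,
                         const std::vector<int>& father,
                         const std::vector<int>& rep,
                         const std::vector<std::vector<int>>& members,
                         int& partner) {
    std::vector<int> walk, start;       // walking copies and their origins
    for (int w : members[rep[v0]])
        if (w != v0) { walk.push_back(w); start.push_back(w); }
    partner = walk.empty() ? -1 : start[0];
    int length = 0, a = v0;
    while (!walk.empty() && father[a] != a) {   // stop at self-loops
        const int fa = father[a];
        std::vector<int> keepW, keepS;
        for (std::size_t i = 0; i < walk.size(); ++i) {
            const int fw = father[walk[i]];
            // keep candidates whose father lands in the same cluster as
            // father[a], without degenerating into a's own path
            if (fw != walk[i] && fw != fa && rep[fw] == rep[fa]) {
                keepW.push_back(fw);
                keepS.push_back(start[i]);
            }
        }
        if (keepW.empty()) break;
        walk.swap(keepW);
        start.swap(keepS);
        a = fa;
        ++length;
        partner = start[0];
    }
    return length;
}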
The next section discusses details of the variable-length method, that are important for a successful
implementation. These details are not necessary, however, to understand the rest of this paper starting
with Section 9.
8. Details of the variable-length method
8.1. Decoder order of connected components
The representation method must respect and use the decoder order of connected components of the
manifold mesh. As mentioned in Section 1, independently of the number of vertices that cluster to a given
vertex, geometry and properties for that vertex are encoded only once, specifically for the first vertex of
the cluster that is decoded. Connectivity, stitches, geometry and properties are encoded and decoded on
Fig. 10. Potential problems with variable-length stitches. (a) The clustering between components 1 and 2 is decoded
only when component 3 is. (b) To successfully encode these two stitches we must substitute vertices 12 with 7 in
the first one. (c) No possible re-combination using endpoints 3, 7 and 12 is possible.
a component-per-component basis to allow progressive decoding and visualization (see Fig. 4(c)). This
implies that after decoding stitches corresponding to a given component, say component m, the complete
clustering information (relevant portion of v_cluster) for component m as well as between component m
and the previously decoded components 1, …, m − 1 should be available. If this is not so, there is a
mismatch between the geometry and properties that were encoded (too few) and those that the decoder
is trying to decode, with potentially adverse consequences.
The stack-based method generates one command per vertex, for each cluster that is not trivial (cardinality
larger than one), and will have no problem with this requirement. However, when applying the variable-length
search for longest stitches on all components together, the optimum found by the method could
be as in Fig. 10(a), where three components may be stitched together with two stitches, one involving
components 1 and 3 and the second involving components 2 and 3.
Assuming that the total number of manifold components is c, our solution is to iterate on m, the
component number in decoder order, and for m between 2 and c, perform a search for longest stitches on
components 1, 2, …, m.
8.2. Decoder order of vertices
The longest stitch cannot always be performed, because of incompatibilities with the decoder order
of vertices: a vertex can only be stitched to one other vertex of lower decoder order. The example in
Fig. 10(b) illustrates this: the (12,3) and (12,7) stitches cannot be both encoded. However, the (12,3)
stitch may be substituted with (7,3) which is an equally long stitch, and therefore listed in the priority
queue. The case of Fig. 10(c) requires more work, because no possible re-combination is possible using
endpoints 3, 7 and 12.
Since problems only involve vertices that start the stitch, it is possible to split the stitch in two stitches,
one being one unit shorter and the other being of length zero. Both stitches are entered in the priority
queue.
Fig. 11. Vertices marked with an o and an x may be stitched together, since this corresponds to the longest
possible stitches. However, the complete clustering of four vertices (circled with the black curve) is not
completely represented in this fashion.

Fig. 12. A reverse stitch (between vertices marked with an x) may be interrupted, because of a previous
forward stitch, and vice versa.
For stitches of length zero, the incompatibility with the decoder order of vertices can always be
resolved. In Fig. 10(b), for stitching 3 vertices, we can consider three stitching pairs, only one of which
is being rejected. Since for stitches of length zero the direction of the stitch does not matter, all other
stitching pairs are valid.
8.3. Generating enough stitches
The method generates the longest stitch starting at each vertex. It is possible that this may not provide
enough stitches to encode all the clusters. This is illustrated in Fig. 11. In this case the method can finish
encoding the clusters using zero-length stitches similarly to the stack-based method.
8.4. Competing forward and reverse stitches
Finally, because forward and reverse stitches compete with each other, the situation illustrated in
Fig. 12 may occur: an isolated pair of vertices along a forward stitch may have been clustered by the
operation of a reverse stitch that was performed earlier. To avoid redundancy, as the pair of vertices was
already clustered, no subsequent stitch should incorporate them. Our method will detect this situation
and split the stitch performed last in two shorter stitches.
Once a working combination of stitches is found, the last step is to translate them to stitching
commands. This is the object of the next section which also specifies a bit-stream syntax.
9. Stitches encoding
To encode the stitching commands in a bit-stream, we propose the following syntax, that accommodates
commands generated by both the stack-based and variable-length methods. To specify whether
there are any stitches at all in a given component, a Boolean flag has_stitches is used. In addition to
the PUSH, GET and POP commands, a vertex may be associated with a NONE command, as discussed
above. In general, because a majority of vertices are expected to be non-singular, most of the commands
should be NONE. Three bits called stitching_command, pop_or_get and pop are used for coding the
commands NONE, PUSH, GET and POP as shown in Fig. 13.
A stitch_length unsigned integer is associated with a PUSH command. A stack_index unsigned
integer is associated with GET and POP commands. In addition, GET and POP have the following
parameters: differential_length is a signed integer representing a potential increment or decrement with
respect to the length that was recorded with a previous PUSH command or updated with a previous GET
and POP (using differential_length). push_bit is a bit indicating whether the current vertex should be
pushed in the stack, 10 and reverse_bit indicates whether the stitch should be performed in a reverse
fashion.
We now explain how to encode (translate) the stitches obtained in the previous sections in compliance
with the syntax that we defined. Both encoder and decoder maintain an anchor_stack across manifold
connected component for referring to vertices (potentially belonging to previous components). For the
stack-based method, the process is straightforward: in addition to the commands NONE, PUSH, GET
and POP encoded using the three bits stitching_command, pop_or_get and pop, a PUSH is associated
with stitch_length D 0. GET and POP are associated with a stack_index that is easily computed from
the anchor_stack.
For the variable-length method, the process can be better understood by examining Fig. 14. In
Fig. 14(a) we show a pictorial representation of a stitch. A vertex is shown with an attached string of
edges representing a stitch length, and a stitch_to arrow pointing to an anchor. Both vertex and anchor
are represented in relation to the decoder order of (traversal of) vertices.
The stitch_to relationship defines a partition of the vertices associated with stitching commands. In
Fig. 14(b) we isolate a component of this partition. For each such component, the method visits the
vertices in decoder order (v0, v1, v2, v3 in Fig. 14(b)). For the first vertex, the command is a PUSH.
Subsequent vertices are associated with a GET or POP depending on remaining stitch_to relationships;
for vertices that are also anchors, a push_bit is set. Incremental lengths and reverse_bits are also
computed. Fig. 14(c) shows the commands associated with Fig. 14(b). For the example of Fig. 1 that we
have used throughout this paper, the final five commands different from NONE are gathered in Table 2.
After the commands are in this form, the encoder operates in a manner completely symmetric to the
decoder which is described in detail in Section 10, except that the encoder does not actually perform the
stitches while the decoder does. Fig. 15 lists pseudo-code for the encoder.
10 Because GET and POP commands may have an associated push_bit, there are fewer PUSH than POP commands (although this seems
counter-intuitive). We have tried exchanging the variable length codes for PUSH and POP, but did not observe smaller bit-streams
in practice; we attributed this to the arithmetic coder.
Fig. 13. Syntax for stitches. Xs indicate variables associated with each command.
Fig. 14. Translating stitches to the bit-stream syntax.
Table 2
Five commands (different from NONE) encoding the complete clustering of Fig. 3. The stack-based
encoding shown in Fig. 6 requires nine. Columns: Vertex | Command | stitch_length | stack_index |
differential_length | push_bit | reverse_bit
encode_stitches_for_a_connected_component(anchor_stack){
  encode has_stitches;
  if(has_stitches==true){
    for(i=nV0; i<nV1; i++){ // nV0 is the first vertex
      // of the current component, and nV1-1 is the last vertex
      encode stitching_command;
      if(stitching_command){
        encode pop_or_get;
        if(pop_or_get){
          encode pop;
          encode stack_index;
          retrieve stitching_anchor from anchor_stack;
          if(pop)
            remove stitching_anchor from anchor_stack;
          encode differential_length;
          if(differential_length!=0)
            encode differential_length_sign;
          encode push_bit;
          if(push_bit)
            push i to the back of anchor_stack;
          retrieve stitch_length at stitching_anchor;
          total_length = stitch_length + differential_length;
          if(total_length > 0)
            encode reverse_bit;
          save total_length at stitching_anchor;
        } else { // PUSH
          encode stitch_length;
          push i to the back of anchor_stack;
        }
      }
    } // end for
  }
}
10. Stitches decoding
The decoder reconstructs the v_cluster information that should be applied to vertices to reconstruct
the polygonal mesh. The following pseudo-code shown in Fig. 16 summarizes the operation of the
stitches decoder: if the Boolean has_stitches in the current connected component is true, then for each
vertex of the current component in decoder order, a stitching command is decoded. If the Boolean
value stitching_command is true, then the Boolean value pop_or_get is decoded; if the Boolean value
pop_or_get is false, an unsigned integer is decoded, and associated to the current vertex i as an anchor
(to stitch to). The current vertex i is then pushed to the back of the anchor_stack.Ifpop_or_get is true,
then the Boolean value pop is decoded, followed with the unsigned integer value stack_index.
decode_stitches_for_a_connected_component(anchor_stack){
  decode has_stitches;
  if(has_stitches==true){
    for(i=nV0; i<nV1; i++){ // nV0 is the first vertex
      // of the current component, and nV1-1 is the last vertex
      decode stitching_command;
      if(stitching_command){
        decode pop_or_get;
        if(pop_or_get){
          decode pop;
          decode stack_index;
          retrieve anchor from anchor_stack;
          if(pop)
            remove anchor from anchor_stack;
          decode differential_length;
          if(differential_length!=0)
            decode differential_length_sign;
          decode push_bit;
          if(push_bit)
            push i to the back of anchor_stack;
          retrieve stitch_length at anchor;
          total_length = stitch_length + differential_length;
          if(total_length > 0)
            decode reverse_bit;
          stitch i to anchor for length total_length, in reverse if(reverse_bit);
        } else { // PUSH
          decode stitch_length;
          push i to the back of anchor_stack;
          save stitch_length at anchor i;
        }
      }
    } // end for
  }
}
Using stack_index, an anchor is retrieved from the anchor_stack. This is the anchor that the current
vertex i will be stitched to. If the pop Boolean variable is true, then the anchor is removed from the
anchor_stack. Then, an integer differential_length is decoded as an unsigned integer. If it is different
from zero, its sign (Boolean differential_length_sign) is decoded, and is used to update the sign of
differential_length. A push_bit Boolean value is decoded. If push_bit is true, the current vertex i is
pushed to the back of the anchor_stack. An integer stitch_length associated with the anchor is retrieved.
A total_length is computed by adding stitch_length and differential_length; if total_length is greater
than zero, a reverse_bit Boolean value is decoded. Then the v_cluster array is updated by stitching the
current vertex i to the stitching anchor with a length equal to total_length and potentially using a reverse
stitch. The decoder uses the v_father array to perform this operation. To stitch the current vertex i to the
stitching anchor with a length equal to total_length, starting from both i and the anchor at the same time,
we follow vertex paths starting with both i and the anchor by looking up the v_father entries total_length
times, and for each pair of corresponding entries (i, anchor), (v_father[i], v_father[anchor]), (v_father[v_father[i]],
v_father[v_father[anchor]]), …, we record in the v_cluster array that the entry with the largest decoder
order should be the same as the entry with the lowest decoder order. For instance, if (j > k), then
v_cluster[j] = k, else v_cluster[k] = j. v_cluster defines a graph that is a forest. Each time an entry in
v_cluster is changed, we perform path compression on the forest by updating v_cluster such that each
element refers directly to the root of the forest tree it belongs to.
If the stitch is a reverse stitch, then we first follow the v_father entries starting from the anchor for a
length equal to total_length (from vertices 6 through 3 in Fig. 7), recording the intermediate vertices in a
temporary array. We then follow the v_father entries starting from the vertex i and for each corresponding
entry stored in the temporary array (from the last entry to the first entry), we update v_cluster as explained
above.
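Concretely, the stitch operation and the v_cluster update could look as follows in C++ (our illustration of the text above; v_cluster is initialized with v_cluster[v] = v, and find() performs the path compression just mentioned):

#include <algorithm>
#include <vector>

// Root (lowest decoder order) of v's cluster, with path compression.
int find(std::vector<int>& v_cluster, int v) {
    while (v_cluster[v] != v) {
        v_cluster[v] = v_cluster[v_cluster[v]];
        v = v_cluster[v];
    }
    return v;
}

// Cluster j and k; the vertex of lowest decoder order becomes the root.
void unite(std::vector<int>& v_cluster, int j, int k) {
    j = find(v_cluster, j);
    k = find(v_cluster, k);
    if (j == k) return;
    if (j > k) v_cluster[j] = k; else v_cluster[k] = j;
}

void stitch(std::vector<int>& v_cluster, const std::vector<int>& v_father,
            int i, int anchor, int total_length, bool reverse) {
    if (!reverse) {                       // forward stitch
        for (int s = 0; s <= total_length; ++s) {
            unite(v_cluster, i, anchor);
            i = v_father[i];
            anchor = v_father[anchor];
        }
    } else {                              // reverse stitch
        std::vector<int> path;            // anchor-side path ...
        for (int s = 0, v = anchor; s <= total_length; ++s, v = v_father[v])
            path.push_back(v);
        std::reverse(path.begin(), path.end());   // ... visited in reverse
        for (int s = 0, v = i; s <= total_length; ++s, v = v_father[v])
            unite(v_cluster, v, path[s]);
    }
}

On the example of Fig. 7(c), stitch(v_cluster, v_father, 10, 6, 2, true) clusters (10, 3), (9, 5) and (7, 6), as described above.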
11. Analysis
11.1. Correctness
We wish to determine if both the stack-based and variable-length methods can be used in combination
with a manifold mesh encoding technique to encode any non-manifold mesh.
We first observe that a non-manifold mesh can always be converted to a set of manifold meshes by
cutting through singular vertices and edges (including edges for which incident faces have an inconsistent
orientation). The cut is performed by duplication of vertices. The inverse operation is to aggregate
(cluster) the vertices that were duplicated. We may thus determine whether any clustering of some or
all of the vertices of a mesh can be represented with either stack-based or variable-length method.
The stack-based method may clearly represent any clustering by virtue of the construction of Section 6
which we recall here briefly: a clustering of vertices is a partition of the set of vertices. Vertices may be
globally enumerated. Inside each component of the partition, we may enumerate the vertices according
to the global order. The first vertex is associated with a PUSH, the last vertex with a POP, and all
intermediate vertices with a GET.
The variable-length method may also represent any clustering, for the simple reason that the variable-length
method is a generalization of the stack-based method (if all stitches have a length of zero).
11.2. Computational complexity
We focus here on the computational complexity associated with the encoding and decoding of stitches;
studying the computational complexity of manifold mesh encoding and decoding belongs to the relevant
publications [1,9,12,14,22], and we only summarize the current analysis here: the authors of the above-
referenced publication report a computational complexity that is linear in the number of mesh vertices
and triangles for most methods, with for some methods a non-linear storage cost as pointed out in [17].
We now concentrate first on the computational complexity of encoding stitches, followed with the
computational complexity of decoding.
The complexity is determined by the use of a stack-buffer. A subset of m vertices among the n vertices
of the mesh is replicated, and there are a total of r replications. These r vertices are pushed and popped
inside a stack-buffer. We assume that at a given time, no more than k vertices are stored in the stack
buffer. We necessarily have k ≤ m. We need to determine the complexity of maintaining a stack-buffer
allowing elements to be indexed with indices between 0 and k − 1. As vertices are added to the stack-buffer,
we can use the depth in the stack as an index for vertices. However, when vertices are removed from the
stack, we need to reassign unused indices without perturbing the indices of vertices that are still present
in the stack (which need to be accessed directly using their original index). A queue may be used to track
unused indices and reassign them (in any order). The cost of inserting or removing an element from the
queue is constant. However, there is no guarantee to always assign the smallest index possible, which
could have negative effects on the size of the encoding. This area is open for further investigation. We
have thus established that stack-buffer operations may be performed in constant time.
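Such a stack-buffer might be sketched as follows in C++ (our illustration; class and member names are invented):

#include <queue>
#include <vector>

// A stack-buffer with constant-time operations: each live element keeps
// a stable slot index in [0, k), and freed indices are recycled through
// a queue, in arbitrary order, as discussed above.
class StackBuffer {
    std::vector<int> slots;      // slot -> vertex (freed slots are stale)
    std::queue<int> freeSlots;   // indices available for reuse
public:
    int push(int vertex) {       // returns the slot index assigned
        int s;
        if (!freeSlots.empty()) {
            s = freeSlots.front();
            freeSlots.pop();
        } else {
            s = static_cast<int>(slots.size());
            slots.push_back(0);
        }
        slots[s] = vertex;
        return s;
    }
    int get(int slot) const { return slots[slot]; }  // direct indexing
    int pop(int slot) {          // direct indexing, as GET/POP require
        freeSlots.push(slot);
        return slots[slot];
    }
};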
However, indices between 0 and k − 1 must be inserted in the bit-stream. Since we cannot assume any
particular coherence between the indices, the worst-case cost of encoding these indices will be O(log k).
Thus, the cost of processing all r vertices subjected to clustering will be bounded by O(r log k), and
thus O(r log m) in the worst case.
In the case of the variable-length method, the process of determining the longest stitch starting at each
of the m vertices has a worst-case complexity of O(m²), while the process of sorting stitches has no
non-linear contribution (sorting integers can be done in linear time using Bucket Sort [4]).
In summary, encoding can be performed in O(n + m log m) time for the stack-based method, while the
worst case for the variable-length method is bounded by O(n + m²).
In terms of decoding, except for the caveat formulated below, there is no difference in worst-case
complexity between the two methods, which is O(n + m log m), where again n is the number of vertices
of the mesh and m is the number of vertices amongst n that are subjected to clustering.
In the case of the variable-length method, we note that a bad encoder could provoke situations as
depicted in Fig. 8(b), where the clustering between vertices is redundantly encoded. We have assumed
for our complexity estimate above that the encoder would not do this. However, the bit-stream syntax
does not provide guarantees against this behavior.
11.3. Storage cost
The worst-case storage cost associated with the stitching is as follows. For the encoding, the stack-based
method has a worst-case storage cost of m (maximum depth of the stack). The variable-length
method has a worst-case storage cost of O(mc), where c is the maximum number of vertices in a cluster
as defined in Section 1. This cost corresponds to the storage of the set of candidate stitches.
For the decoding, both methods have a worst-case storage cost of m.
12. Experimental results
12.1. Test meshes
We report detailed data on a set of 14 meshes, and at the same time general statistics on a set of 303
meshes that were used for validation experiments during the MPEG-4 standardization process [11].
The 14 meshes are illustrated in Fig. 17. They range from having a few vertices (5) to about 65,000.
The meshes range from having very few non-manifold vertices (2 out of 5056 or 0.04%) to a significant
Fig. 17. Test meshes.
proportion of non-manifold vertices (up to 88% for the Sierpinski.wrl model). One mesh was manifold
and all the rest of the meshes were non-manifold. (The manifold mesh will be easily identified by the
reader in Table 3.) One model (Gen_nm.wrl) had colors and normals per vertex. It was made non-manifold
by adding triangles. The Engine model was originally manifold, and made non-manifold by
applying a clustering operation as described in [15]. We synthesized the models Planet0.wrl, Saturn.wrl,
Sierpinski.wrl, and Tetra2nm.wrl. All other models were obtained from various sources and were originally non-manifold.
12.2. Test conditions
The following quantization parameters were used: geometry (vertex coordinates) was quantized to
10 bits per coordinate, colors to 6 bits per color, and normals to 10 bits per normal. The coordinate
prediction was done using the parallelogram prediction [22], the color prediction was done along the
triangle tree, and there was no normal prediction. Using 10 bits per coordinate, there are no noticeable
differences between the original and decoded models in most cases. In the case of the Engine model,
however, some quantization artifacts are visible when using 10 bits per coordinate, which disappear when
using 16 bits per coordinate. We illustrate two of the larger test models before compression and after
decompression in Figs. 18 and 19.
12.3. Test results
Table 3 provides compressed bit-stream sizes for the 14 meshes and compares the bit-stream sizes
when meshes are encoded as non-manifolds or as manifolds (i.e., without the stitching information, and
with redundant coordinates for the vertices that are repeated). There is an initial cost for each mesh on
the order of 40 bytes or so, independently of the number of triangles and vertices.
In the case of smooth meshes, the connectivity coding, prediction, and arithmetic coding seem to reduce
the size of the quantized vertices by a factor of about three: for instance, starting with 10 bits per vertex of quantization,
a typical bit-stream size would be on the order of 10 bits per vertex and 5 bits per triangle (assuming a
manifold mesh without too many boundaries). In the case of highly non-manifold or non-smooth meshes,
starting with 10 bits per vertex of quantization, a typical bit-stream size would be on the order of 20 bits
per vertex and 10 bits per triangle (smooth meshes seem to compress roughly twice as much).

Fig. 18. (a) Symmetric-brain model before compression. (b) After decompression. Starting with 10 bits of quantization
per vertex coordinate, the complete compressed bit-stream uses 17.2 bits per vertex.

Fig. 19. (a) Engine model before compression. (b) After decompression (starting with 10 bits of quantization per
vertex coordinate). Some artifacts may be seen at this level of quantization. (c) After decompression (16 bits). No
visible artifacts remain.
The previous estimates apply to both manifold and non-manifold compression. Table 3 indicates that
when compressing a non-manifold as a non-manifold (i.e., recovering the connectivity using stitches) the
total bit-stream size can be reduced by up to 20% (21% for the Tetra2nm.wrl model). This is because
when encoding stitches, vertices that will be stitched together are encoded only once (such vertices were
duplicated during the non-manifold to manifold conversion process). The same applies to per-vertex
properties.
Table 3
Compression results. bpv stands for bits per vertex and bpt for bits per triangle

Model                 Uncompressed    Number of   Number of   Compressed as non-manifold    Compressed as      Non-manifold vs. manifold
                      size (bytes)    vertices    triangles   bytes     bpv     bpt         manifold (bytes)   ratio     savings
Bart.wrl
Briggso.wrl               130,297        1,584       3,160      4,080   20.61   10.32           4,129          0.98        2%
Engine.wrl              4,851,671       63,528     132,807    139,632   17.58    8.41         167,379          0.83       17%
Enterprise.wrl            859,388       12,580      12,609     28,224   17.95   17.91          29,553          0.95        5%
Gen_nm.wrl                 49,360          410         820      2,566   50.06   25.03           2,625          0.97        3%
Lamp.wrl                  254,043        2,810       5,054      3,726   10.61    5.90           3,954          0.94        6%
Maze.wrl                   87,391        1,412       1,504      4,235   24.0    22.53           4,855          0.87       13%
Opt-cow.wrl               204,420        3,078       5,804      7,006   18.02    9.66           7,006          1           0%
Saturn.wrl                 61,155          770       1,536      1,998   20.75   10.40           2,197          0.91        9%
Symmetric_brain.wrl     3,092,371       34,416      66,688     73,789   17.15    8.85          73,640          1.002      -0.2%
The main results are gathered in Table 4. In Table 4, we first report the relative part (in %) of stitching
information and connectivity. We observe that stitches can have a huge impact on the bit-stream (up to
125% of the size of the connectivity). Thus, it is worthwhile to concentrate on an efficient encoding of stitches.
We then compare the relative efficiencies of the naive method that was described in Section 1.1 (encode
a table of the repetitions such as Table 1), the stack-based method, and the variable-length method. In
order to have a fair comparison, we measure the number of bits per replicated vertex that are used to
encode the stitches. For the naive method, we use the formula r log n, where r is the number of repeated
vertices and n is the total number of vertices. Note that this formula underestimates the true formula
m log c + r log n, and also does not model the overhead of inserting data in a functional bit-stream
(that can be decoded incrementally, etc.): the formula thus reflects a theoretical best-case prediction. For
the stack-based and variable-length methods, we report data obtained by producing bit-streams with and
without the stitching data.
As expected, the variable-length method outperforms the stack-based method, which outperforms the
naive method. In several cases, the variable-length method allows an order-of-magnitude improvement
over the (theoretical) performance of the naive method.
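As a concrete check of the naive estimate (a worked example using the Engine entry of Table 4): with n = 83,583 vertices, the naive method needs log2(83,583) ≈ 16.4 bits per repeated vertex, consistent with the 17 bits per replicated vertex reported in the table.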
We also report in this paper some statistics on a set of 303 models that were used for validation
experiments during the MPEG-4 standardization process. Among 303 meshes, 162 were found to be non-
manifold. We measured the average ratio of vertex replications (r/n) among the 303 models and found it
to be equal to 0.39: on average, 39% of the vertices are repeated. While this seems quite high, we note that
Table 4
Relative importance of stitches and comparison between encoders. We report the relative part (in %) of stitches and
the rest of the connectivity, and observe that stitches represent an important part. We also compare (in number of
bits per replicated vertex) the naive method (using the formula r log2 n), the stack-based encoder and the variable-length
encoder. Note that the formula underestimates the cost of the naive method, and disregards the overhead
associated with putting data in a bit-stream

Model name     n        r        Connectivity   Connectivity and    Stitches/       Naive     Stack     Variable-length
                                 (bytes)        stitches (bytes)    connectivity    (bits/r)  (bits/r)  (bits/r)
bart            5,058        4        294             303               3%            13       22
briggso         1,631       90        448             509              14%            11        7.2       5.42
engine         83,583   23,173     17,068          38,432             125%            17       10.47      7.38
lamp            3,070      388        315             429              36%            12        6.02      2.35
maze            2,028    1,232        615           1,121              82%            11        3.56      3.29
superfemur     14,558      863      3,699           4,609              25%            14        8.50      8.44
brain          34,708      548      7,582           8,154               8%
r is in general significantly larger than m, the number of non-manifold vertices: for instance, for the model
of Fig. 2, m = 3, r = 9 and r/n = 0.82. This average is kept high by a few models consisting of a majority
of non-manifold vertices. The median, computed on the 162 non-manifold meshes, is r/n = 0.147: 15%
of the vertices are repeated.
In Fig. 20, we compare the efficiency of the variable-length method and the naive method for
representing the stitching information in the 162 above mentioned non-manifold meshes. Fig. 20 shows a
scatter plot of the number of bits per replicated vertex as a function of the ratio r/n. For the naive method,
we have used the formula r log n, which as discussed above represents a theoretical best-case estimate. For
the variable-length method, we have plotted data obtained by producing bit-streams with and without
the stitching information. This data indicates that the variable-length method significantly outperforms
the (theoretical behavior of the) naive method.
Table 5 gathers overall encoding and decoding timings.¹¹

¹¹ Which are, perhaps, more relevant for [11,19], the present methods representing only one module.

Fig. 20. Efficiency of the variable-length method versus the naive method for encoding the vertex replications:
scatter plot obtained from 162 non-manifold models among 303 test models. The y-axis represents the number
of encoded bits per replicated vertex, while the x-axis represents the ratio of vertex replications in log-scale. For
the naive method, we have used the formula r log2 n (theoretical best-case performance). For the variable-length
method, we have plotted measured data. Based on this data, the variable-length method significantly outperforms
the naive method.

We observe a decoding speed of 10,000-13,000 vertices per second on a commonly available 233 MHz
Pentium II laptop computer. For many meshes it has been reported that the number of triangles is about
twice the number of vertices: this
is exact for a torus, and is approximate for many large meshes with a relatively simple topology. In
this case we observe a decoding speed of 20,000-25,000 triangles per second. When considering non-manifold
meshes the assumption that the number of triangles is about twice the number of vertices
does not necessarily hold, depending on the number of singular and boundary vertices and edges of
the model (for instance, consider the Enterprise.wrl model). This is why for non-manifold meshes, or
meshes with a significant number of boundary vertices, when measuring computational complexity the
number of vertices is probably a better measure of shape complexity than the number of triangles. The
speed reported above is observed with most meshes, including meshes with one or several properties
(such as gen_nm.wrl), with the exception of meshes with fewer than 50 vertices or so, which would
not be significant for measuring per-triangle or per-vertex decompression speeds (because of various
overheads).
While these results appear to be at first an order of magnitude slower than those reported in [9], we
note that Gumhold and Strasser decode the connectivity only (which is only one functionality, and a small
portion of compressed data) and observe their timings on a different computer (175 MHz SGI O2). Also,
our decoder was not optimized so far (more on this in Section 13). Timings reported are independent of
whether the mesh is a manifold mesh or not. There is thus no measured penalty in decoding time incurred
by stitches.
Table 5
Encoding and decoding times in seconds measured on an IBM Thinkpad 600
computer. The stack-based method was used. The encoding times
include non-manifold to manifold conversion

Non-manifold            Encoding   Decoding   Vertices   Triangles
model                   CPU time (seconds)    decoded per second
Bart.wrl                  0.64       0.38      13,300     23,700
Briggso.wrl               0.24       0.14      11,300     22,600
Engine.wrl               12.35       7.88       8,100     16,900
Enterprise.wrl            1.29       1.12      11,200     11,300
Gen_nm.wrl                0.10       0.04      10,300     20,500
Maze.wrl
Cow.wrl                   0.43       0.23      13,400     25,200
Planet0.wrl
Saturn.wrl                0.14       0.08       9,600     19,200
Superfemur.wrl            2.12       1.36      10,300     20,700
Symmetric-brain.wrl       7.34       3.20      10,800     20,800
13. Summary and future work
We have described a method for compressing non-manifold polygonal meshes that combines an
existing method for compressing a manifold mesh and new methods for encoding and decoding stitches.
These latter methods comply with a new bit-stream syntax for stitches that we have defined.
While our work uses an extension of the Topological Surgery method for manifold compression [11],
there are no major obstacles preventing the use of other methods such as [1,9,12,14,22].
13.1. Main results
We have demonstrated in this paper that compressing non-manifolds (while preserving their non-manifold
connectivity) is highly desirable, and not very costly. Non-manifold models are frequent (more
than half the models in our database). According to our experiments, non-manifold compression has
no noticeable effect on decoding complexity. Furthermore, compared with encoding a non-manifold
as a manifold, our method permits savings in the compressed bit-stream size (of up to 20%, and on
average 8.4%), because it avoids duplication of vertex coordinates and properties. This is in addition
to achieving the functionality of compressing a non-manifold without perturbing the connectivity.
We have also demonstrated in this paper that encoding the stitching information efficiently is
important. Our results indicate that the size of the stitching information may be comparable to the size of
the connectivity. A naive method as discussed in Section 1.1 is not adequate for encoding the stitching
information. Our methods can guarantee a worst-case cost of O(log m) bits per vertex replication, m
being the number of non-manifold vertices, while for many replications the cost is actually (log l)/l bits,
where l is the length of the stitch (this is the amortized cost of encoding the length of the stitch).
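As a quick worked example, a stitch of length l = 16 spends roughly log2(16) = 4 bits encoding its length, i.e., about 0.25 bits per replicated vertex covered by that stitch.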
We presented two different encoders: a simple encoder, and a more complex encoder that uses the
full potential of the syntax. The results we reported indicate that the additional complexity of the
variable-length encoder is justified. Other encoders may be designed in compliance with the syntax.
One particularly interesting open question is: is there a provably good optimization strategy to minimize
the number of bits for encoding stitches?
Perhaps more importantly, our method completely hides the issues of mesh singularities from the users
of the technology. These are arguably complex issues that creators and users of 3D content may not
necessarily want to learn about merely to understand how their models would have to be freed
of singularities (and thus altered) in order to be properly transmitted or stored in compressed form.
The bitstream syntax and decoder described in this paper is part of the MPEG-4 standard on 3-D Mesh
Coding. Using this technology, there will be no alteration of the connectivity, whether non-manifold or
manifold.
13.2. Future work
Stitches allow more than connectivity-preserving non-manifold compression: merging components
and performing all other topological transformations corresponding to a vertex clustering are possible.
How to exploit these topological transformations using our stitching syntax (or other syntaxes) is another
open area.
The software that was used to report results in this paper was by no means optimized. Optimization
must thus be done in harmony with all the functionalities of compression (e.g., streamed and hierarchical
decoding) and will be the subject of future work. The decoder may be optimized in the following ways
(other optimizations are possible as well): (1) limiting modularity and function calls between modules,
once the functionalities and syntax are frozen; (2) optimizing the arithmetic coding, which is a bottleneck
of the decoding process (every single cycle in the arithmetic coder matters); (3) performing a detailed
analysis of memory requirements, imposing restrictions on the size of mesh connected components, and
limiting the number of cache misses in this way.
Acknowledgements
The anonymous reviewers provided excellent suggestions for improving our original draft. We also
thank G. Zhuang, V. Pascucci and C. Bajaj for providing the Brain model, and A. Kalvin for providing
the Femur model.
Fig. 21. Regular, boundary and singular (or non-manifold) vertices of a polygonal mesh.
Appendix
A. Manifold and non-manifold polygonal meshes
For our purposes, a three-dimensional polygonal mesh comprises a set of vertices {v_i} and a set of
faces {f_j}. Each vertex has coordinates in R³. Each face is specified with a tuple of at least three vertex
indices. The face is said to be incident on such vertices. An edge is a pair of vertices listed consecutively
in at least one face.
We call the connectivity of a mesh the set of ordered subsets of indices provided by the set of faces {f_j},
modulo circular permutation. We use the word geometry to mean the set of vertex coordinates {v_i}.
We call the subset of faces of {f_j} that share a vertex v the star of v, denoted v*. The link of a vertex v
is a graph consisting of the edges of the star of v not incident to v (see Fig. 21). A regular vertex has a
simply connected link; otherwise the vertex is a singular vertex or non-manifold vertex. We call an edge
incident on one single face a boundary edge, an edge incident on exactly two faces a regular edge, and
an edge incident on three or more faces a singular edge. A regular vertex incident to a boundary edge is
called a boundary vertex. These cases are illustrated in Fig. 21. A mesh is a manifold if each vertex is a
regular vertex; otherwise it is a non-manifold. Additional definitions (notably orientability) are provided,
for instance, in [8].
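The edge classification above reduces to counting the faces incident on each edge. A minimal C++ sketch follows; the face representation (a list of vertex-index tuples) matches the definitions above, but the container choices are our own.

    #include <map>
    #include <utility>
    #include <vector>

    using Edge = std::pair<int, int>;   // endpoints, smaller vertex index first

    // Count incident faces per edge. By the definitions above, a count of 1
    // marks a boundary edge, 2 a regular edge, and 3 or more a singular edge.
    std::map<Edge, int> edgeFaceCounts(const std::vector<std::vector<int>>& faces) {
        std::map<Edge, int> counts;
        for (const auto& face : faces) {
            const size_t n = face.size();
            for (size_t i = 0; i < n; ++i) {
                int a = face[i], b = face[(i + 1) % n];  // consecutive vertices
                if (a > b) std::swap(a, b);
                ++counts[{a, b}];
            }
        }
        return counts;
    }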
--R
Single resolution compression of arbitrary triangular meshes with properties
Boeing research staff
Optimized geometry compression for real-time rendering
Introduction to Algorithms
Efficient implementation of progressive meshes
ISO/IEC 14496-2 MPEG-4 Visual Committee Working Draft Version
Progressive coding of 3D graphics models
in: Siggraph'97 Conference
Edgebreaker: Connectivity compression for triangle meshes
Matchmaker: manifold breps for non-manifold r-sets
Wrap&Zip decompression of the connectivity of triangle meshes compressed with Edgebreaker
Geometry coding and VRML
Geometry compression through topological surgery
The Virtual Reality Modeling Language Specification
--TR
--CTR
Lexing Ying , Denis Zorin, Nonmanifold subdivision, Proceedings of the conference on Visualization '01, October 21-26, 2001, San Diego, California
Martin Isenburg , Jack Snoeyink, Compressing the property mapping of polygon meshes, Graphical Models, v.64 n.2, p.114-127, March 2002
Martin Isenburg , Jack Snoeyink, Coding polygon meshes as compressable ASCII, Proceeding of the seventh international conference on 3D Web technology, p.1-10, February 24-28, 2002, Tempe, Arizona, USA
Jeffrey Ho , Kuang Chih Lee , David Kriegman, Compressing large polygonal models, Proceedings of the conference on Visualization '01, October 21-26, 2001, San Diego, California
Christian Sohler, Fast reconstruction of Delaunay triangulations, Computational Geometry: Theory and Applications, v.31 n.3, p.166-178, June 2005
Dmitry Brodsky , Jan Baekgaard Pedersen, A Parallel Framework for Simplification of Massive Meshes, Proceedings of the IEEE Symposium on Parallel and Large-Data Visualization and Graphics, p.4, October 20-21,
Renato Pajarola , Jarek Rossignac, Compressed Progressive Meshes, IEEE Transactions on Visualization and Computer Graphics, v.6 n.1, p.79-93, January 2000
Jatin Chhugani , Subodh Kumar, Geometry engine optimization: cache friendly compressed representation of geometry, Proceedings of the 2007 symposium on Interactive 3D graphics and games, April 30-May 02, 2007, Seattle, Washington
Martin Isenburg , Stefan Gumhold, Out-of-core compression for gigantic polygon meshes, ACM Transactions on Graphics (TOG), v.22 n.3, July
Leila De Floriani , Mostefa M. Mesmoudi , Franco Morando , Enrico Puppo, Decomposing non-manifold objects in arbitrary dimensions, Graphical Models, v.65 n.1-3, p.2-22, January
Dinesh Shikhare , S. Venkata Babji , S. P. Mudur, Compression techniques for distributed use of 3D data: an emerging media type on the internet, Proceedings of the 15th international conference on Computer communication, p.676-696, August 12-14, 2002, Mumbai, Maharashtra, India
Renato Pajarola , Christopher DeCoro, Efficient Implementation of Real-Time View-Dependent Multiresolution Meshing, IEEE Transactions on Visualization and Computer Graphics, v.10 n.3, p.353-368, May 2004 | geometry compression;stitching;non-manifold;polygonal mesh |
319424 | LOD-sprite technique for accelerated terrain rendering. | We present a new rendering technique, termed LOD-sprite rendering, which uses a combination of a level-of-detail (LOD) representation of the scene together with reusing image sprites (previously rendered images). Our primary application is accelerating terrain rendering. The LOD-sprite technique renders an initial frame using a high-resolution model of the scene geometry. It renders subsequent frames with a much lower-resolution model of the scene geometry and texture-maps each polygon with the image sprite from the initial high-resolution frame. As it renders these subsequent frames the technique measures the error associated with the divergence of the view position from the position where the initial frame was rendered. Once this error exceeds a user-defined threshold, the technique re-renders the scene from the high-resolution model. We have efficiently implemented the LOD-sprite technique with texture-mapping graphics hardware. Although to date we have only applied LOD-sprite to terrain rendering, it could easily be extended to other applications. We feel LOD-sprite holds particular promise for real-time rendering systems. | INTRODUCTION
As scene geometry becomes complex (into the millions of poly-
gons), even the most advanced rendering hardware cannot provide
interactive rates. Current satellite imaging technology provides terrain
datasets which are well beyond this level of complexity. This
presents two problems for real-time systems: 1) the provided frame
rate may be insufficient, and 2) the system latency may be too high.
Much of real-time computer graphics has been dedicated to finding
ways to trade off image quality for frame rate and/or system latency.
Many recent efforts fall into two general categories:
Level-of-detail (LOD): These techniques model the objects in the
scene at different levels of detail. They select a particular LOD for
each object based on various considerations such as the rendering
cost and perceptual contribution to the final image.
Image-based modeling and rendering (IBMR): These techniques
model (some of the) objects in the scene as image sprites.
These sprites only require 2D transformations for most rendering
operations, which, depending on the object, can result in substantial
time savings. However, the 2D transformations eventually result in
distortions which require the underlying objects to be re-rendered
from their full 3D geometry. IBMR techniques typically organize
1 Department of Computer Science, State University of New York at Stony Brook, Stony Brook, NY 11794-4400, USA.
2 Virtual Reality Laboratory, Naval Research Laboratory Code 5580. Email: swan@acm.org, ekuo@homemail.com
Figure 1: Traditional hybrid LOD and IBMR techniques render each object either as a sprite or at a certain level of detail.
Figure 2: The LOD-sprite technique renders each object as both a sprite and as a geometric object at a certain level of detail.
the scene into separate non-occluding layers, where each layer consists
of an object or a small group of related objects. They render
each layer separately, and then alpha-channel composite them.
Some hybrid techniques use both multiple LODs and IBMR methods
[16, 27, 22]. A general pipeline of these techniques is shown
in
Figure
1. Each 3D object is first subjected to a culling operation.
Then, depending upon user-supplied quality parameters, the system
either renders the object at a particular LOD, or it reuses a cached
sprite of the object.
This paper presents the LOD-sprite rendering technique. As
shown in Figure 2, the technique is similar to previous hybrid techniques
in that it utilizes view frustum culling and a user-supplied
quality metric. Objects are also modeled as both LOD models and
sprites. However, the LOD-sprite technique differs in that the 2D
sprite is coupled with the LOD representation; the renderer utilizes
both the LOD and the sprite as the inputs to create the output im-
age. The LOD-sprite technique first renders a frame from high-resolution
3D scene geometry, and then caches this frame as an
image sprite. It renders subsequent frames by texture-mapping the
cached image sprite onto a lower-resolution representation of the
scene geometry. This continues until an image quality metric requires
again rendering the scene from the high-resolution geometry.
We have developed the LOD-sprite technique as part of the rendering
engine for a real-time, three-dimensional battlefield visualization
system [9]. For this application the terrain database consumes
the vast majority of the rendering resources, and therefore in
this paper our focus is on terrain rendering. However, LOD-sprite
is a general-purpose rendering technique and could certainly be applied
to many different types of scene geometry.
The primary advantage of LOD-sprite over previous techniques
is that when the sprite is transformed, if the 2D transformation is
within the context of an underlying 3D structure (even if only composed
of a few polygons), a much larger transformation can occur
before image distortions require re-rendering the sprite from the full
3D scene geometry. Thus, the LOD-sprite technique can reuse image
sprites for a larger number of frames than previous techniques.
In addition, because the sprite preserves object details, a lower LOD
model can be used for the same image quality. These properties allow
interactive frame rates for larger scene databases.
The next section of this paper places LOD-sprite in the context of
previous work. Section 3 describes the LOD-sprite technique itself.
Section 4 presents the results of our implementation of LOD-sprite.
2 PREVIOUS WORK

The previous work which is most closely related to LOD-sprite can
be classified into image-based modeling and rendering techniques
and level-of-detail techniques. We first revisit and classify previous
IBMR techniques while also considering LOD techniques, and then
focus on some hybrid techniques.
2.1 Image-Based Modeling and Rendering
Previous work in image-basedmodeling and rendering falls primarily
into three categories:
(1) The scene is modeled by 2D image sprites; no 3D geometry
is used. Many previous techniques model the 3D scene
by registering a number of static images [2, 18, 19, 26]. These
techniques are particularly well-suited for applications where photographs
are easy to take but modeling the scene would be difficult
(outdoor settings, for example). Novel views of the scene are created
by 2D transforming and interpolating between images [3, 18].
By adding depth [17] or even layered depth [23] to the sprites, more
realistic navigation, which includes limited parallax, is possible.
Another category samples the full plenoptic function, resulting in
3D, 4D or even 5D image sprites [13, 10], which allow the most
unrestricted navigation of this class of techniques. However, all
of these techniques lack the full 3D structure of the scene, and so
restrict navigation to at least some degree.
(2) The scene is modeled using either 3D geometry or
2D image sprites. Another set of previous techniques model
each object with either 3D geometry or a 2D image sprite, based
on object contribution to the final image and / or viewing direction
[5, 16, 20, 21, 22, 27]. The LOD-sprite technique differs from
these techniques in that it integrates both 3D geometry and 2D image
sprites to model and render objects.
(3) The scene is modeled using a combination of 3D geometry
and 2D image sprites. There are a group of techniques
which add very simple 3D geometry to a single 2D image
[6, 7, 12, 24], which guides the subsequent image warping. Debevec
et al. [7] construct a 3D model from reference images, while
Sillion et al. [24] and Darsa et al. [6] use a textured depth mesh
which is constructed and simplified from depth information. In
general, using a depth mesh with projective texture mapping gives
better image quality than using depth image warping [17], because
the mesh stretches to cover regions where no pixel information is
available, and thus no holes appear. The main advantage of adding
3D scene geometry to the image is that it allows the warping to approximate
parallax, and therefore increases the range of novel views
which are possible before image distortion becomes too severe.
Our LOD-sprite is most closely related to the techniques of Cohen
et al. [4] and Soucy et al. [25]. Both create a texture map from
a 3D object represented at a high geometric resolution, and then
subsequently represent the object at a much lower geometric reso-
lution, but apply the previously created texture map to the geometry.
However, the LOD-sprite technique generates texture maps (image
sprites) from images rendered at run-time, while these techniques
generate the texture map from the object itself.
2.2 Level-of-Detail
There is a large body of previous work in level-of-detail (LOD)
techniques, which is not reviewed here. The general LOD-sprite
technique requires that geometric objects be represented at various
levels of detail, but it does not require any particular LOD representation
or technique (although a specific implementation of LOD-
sprite will need to access the underlying LOD data structures).
This paper does not cover how to create LOD representations of
a terrain - there exist numerous multiresolution representations for
height fields. Lindstrom et al. [14] and Hoppe [11] represent the
most recent view-dependent terrain LOD methods, and Luebke and
Erikson [15] can also be adapted for terrain datasets. In this paper
we adopt the technique of Lindstrom et al. [14]. This algorithm
organizes the terrain mesh into a hierarchical quadtree structure. To
decide which quadrant level to use, the algorithm computes a screen
space error for each vertex, and compares it to a pre-defined error
threshold. This error measures the pixel difference between the full-resolution
and lower-resolution representations of the quadrant.
2.3 Accelerated Virtual Environment Navigation
As stated above, many LOD and IBMR techniques have been applied
to the problem of accelerating virtual environment naviga-
tion. Of these, LOD-sprite is most closely related to the techniques
of Maciel and Shirley [16], Shade et al. [22], Schaufler and Stuerzlinger
[21], and Aliaga [1]. All of these papers present similar
hybrid LOD/IBMR techniques. They create a hierarchy of image
sprites based on a space partition of the scene geometry. In subsequent
frames, for each node the techniques either texture map
the node sprite onto a polygon, or re-render the node's 3D geometry
if an error metric is above a threshold. Each reused image
sprite means an entire subtree of 3D geometry need not be ren-
dered, which yields substantial speedup for navigating large virtual
environments. The main limitation of these techniques is that creating
a balanced space partition is not a quick operation, and it must
be updated if objects move. Also, to avoid gaps between neighboring
partitions, they either maintain a fairly large amount of overlap
between partitions [22], or they morph geometries to guarantee a
smooth transition between geometry and sprite [1]; both operations
add storage and computational complexity. LOD-sprite differs from
these techniques in that they interpolate the image sprite on a single
2D polygon, while LOD-sprite interpolates the image sprite on a
coarse representation of the 3D scene geometry.
3 THE LOD-SPRITE TECHNIQUE

3.1 Algorithm
The general idea of the LOD-sprite technique is to cache the rendered
view of a high-resolution representation of the dataset. We
refer to this image as a sprite, and the frame where the sprite is
created as a keyframe. LOD-sprite renders subsequent frames, referred
to as novel views, at a lower resolution, but applies the sprite
as a texture map. LOD-sprite measures the error caused by the divergence
of the viewpoint from the keyframe as each novel view is
rendered. When this error exceeds a threshold, LOD-sprite renders
a new keyframe.
Pseudocode for the LOD-sprite algorithm is given in Figure 3.
Lines 1 and 5 generate a sprite image from high-resolution scene
geometry. This is necessary whenever the viewer jumps to a new
viewpoint position (line 1), and when LOD-sprite generates a new
keyframe (line 5).

1   render sprite image from high-resolution
      scene geometry at viewpoint vp
2   for each novel viewpoint vp
3     error = ErrorMetric( vp, sprite )
4     if error > threshold then
5       render sprite image from high-resolution
          scene geometry at viewpoint vp
6     polys = set of low-resolution scene
          geometry polygons
7     for each poly in polys
8       if WasVisible( poly, sprite ) then
9         render poly, map with sprite
10      else
11        render poly, map with original texture map

Figure 3: Pseudocode for the LOD-Sprite algorithm.

At line 2 the algorithm processes each novel
viewpoint. Lines 3 and 4 measure the error associated with how
far the current viewpoint diverges from the viewpoint at the time when
the sprite was rendered; the procedure ErrorMetric is described in
Section 3.2. At line 6 the algorithm prepares to render the frame
at the current viewpoint by gathering a set of polygons from a
low-resolution version of the scene geometry. Line 7 considers
each polygon. Line 8 determines, for each low-resolution poly-
gon, whether the polygon was visible when the sprite image was
taken. This routine, WasVisible (described in Section 3.3), determines
whether the polygon is texture mapped with the sprite texture
(line 9) or the original texture map (line 11).
The sprite data structure holds both the sprite texture map and the
viewing parameters; LOD-sprite uses both to map polygons
with the sprite texture in line 9. Creating a new sprite (lines
1 and 5) requires copying the frame buffer into texture memory,
which is efficiently implemented with the OpenGL glCopyTexImage2D function.
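A minimal sketch of this caching step is shown below; the texture object and the 512 × 512 size are illustrative assumptions, not requirements of the algorithm.

    #include <GL/gl.h>

    // Cache the just-rendered keyframe as the sprite texture by copying the
    // frame buffer directly into texture memory (no round trip through host
    // memory). OpenGL requires power-of-2 dimensions here; see Section 5.
    void cacheKeyframeAsSprite(GLuint spriteTex) {
        glBindTexture(GL_TEXTURE_2D, spriteTex);
        glCopyTexImage2D(GL_TEXTURE_2D, /*level*/ 0, GL_RGB,
                         /*x*/ 0, /*y*/ 0, /*width*/ 512, /*height*/ 512,
                         /*border*/ 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }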
Texture mapping a keyframe could be achieved using projective
texture mapping: a light placed at the keyframe camera position
projects the sprite image onto the scene geometry. However, our
implementation of LOD-sprite does not use projective texture map-
ping, because the current OpenGL implementation does not test for
polygon visibility. Occluded polygons in the keyframe are mapped
with wrong textures when they become visible. Therefore, our implementation
detects polygon visibility on its own (line 8), and applies
a different texture map dependingon each polygons' visibility
(lines 9 and 11).
3.2 Error Metric
LOD-sprite decides when to render a new keyframe based on an
error metric which is similar to that described by Shade et al. [22].
Figure
4 gives the technique, which is drawn in 2D for clarity. Consider
rendering the full-resolution dataset from viewpoint position
v1 . In this case the line segments AC and CB are rendered (in
3D these are polygons). From this view, the ray passing through
vertex C intersects the edge AB at point C′. After rendering the
full-resolution dataset, the image from v1 is stored as a texture map.
Now consider rendering the scene from the novel viewpoint v2 , using
the low-resolution representation of the dataset. In this case the
line segment AB is rendered, and texture mapped with the sprite
rendered from v1 . Note that this projects the vertex C to the position
C 0 on AB. From v1 this projection makes no visible differ-
ence. However, from v2, vertex C′ is shifted by the angle φ from
its true location C. This angle can be converted to a pixel distance
on the image plane of view v2, which is our measure of the error of
rendering point C from view v2:

    φ ≤ β · ε                                    (1)

where β is the view angle of a single pixel (e.g., the field-of-view
over the screen resolution), and ε is a user-specified error threshold.
As long as Equation 1 is true, we render using the sprite from the
most recent keyframe (e.g., line 5 in Figure 3 is skipped). Once
Equation 1 becomes false, it is again necessary to render from the
full-resolution dataset (e.g., line 5 in Figure 3 is executed).
Figure 4: Calculating the error metric.
Theoretically, we should evaluate Equation 1 for all points in the
high-resolution dataset for eachnovel view. Clearly this is impracti-
cal. Instead, our implementation calculates φ for the central vertex
of each low-resolution quadtree quadrant. The resolution of each
quadrant is determined by the number of levels we traverse down
into the quadtree that is created by our LOD algorithm [14]. We
calculate the central vertex by averaging the four corner vertices of
the quadrant. To calculate φ, we have to know the point C′. We
calculate C′ by intersecting the vector v1C with the plane spanned by
the estimated central vertex and two original vertices of the quadrant.
Once we know C′, we calculate φ from the dot product of the
vectors v2C′ and v2C.
We next calculate the average sum of squares of the error for all
evaluated quadrants and compare it with (β · ε)²:

    (1/n) Σᵢ φᵢ² ≤ (β · ε)²                      (2)

where n is the number of low-resolution quadrants. When this test
fails, line 5 in Figure 3 is executed.
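The following C++ sketch illustrates the per-quadrant test; the small vector helpers are our own, and the actual implementation may differ in detail.

    #include <cmath>

    struct Vec3 { double x, y, z; };
    static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static double len(Vec3 a)         { return std::sqrt(dot(a, a)); }

    // Angle phi subtended at the novel viewpoint v2 between the true vertex C
    // and its sprite projection C' on the keyframe geometry.
    double angularError(Vec3 v2, Vec3 C, Vec3 Cprime) {
        Vec3 a = sub(C, v2), b = sub(Cprime, v2);
        double c = dot(a, b) / (len(a) * len(b));
        if (c >  1.0) c =  1.0;          // guard against round-off
        if (c < -1.0) c = -1.0;
        return std::acos(c);
    }

    // Equation 1: the sprite remains valid while phi <= beta * epsilon,
    // where beta is the view angle of a single pixel.
    bool withinThreshold(double phi, double fieldOfView, int screenResolution,
                         double epsilonPixels) {
        double beta = fieldOfView / screenResolution;   // radians per pixel
        return phi <= beta * epsilonPixels;
    }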
3.3 Visibility Changes
As the viewpoint changes, polygons which were originally occluded
or culled by the view frustum may become visible. Figure 5
illustrates this problem. Let the two objects represent mountains.
The light shaded region of the back mountain indicates occluded
polygons in the keyframe, while the heavy shaded regions in both
mountains show polygons culled by the view frustum. If these regions
become visible in a novel view, there will be no sprite texture
to map on them. Our solution is to map them with the same texture
map we use to generate the keyframe.
We classify the visibility of each polygon with a single pass over
all vertices of the low-resolution geometry. This loop is part of the
process of generating a new keyframe. For novel views, the visibility
of each polygon to the sprite is already flagged. This visibility
Figure 5: The originally occluded or view frustum culled objects
may become visible. (Left: the keyframe viewport; right: the novel frame viewport.)
flag controls which texture map is used for the polygon (and thus
line 8 in Figure 3 is a fast table look-up). OpenGL determines the
visibility of each polygon from the novel viewpoint using the hardware
z-buffer.
In our implementation, the terrain is represented by a triangle
mesh. We determine the visibility of each low-resolution triangle
using the keyframe viewing parameters and the keyframe z-
buffer. Our visibility determination for each triangle is binary,
which means we consider a partially occluded triangle to be fully
occluded. We do not attempt to subdivide partially occluded tri-
angles, because achieving this would require clipping the triangle
into visible and invisible sub-triangles [8]. This would not only be
expensive, but would also generate too many small triangles.
To accurately detect visibility, we should scan-convert the whole
triangle and detect the visibility of every pixel. This is obviously
too expensive. Instead, we only perform this detection for the three
triangle vertices. Only when all three vertices are visible do we
flag the triangle as visible. Of course, this fails for triangles with
unoccluded vertices but which are nevertheless partially occluded
(e.g., a part of an edge and the interior could be occluded). Such a triangle
will be erroneously flagged as visible. However, with terrain
datasets this rarely occurs, since the projections of background triangles
tend to be much smaller than foreground triangles.
We use the z-buffer to determine the visibility of each vertex.
When we calculate a keyframe, we store both the z-buffer and the
viewing matrix. Then, for each vertex, we calculate the (x, y)
screen coordinate and the z-depth value with the keyframe viewing
matrix. We compare this depth value to the z value at location
(x, y) in the z-buffer. This tells us whether the vertex is occluded
in the keyframe.
This raises several implementation issues. The first is that a vertex
is usually not projected onto an integer grid point in the z-buffer.
Using the z-buffer value at the closest grid position does not always
give the correct visibility, because that z value could represent a
neighboring triangle. Interpolating between neighboring z values is
also inappropriate, because they could represent disconnected ob-
jects. The second issue is that the LOD mesh is not static - we
compare the low-resolution geometry to the z-buffer rendered from
the high-resolution geometry.
Although it does not solve either of these problems, we have
obtained good results in practice by using the following equation to
determine visibility:

    Z_vertex − Z_buffer ≤ ε                      (3)

where Z_vertex is the calculated z value of the vertex, Z_buffer is the z-buffer
value at the closest grid point, and ε is the specified 'thickness'
of the visible surface. When Equation 3 is true we flag the vertex
as visible.
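A sketch of this test in C++ is given below, assuming the vertex has already been transformed into keyframe window coordinates with the stored viewing matrix.

    // Equation 3 applied to one vertex. zbuf is the z-buffer saved with the
    // keyframe, and eps is the visible-surface "thickness". A triangle is
    // flagged visible only if all three of its vertices pass this test.
    bool vertexVisibleInKeyframe(const float* zbuf, int width, int height,
                                 int x, int y, float zVertex, float eps) {
        if (x < 0 || x >= width || y < 0 || y >= height)
            return false;                     // outside the keyframe viewport
        float zBuffer = zbuf[y * width + x];  // closest grid point, no interpolation
        return zVertex - zBuffer <= eps;
    }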
3.4 Implementation Notes
To further enhance rendering time, we have tried to optimize our
implementation for the graphics hardware. For each frame, we
need two texture maps - the original texture map and the current
keyframe - to map all of the visible polygons. It is much too costly
to load the appropriate map into texture memory on a per-polygon
basis. Instead, we load both maps into texture memory, and scale
the calculated texture coordinates so that each polygon accesses the
correct map. In addition, we use triangle strips as our rendering
primitive. The drawback of this primitive is that we can only apply
one texture map to the whole strip. For strips which contain both
visible and invisible triangles, we can only use the original texture
map.
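One plausible realization of the coordinate scaling, assuming the two maps are packed side by side in a single texture image (the left/right layout is our illustrative assumption):

    // Map per-vertex (u, v) in [0,1] into the packed texture: the original
    // map occupies the left half and the sprite the right half.
    void packedTexCoord(bool useSprite, float u, float v, float* s, float* t) {
        *s = 0.5f * u + (useSprite ? 0.5f : 0.0f);
        *t = v;
    }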
4 RESULTS

Results are shown in Figures 6 and 7. The input is a 512 × 512
height field and a 512 × 512 texture map. Figures 6a-e compare the
LOD-sprite technique to a standard LOD technique [14]. Figure 6a
shows a terrain dataset rendered from a low-resolution LOD decomposition
containing 1,503 triangles, while Figure 6b shows the
same terrain rendered from a high-resolution decomposition with
387,973 triangles. Both figures use the same texture map. Comparing
6a to 6b, we see that, as expected, many surface features are
smoothed out. Figure 6c shows the same view rendered with the
LOD-sprite technique, using the same 1,503 triangles as Figure 6a
but texture mapped with Figure 6b. Unlike Figure 6a the surface
features are quite well preserved, yet Figures 6a and 6c take the
same amount of time (10 milliseconds) to render. Figures 6d and
6e give difference images; Figure 6d gives the absolute value of the
difference between the high and low resolution images, while Figure
6e between the high and LOD-sprite images. Figures 6d and e
clearly show the image-quality advantage of the LOD-sprite tech-
nique. Notice, however, the bright band along the silhouette, both
against the horizon as well as the edge of the dataset in the lower
left-hand corner of the images. These appear because our LOD decomposition
[14] is not sensitive to the edge of the dataset or to
silhouette edges.
Figure 7a-e show similar results but are rendered from a view-point
over the mountains, looking down onto the plain beyond. In
this figure note that the close mountains appear very similar at low
resolution (a), high resolution (b), and with the LOD-sprite technique
(c). This is because these mountains are so close that even at
a high resolution the polygons are large, and the LOD decomposition
keeps these polygons at full resolution. The difference images
(Figures 7d and e) also demonstrate this. The comments regarding
the silhouette edge given above also apply to this figure, although
in this case the entire silhouette edge is also the edge of the data.
Figures
9-14 give the algorithm's timing behavior for the camera
path shown in Figure 8. The camera starts away from the terrain,
zooms in, flies over a plain, and then over a mountain range and
onto the plain beyond. This path visits most of the interesting topological
features of this dataset. The animation contains 600 frames
for all the figures except for Figure 12, where the frame count is
varied. Each frame was rendered at a resolution of 512 × 512 on
an SGI Onyx 2 with six 195 MHz MIPS processors and Infinite Reality
graphics. We rendered the same animation for three different
runs: 1) using a high-resolution LOD decomposition, 2) using a
low-resolution LOD decomposition, and 3) using the LOD-sprite
technique. The LOD-sprite technique used the same settings as the
high-resolution run for keyframes, and the same settings as the low-resolution
run for the other frames.
Figure 8: The camera path for Figures 9-14.

Figure 9 shows how the number of triangles changes as each
frame is rendered. The low-resolution and LOD-sprite runs have
identical triangle counts, except at the keyframes. The high-resolution
run requires about 2 orders of magnitude more triangles.
The semi-log plot shows that both triangle counts have a similar
variation as the animation progresses.
Figure 10 shows how the LOD-sprite error (Section 3.2) changes
as each frame is rendered. The error always starts from zero for a
keyframe. As more novel views are interpolated from the keyframe,
the error increases. When the error exceeds 1.0 pixels, we calculate
another keyframe from the high-resolution scene geometry, which
again drops the error to zero.
Figure 11 shows the amount of time required to render each
frame. The high-resolution time runs along the top of the graph,
at an average of 526 milliseconds per frame. The low-resolution
time runs along the bottom, at an average of 22 milliseconds per
frame. The rendering time for the LOD-sprite frames follows the
low-resolution times, except when a new keyframe is rendered. For
this animation the system generated 16 keyframes, at an average
time of 680 milliseconds per keyframe. The great majority of the
LOD-sprite frames are shown near the bottom of the graph; these
took an average of 36 milliseconds to render. The overall average
for LOD-sprite was 53 milliseconds per frame.
Figure 12 shows the fraction of the total number of rendered
frames which are keyframes. This is plotted against the total number
of frames rendered for the path shown in Figure 8. As expected,
as more frames are rendered for a fixed path, the distance moved
between each frame decreases, and so there is more coherence between
successive frames. This figure shows how our system takes
advantage of this increasing coherence by rendering a smaller fraction
of keyframes. This figure also illustrates a useful property of
the LOD-sprite technique for real-time systems: as the frame update
rate increases, the LOD-sprite technique becomes even more
efficient in terms of reusing keyframes.
Figure 13 also shows the fraction of the total number of rendered
frames which are keyframes, but this time plots the fraction against
the error threshold in pixels. As expected, a larger error threshold
means fewer keyframes need to be rendered. However, the shape
of this curve indicates a decreasing performance benefit as the error
threshold exceeds about 1.0 pixel. For a given dataset and a
path which is representative of the types of maneuvers the user is
expected to make, this type of analysis can help determine the best
error threshold versus performance tradeoff.
The LOD-sprite technique results in a substantial speedup over
rendering a full-resolution dataset. Rendering 600 frames of the
full-resolution dataset along the path in Figure 8 takes 316 seconds.
Rendering the same 600 frames with the LOD-sprite technique, using
an error threshold of 1.0 pixel, takes 32 seconds, for a speedup
of 9.9. Figure 14 shows how the speedup varies as a function of the
error threshold.

Figure 9: The number of triangles as a function of the frame
number on a semi-log plot. (600 frames; path from Figure 8.)

Figure 10: The error in pixels as a function of the frame number
for the LOD-sprite run. (600 frames; path from Figure 8.)
5 CONCLUSIONS AND FUTURE WORK
This paper has described the LOD-sprite rendering technique, and
our application of the technique to accelerating terrain rendering.
The technique is a combination of two rich directions in accelerated
rendering for virtual environments: multiple level-of-detail (LOD)
techniques, and image-basedmodeling and rendering (IBMR) tech-
niques. It is a general-purpose rendering technique that could accelerate
rendering for any application. It could be built upon any
LOD decomposition technique. It improves the image quality of
LOD techniques by preserving surface complexity, and it improves
the efficiency of IBMR techniques by increasing the range of novel
views that are possible. The LOD-sprite technique is particularly
well-suited for real-time system architectures that decompose the
scene into coherent layers.

Figure 11: The rendering time in milliseconds as a function
of frame number. (600 frames; path from Figure 8.)

Figure 12: The fraction of keyframes as a function of the total
number of frames rendered. (Path from Figure 8.)
Our primary applied thrust with this work is to augment the rendering
engine of a real-time, three-dimensional battlefield visualization
system [9]. As this system operates in real-time, our most
important item of future work is to address the variable latency
caused by rendering the keyframes. One optimization is to use a
dual-thread implementation, where one thread renders the keyframe
while another renders each LOD-sprite frame. Another optimization
is to render the keyframe in advance by predicting where the
viewpoint will be when it is next time to render a keyframe. We can
predict this by extrapolating from the past several viewpoint loca-
tions. Thus, we can begin rendering a new keyframe immediately
after the previous keyframe has been rendered. If the system makes
a bad prediction (perhaps the user makes a sudden, high-speed ma-
neuver), two solutions are possible: 1) we could use the previous
keyframe as the sprite for additional frames of LOD-sprite render-
ing, with the penalty that succeeding frames will have errors beyond
the normal threshold. Or, 2) if the predicted viewpoint is closer to
the current viewpoint than the current viewpoint is to the previous
keyframe, we can use the predicted viewpoint as the keyframe instead.

Figure 13: The fraction of keyframes as a function of error
threshold. (600 frames; path from Figure 8.)

Figure 14: Speedup as a function of error threshold. (600
frames; path from Figure 8.)

We are also considering implementing a cache of keyframes,
which would accelerate the common virtual environment navigation
behavior of moving back and forth within a particular viewing
region. Issues include how many previous keyframes to cache, and
choosing a cache replacement policy.
The continuous LOD algorithm [14] in our implementation is
well-suited for our application of real-time terrain rendering. How-
ever, the low-resolution mesh generated by this technique does not
preserve silhouette edges, which as demonstrated in Figures 6 and
7, forces us to use the original texture map along the silhouette.
Another problem with many continuous-LOD techniques (includ-
ing [14]) is the artifact caused by sudden resolution changes, which
results in a continuous popping effect during real-time flythroughs.
The solution to this artifact is geomorphing, where the geometry is
slowly changed over several frames. To address both of these issues
we are currently integrating the LOD technique of Luebke and
Erikson [15], which preserves silhouette edges and provides a nice
framework for evaluating geomorphing techniques.
Finally, an important limiting factor for the performance of the
LOD-sprite technique, as well as other image-based modeling and
rendering techniques (e.g., [22]), is that OpenGL requires texture
maps to have dimensions which are powers of 2. Thus, many texels
in our texture maps are actually unused. The LOD-sprite technique
could be more efficiently implemented with graphics hardware that
did not impose this constraint.
ACKNOWLEDGMENTS
We acknowledge the valuable contributions of Bala Krishna
Nakshatrala for bug fixes and various improvements to the code,
for re-generating the animations, and for help in preparing the
graphs. This work was supported by Office of Naval Research
grants N000149710402 and N0001499WR20011, and the National
Science Foundation grant MIP-9527694. We acknowledge Larry
Rosenblum for advice and direction during this project.
--R
Visualization of complex models using dynamic texture-based simplification
Quicktime VR - an image-based approach to virtual environment navigation
View interpolation for image synthesis.
Navigating static environments using image-space simplification and morphing
Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach
Efficient view-dependent image-based rendering with projective texture- mapping
The lumigraph.
Smooth view-dependent level-of-detail control and its application to terrain rendering
Tour into the picture: Using a spidery mesh interface to make animation from a single image.
Light field rendering.
Real-Time, continuous level of detail rendering of height fields.
Visual navigation of large environments using textured clusters.
Plenoptic modeling: An image-based rendering system
Priority rendering with a virtual reality address recalculation pipeline.
A three dimensional image cache for virtual reality.
Layered depth images.
Efficient impostor manipulation for real-time visualization of urban scenery
A texture-mapping approach for the compression of colored 3D triangulations
Video mosaics for virtual environments.
Commodity Real-time 3D graphics for the PC
--TR
View interpolation for image synthesis
Priority rendering with a virtual reality address recalculation pipeline
Visual navigation of large environments using textured clusters
QuickTime VR
Plenoptic modeling
Modeling and rendering architecture from photographs
Light field rendering
The lumigraph
Hierarchical image caching for accelerated walkthroughs of complex environments
Real-time, continuous level of detail rendering of height fields
Talisman
Visualization of complex models using dynamic texture-based simplification
Post-rendering 3D warping
Navigating static environments using image-space simplification and morphing
View-dependent simplification of arbitrary polygonal environments
Tour into the picture
Appearance-perserving simplification
Multiple-center-of-projection images
Layered depth images
Smooth view-dependent level-of-detail control and its application to terrain rendering
Battlefield visualization on the responsive workbench
A Real-Time Photo-Realistic Visual Flythrough
--CTR
Yadong Wu , Yushu Liu , Shouyi Zhan , Xiaochun Gao, Efficient view-dependent rendering of terrains, No description on Graphics interface 2001, p.217-222, June 07-09, 2001, Ottawa, Ontario, Canada
Huamin Qu , Ming Wan , Jiafa Qin , Arie Kaufman, Image based rendering with stable frame rates, Proceedings of the conference on Visualization '00, p.251-258, October 2000, Salt Lake City, Utah, United States
Alexandre Passos , Richard Simpson, Developing 3-d animated applications prototypes in the classroom, Journal of Computing Sciences in Colleges, v.17 n.5, p.132-139, April 2002
Jürgen Döllner, Konstantin Baumann, Klaus Hinrichs, Texturing techniques for terrain visualization, Proceedings of the conference on Visualization '00, p.227-234, October 2000, Salt Lake City, Utah, United States
Multi-Layered Image Cache for Scientific Visualization, Proceedings of the IEEE Symposium on Parallel and Large-Data Visualization and Graphics, p.9, October 20-21,
Türker Yilmaz, Uğur Güdükbay, Varol Akman, Modeling and visualization of complex geometric environments, Geometric modeling: techniques, applications, systems and tools, Kluwer Academic Publishers, Norwell, MA, 2004 | virtual reality;terrain rendering;texture mapping;virtual environments;acceleration techniques;image-based modeling rendering;level of detail;multi-resolution
319435 | High performance presence-accelerated ray casting. | We present a novel presence acceleration for volumetric ray casting. A highly accurate estimation for object presence is obtained by projecting all grid cells associated with the object boundary on the image plane. Memory space and access time are reduced by run-length encoding of the boundary cells, while boundary cell projection time is reduced by exploiting projection templates and multiresolution volumes. Efforts have also been made towards a fast perspective projection as well as interactive classification. We further present task partitioning schemes for effective parallelization of both boundary cell projection and ray traversal procedures. Good load balancing has been achieved by taking full advantage of both the optimizations in the serial rendering algorithm and the shared-memory architecture. Our experimental results on a 16-processor SGI Power Challenge have shown interactive rendering rates for 256³ volumetric data sets. This paper describes the theory and implementation of our algorithm, and shows its superiority over the shear-warp factorization approach. | Introduction
An effective approach to achieve high frame rates for volume rendering
is to parallelize a fast rendering algorithm that relies on some
algorithmic optimizations [1, 2, 3, 4]. Two requirements must be
met for this approach to achieve interactive rendering. First, the
serial volume rendering algorithm must be fast enough. Second,
the parallel version of the serial algorithm must scale well as the
number of processors increases.
Many parallel volume rendering algorithms have been developed
by optimizing serial volume renderers. Among the most efficient
ones is Lacroute's [3], a real-time parallel volume rendering algorithm
on a multiprocessor SGI Challenge using the shear-warp factorization
[5], which could render a 256^3 volume data set at over
10 Hz. A dynamic task stealing scheme was borrowed from [1]
for load balancing. Parker et al. [4] proposed another interactive
parallel ray casting algorithm on SGI workstations. Using 128 pro-
cessors, their algorithm rendered a 1GByte full resolution Visible
Woman data set at over 10 Hz. One of their optimizations for ray
casting was using a multi-level spatial hierarchy for space leaping.
In this paper, we explore the effective parallelization of our
boundary cell-based ray casting acceleration algorithm on multipro-
cessors. The serial algorithm is derived from the acceleration technique
of bounding-boxes. This technique consists of three steps:
First, the object is surrounded with tightly fit boxes or other easy-
to-intersect geometric primitives such as spheres. Then, the intersection
of the rays with the bounding object is calculated. Finally,
the actual volume traversal along each ray commences from the first
intersection point as opposed to starting from the volume bound-
ary. Unlike other kinds of presence acceleration techniques which
traverse a hierarchical data structure, such as octrees [4, 6] and K-
d trees [7] to skip over empty regions, this approach directly and
hence more quickly traverses the original regular grid.
Obviously, the effectiveness of a bounding-boxes approach depends
on its ability to accurately calculate the intersection distance
for each viable ray with minimal computational overhead. There-
fore, in our previously proposed boundary-cell based ray casting
method [8], we accurately detected the object boundary at each grid
cell of the volumetric data set. Each cell was the volume contained
within the rectangular box bounded by eight neighboring grid vertices
(voxels). The distance information from the object boundary
to the image plane was obtained by projecting all boundary cells
(cells pierced by the object boundary) onto the image plane. This
projection procedure was accelerated both by exploiting the coherence
of adjacent cells and employing a generic projection template.
The experimental results showed that the projection time was faster
than that of the PARC (Polygon Assisted Ray Casting) algorithm
[9] which was accelerated by graphics hardware.
However, our previously proposed method [8] had some limi-
tations. First, it was more effective for small volume data of less
than 128^3 voxels. Second, it only supported fast ray casting with
parallel projection. In this paper, we present an improved version to
solve these problems. We propose to run-length encode the detected
boundary cells. This data compression reduces both memory space
and access time. Multiresolution volumes are further exploited to
reduce the number of boundary cells, so that our method is capable
of rendering larger volumes at interactive rates. Efforts have also
been made towards a fast perspective projection as well as interactive
classification.
Based on such an improved serial rendering algorithm, we have
developed our parallel rendering algorithm using effective task partitioning
schemes for both boundary cell projection and subsequent
ray traversal procedures. Good load balancing has been reached
by taking full advantage of both the optimizations in the rendering
algorithm and shared-memory architecture.
Our parallel algorithm has been implemented on a Silicon
Graphics Power Challenge, a bus-based shared-memory MIMD
(Multiple Instruction, Multiple Data) machine with 16 processors.
Rendering rates for 256^3 volumetric data sets are as fast as 33 Hz,
among the fastest reported. A detailed comparison between
our algorithm and shear-warp factorization approach [3] is
given in Section 5. It is difficult to compare performances between
the method in [4] and ours, since the former used a much larger
data set and eight times more processors. Yet, these two methods
do have some similarities - both are essentially ray casting algorithms
running on multiprocessors, and both are interested in the
object boundary. One significant difference lies in that their method
could only display the boundary surface of the object, while ours
can also visualize the interior structures with translucency. The description
of our serial algorithm and its parallel version are given in
Sections 2 and 3, respectively. Performance results are reported in
Section 4.
2 The Serial Algorithm
Our serial rendering algorithm can be completed in three steps: (1)
run-length encode the boundary cells at a preprocessing stage; (2)
project the run-length encoded boundary cells onto the image plane
to produce the intersection distance values for each pixel; and (3)
for each viable ray that intersects an object in the volume, start sam-
pling, shading and compositing from the intersection. A discussion
on support of interactive volume classification between renderings
is given at the end of this section.
2.1 Boundary Cell Encoding
Since boundary cell information is viewpoint-independent, we can
obtain it by scanning the volume in an off-line preprocessing stage.
Essentially, the scanline-based run-length encoding scheme exploits
the 1D spatial coherence which exists along a selected axis
direction [10]. It gives a kn^2 compressed representation of the data
in an n^3 grid, where the factor k is the mean number of runs. A run
is a maximal set of adjacent voxels having the same property, such
as having the same scalar field value, or associated with the same
classified material (see Section 2.4 for interactive classification).
Obviously, only when k is low can such a scheme be efficient. For-
tunately, this is true for a classified volume: a volume to which an
opacity transfer function has been applied [5]. We use this scheme
to encode boundary cells in the volume. In our algorithm, each run
is a maximal set of adjacent boundary cells in the same grid cell
scanline aligned with a selected axis; the X axis is selected for run-length
encoding in this paper.
The specific data structure we use for run-length encoding of
boundary cells includes a linear run list L and a 2D table T (see
Figure 1). List L contains all the runs of boundary cells. Each
element L[t] of L represents a run, including the location of the first
boundary cell C(i, j, k) of this run and the run length. The position
of a cell C(i, j, k) is determined by the one of its eight voxels with
the lowest X, Y, Z coordinate values. Accordingly, cell C(i, j, k)
is the ith cell in scanline (j, k). Table T records the distribution
information of the boundary cells among volume scanlines. Each
element T[j, k] holds the number of the boundary cells located in
scanline (j, k). According to the information in table T and list L,
we can quickly skip over empty scanlines and empty runs.
In order to reach a high data compression, we suggest that, first,
for each run t in list L, only the X coordinate i of the starting cell
needs to be stored in L[t], instead of all three coordinates.
We can easily infer the other two coordinates. Second, all boundary
cells which have 6 face-connected boundary cells should be ignored
as non-boundary cells in our data structure, because they have no
contribution for our object boundary estimation. Third, table T can
also be run-length encoded. Each run is a maximal set of adjacent
elements having the same number.
Figure 1: Data structure for run-length encoding of boundary cells (run list L and scanline table T).

The space complexity S of our run-length encoding data structure
is the sum of the space complexities of list L and table T:
S = S_L + S_T = 2kn^2 + n^2 fields, (1)
where k is the mean number of runs per scanline. Two fields are
needed for each element L[t] and one for T[j, k]. These fields can
be represented by integer numbers with 4 bytes each on SGI
workstations, or 1-byte characters each, if n is no more than 256.
Therefore, Equation 1 can be written as:
S = (2k + 1) n^2 c bytes, (2)
where c is the number of bytes per field (c = 4, or c = 1 when n is no more than 256).
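For concreteness, this encoding might be laid out as in the following C++ sketch; the type and member names are illustrative assumptions (the paper does not prescribe an implementation), and 1-byte fields are assumed as in the n <= 256 case above.

#include <cstdint>
#include <vector>

// Run-length encoding of boundary cells (illustrative sketch).
struct Run {
    uint8_t startX;   // X coordinate i of the first boundary cell in the run
    uint8_t length;   // number of consecutive boundary cells in the run
};

struct BoundaryRLE {
    int n;                              // volume resolution (n x n x n)
    std::vector<Run> runs;              // list L, stored in scanline order
    std::vector<uint8_t> perScanline;   // table T: boundary-cell count per scanline (j, k)

    // T[j, k]: number of boundary cells in scanline (j, k).
    uint8_t count(int j, int k) const { return perScanline[k * n + j]; }
};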
2.2 Boundary Cell Projection
By skipping over the runs of non-boundary cells, our run-length encoding
scheme not only provides high data compression, but also
leads to fast 3D scan over the volume during the boundary cell
projection procedure. Both parallel and perspective projections are
supported in our algorithm.
2.2.1 Parallel Projection
In parallel projection, the projected area of every cell has the same
shape and size in the image plane. Only the projected position and
distance of the cell center to the image plane are different from cell
to cell. Based on such a projection property, we employ a generic
projection template M to speed up boundary cell projection.
Establish Projection Template
In our algorithm, the generic projection template M has the same
size as the bounding box of the projected area of a cell on the image
plane. Each element of the template (template-pixel) has two
respective components recording near and far distances. To calculate
the distance values of template M , we first choose an arbitrary
cell from the volume, then place the center of the template
over the center point of the cell. We ensure that the template is not
only parallel to the image plane but also aligned with the primary
axes of the image plane. The origin is defined at the center of the
template. Three different levels-of-accuracy templates can be selected
in our algorithm. In the low level-of-accuracy template, the
far distance value of each template-pixel is distance d of the farthest
voxel of the cell to the template, and the near distance value of each
template-pixel is -d. In the middle level-of-accuracy template, distance
values of those template-pixels which are not covered by the
projected cell are set to infinity, and the remaining template-pixels
have the same values as those in the low level template. In the high
level-of-accuracy template, the near and far distance values of the
template-pixels are accurately calculated by scan-converting both
front facing and back facing surfaces of the cell.
Obviously, different level-of-accuracy templates provide different
accuracy of distance information. Note that in a high resolution
volume data set, the cells are very small and densely overlapped
from any viewing direction. Therefore, a rough approximation
based on the low level-of-accuracy template is often good enough
to support efficient skipping over empty space, as evidenced from
our experimental results.
Determine Projection Position
In parallel projection, once the first cell C(0, 0, 0) is projected onto
the image plane, the position of the remaining cells can be quickly
calculated by incremental vectors with only addition operations.
Specifically, assume that the center point of the first cell C(0, 0, 0)
is projected to position (x0, y0) on the image plane with depth z0,
and that ΔX, ΔY, ΔZ are respectively the vector directions of volume
axes X, Y, Z in image space, and the volume size is Nx × Ny × Nz
with unit spacing between voxels. Then, the position (x1, y1) and
depth z1 of the center point of the cells adjacent to cell C(0, 0, 0)
along volume axes X, Y, Z can be respectively calculated by the
following equations:
(x1, y1, z1) = (x0, y0, z0) + ΔX, (3)
(x1, y1, z1) = (x0, y0, z0) + ΔY, (4)
(x1, y1, z1) = (x0, y0, z0) + ΔZ. (5)
When run-length encoding is used in our algorithm, the relationship
between two adjacent boundary cells in list L is more compli-
cated. These two adjacent cells could be located in the same run,
or in two adjacent runs at the same scanline, or in two runs at different
scanlines. Assume that the projection information of the two
adjacent boundary cells C_i and C_{i+1} is respectively [x_i, y_i, z_i]
and [x_{i+1}, y_{i+1}, z_{i+1}]. If these two cells are located in the same
run, then [x_{i+1}, y_{i+1}, z_{i+1}] can be found from [x_i, y_i, z_i] with a
single vector addition operation, by using Equation 3. Otherwise,
if C_i and C_{i+1} are located in different runs at the same scanline,
then [x_{i+1}, y_{i+1}, z_{i+1}] can be calculated from [x_i, y_i, z_i] with two
vector addition operations and one vector multiplication operation:
[x_{i+1}, y_{i+1}, z_{i+1}] = [x_i, y_i, z_i] + s ΔX + ΔX, (6)
where s is the number of non-boundary cells skipped between the two runs.
In the case that cell C_{i+1} is located on a new scanline in the same
slice or in a new slice, its projection information can be similarly
calculated from that of the first cell on the previous scanline or slice,
by respectively using Equations 4 and 5.
By using this incremental method, the time-consuming 4 × 4
matrix multiplications for projection are applied solely to cell
C(0, 0, 0) rather than to all boundary cells. Thus, the projection
procedure is greatly accelerated.
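As an illustration of this incremental scheme, a short sketch follows; vec3 and the helper names are our own assumptions, mirroring Equations 3 and 6.

// Incremental projection positions over the run list (illustrative sketch).
struct vec3 { float x, y, z; };
static vec3 add(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static vec3 scale(float s, vec3 a) { return {s * a.x, s * a.y, s * a.z}; }

// Next boundary cell inside the same run: one vector addition (Equation 3).
vec3 nextInRun(vec3 pos, vec3 dX) { return add(pos, dX); }

// First cell of the next run on the same scanline, skipping `gap`
// non-boundary cells (Equation 6): two additions and one multiplication.
vec3 nextRunSameScanline(vec3 pos, vec3 dX, int gap) {
    return add(add(pos, scale((float)gap, dX)), dX);
}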
Fill Projection Buffer
We utilize two projection buffers, Zn and Zf , having the same size
as the resultant image, to respectively record the nearest and farthest
intersection distances to the object boundary along the rays cast
from the image pixels.
Once the projected position of a boundary cell is determined, the
specific projection template M1 of this cell can be quickly generated
by adding the distance between the cell center and image plane
to both near and far distance values in each viable element of the
template M. Using the current template M1 to update distance
values in the two projection buffers is straightforward. First,
place template M1 over the image plane with its
center template-pixel over the image position where the cell center
is projected. Then, for each image-pixel covered by the template,
compare the nearest and farthest intersection distances (i.e., zn and
zf ) in the corresponding buffer-pixels with the non-infinity near
and far distances (i.e., dn and df ) in the corresponding element of
template M1 . Specifically, if zn is greater than dn , zn is replaced
by dn in the near z-buffer; meanwhile, if zf is less than df , zf is
replaced by df in the far z-buffer.
In the situation where the center point of the cell is not exactly
projected on an image pixel, one image pixel covered by the template
may be surrounded by two to four template-pixels. Thus,
nearest distance dn and farthest distance df of the surrounding
template-pixels are taken to give a conservative distance estimation.
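The per-pixel buffer update just described can be sketched as follows; the buffer layout and names are assumptions made for illustration.

// Update the near/far projection buffers at pixel px from one template
// element with non-infinity distances dn and df (illustrative sketch).
void updatePixel(float* Zn, float* Zf, int px, float dn, float df) {
    if (dn < Zn[px]) Zn[px] = dn;   // keep the nearest intersection distance
    if (df > Zf[px]) Zf[px] = df;   // keep the farthest intersection distance
}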
2.2.2 Perspective Projection
Perspective projection is of particular importance when the viewing
point is getting close to the data or is located inside the volume,
such as during the interactive navigation inside the human colon
of our 3D virtual colonoscopy [11]. The implemented perspective
projection procedure in our algorithm is similar to the parallel
projection. We still adopt projection templates for fast projection.
However, since different cells have different perspective projection
shapes and sizes due to their different distances and directions to
the view point, there is no generic projection template for all cells.
Furthermore, the incremental method for finding the projection information
of the adjacent boundary cell does not work for perspective
projection. This is because cells aligned with a volume axis no
longer have fixed spacing on the screen.
During our implementation, the low level-of-accuracy template
has turned out to be the most competitive candidate among the
three. This low template is essentially a degenerate template, in-
only the height, width, and near and far distances of the
projected boundary cell. Whenever one cell does not cover too
many screen pixels, the object boundary estimation based on these
low templates is satisfactory. Under perspective projection, such
a template for a specific boundary cell can be quickly generated.
This can be done by projecting the eight vertices of the cell onto
the image plane to find the bounding box of the projected area of
the cell as well as its minimal and maximal distances to the screen.
This template can be directly used to update the near and far projection
buffers. Furthermore, since there are four vertices shared
by two adjacent boundary cells in the same run, projection information
of these four shared vertices from one boundary cell can be
reused by the neighboring boundary cell for further speedup. As a
result, although perspective projection involves more computation
than parallel projection, it can still be done rapidly. Specifics of projection
time from our experiments on various data sets are reported
in Section 4.
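A sketch of generating the degenerate template from the eight projected cell vertices is given below; the Pt type and calling convention are assumptions for illustration.

// Degenerate (low level-of-accuracy) template under perspective projection:
// bounding box of the projected cell plus its depth range (sketch).
struct Pt { float x, y, z; };   // projected vertex: screen x, y and depth z
struct LowTemplate { float xmin, xmax, ymin, ymax, zNear, zFar; };

LowTemplate makeLowTemplate(const Pt v[8]) {   // the eight projected vertices
    LowTemplate t = {v[0].x, v[0].x, v[0].y, v[0].y, v[0].z, v[0].z};
    for (int i = 1; i < 8; i++) {
        if (v[i].x < t.xmin) t.xmin = v[i].x;
        if (v[i].x > t.xmax) t.xmax = v[i].x;
        if (v[i].y < t.ymin) t.ymin = v[i].y;
        if (v[i].y > t.ymax) t.ymax = v[i].y;
        if (v[i].z < t.zNear) t.zNear = v[i].z;
        if (v[i].z > t.zFar)  t.zFar  = v[i].z;
    }
    return t;
}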
2.3 Ray Traversal
Depending on the intersection distance information in projection
buffers Zn and Zf , the ray casting procedure is accelerated by casting
rays only from viable pixels on the image plane, and traversing
each ray from the closest depth to the farthest depth. Other effective
ray casting optimizations, such as adaptive image sampling
[12] and early ray termination [6], can be conveniently incorporated
to further speedup our ray traversal procedure. For example,
by employing early ray termination, the traversal along each viable
ray stops before the farthest intersection is reached if the accumulated
opacity has reached unity or exceeded a user-selected opacity
threshold.
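The bounded traversal with early ray termination can be sketched as follows; sample() and shade() are placeholders for trilinear resampling and shading, and the 0.95 cutoff matches the opacity cutoff used in Section 4.

struct RGBA { float r, g, b, a; };
float sample(float t) { return 0.0f; }                  // placeholder: trilinear resampling at depth t
RGBA  shade(float d)  { return {0.0f, 0.0f, 0.0f, 0.0f}; }  // placeholder: classification + shading

// Traverse one viable ray between its near and far intersections (sketch).
RGBA castRay(float zn, float zf, float dt, float cutoff /* e.g. 0.95 */) {
    RGBA acc = {0, 0, 0, 0};
    for (float t = zn; t <= zf && acc.a < cutoff; t += dt) {
        RGBA s = shade(sample(t));
        float w = (1.0f - acc.a) * s.a;   // front-to-back compositing
        acc.r += w * s.r; acc.g += w * s.g; acc.b += w * s.b; acc.a += w;
    }
    return acc;   // the loop exits early once opacity accumulates past the cutoff
}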
The ray traversal procedure of our algorithm is often rapidly
completed, because the overall complexity of the ray casting algorithm
is greatly reduced. Assume that the volume size is n^3 and the
image size is n^2. To generate such a ray casting image with parallel
projection, the rendering complexity of a brute-force ray caster would
be O(n^3). In our algorithm, the rendering complexity can be reduced
to O(kn^2). Although the value of k is data dependent, it is often
quite small compared with n, especially when early ray termination
is employed, unless a substantial fraction of the classified volume
has low but non-transparent opacity. Note, however, that such classification
functions are considered to be less useful [5].
In fact, our accelerated ray traversal speed sometimes becomes
so fast that it may approach boundary cell projection speed, especially
for larger data sets. When this happens, we are pleased to further
reduce the projection time by decreasing the resolution of the
boundary cells, since the accuracy of the current object boundary
estimation is unnecessarily high. One solution is to reduce the volume
resolution by merging m^3 neighboring cells into a macrocell.
If all the cells in a macrocell are non-boundary cells, this macrocell
is a non-boundary macrocell; otherwise, it is a boundary macrocell.
From our experiment with a 256 × 256 × 124 MRI data set of a
human brain, even merging the eight neighboring cells (m = 2) of
the original volume leads to a three-fold decrease in cell projection
time and nearly the same ray traversal time.
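A sketch of the macrocell test follows; isBoundaryCell() is a placeholder for the per-cell boundary flag computed at preprocessing.

bool isBoundaryCell(int i, int j, int k) { return false; }   // placeholder per-cell flag

// A macrocell of m^3 cells is a boundary macrocell if any member cell
// is a boundary cell (illustrative sketch).
bool isBoundaryMacrocell(int I, int J, int K, int m) {
    for (int i = 0; i < m; i++)
        for (int j = 0; j < m; j++)
            for (int k = 0; k < m; k++)
                if (isBoundaryCell(m * I + i, m * J + j, m * K + k))
                    return true;
    return false;
}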
Another approach is to use a lower levels-of-detail (LOD) volume
for fast object boundary estimation, and then use the original
high resolution volume for accurate ray traversal. This approach
may produce more accurate estimation, especially when the selected
value for m is large. Yet, the user should make sure that the
object represented in the lower LOD volume is not "thinner" than
its original size, which can be guaranteed either by the modeling
algorithm for LOD or by adjusting our projection templates.
2.4 Interactive Classification
In a practical application, the user may want to change the opacity
transfer function between renderings while exploring a new data
set. Most existing algorithms that employ spatial data structures
require an expensive preprocessing step when the transfer function
changes, and therefore can not support interactive volume classifi-
cation. Although our algorithm presented thus far works on a classified
volume with a fixed transfer function, it can easily support
interactive classification with some minor constraint on the modification
of the transfer function.
In our algorithm, we define an opacity threshold in a transfer
function as the minimal scalar field value in the volumetric data set
associated with a non-zero opacity. Once this opacity threshold is
given in the transfer function, all boundary cells can be determined,
of which some but not all the eight vertices possess field values
less than the opacity threshold. If the transfer function changes, the
previous run-length encoding of the boundary cells based on the
previous opacity threshold may not be an appropriate data structure
for the new object. Yet, note that an increase in opacity threshold
only shrinks the object volume coverage, and that an object with
a higher opacity threshold is always enclosed by an object with a
lower opacity threshold. Consequently, the run-length encoding of
an object boundary with a low opacity threshold can be used as
an overestimate of another object boundary with a higher opacity
threshold. It follows that, if we start from an object with the lowest
opacity threshold, and create run-length encoding of boundary cells
according to that opacity threshold, then we can avoid repeating the
preprocessing step for boundary cell detection and run-length en-
coding, when the opacity transfer function changes between render-
ings. We do this by always using the same run-length encoding data
structure as an overestimate for the new object boundary specified
by the modified transfer function with a higher opacity threshold.
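The boundary cell test against an opacity threshold can be sketched as follows; voxel() is a placeholder for the scalar field lookup.

float voxel(int x, int y, int z) { return 0.0f; }   // placeholder scalar field lookup

// A cell is a boundary cell when some but not all of its eight voxels
// have field values below the opacity threshold (illustrative sketch).
bool isBoundaryForThreshold(int i, int j, int k, float threshold) {
    int below = 0;
    for (int dx = 0; dx <= 1; dx++)
        for (int dy = 0; dy <= 1; dy++)
            for (int dz = 0; dz <= 1; dz++)
                if (voxel(i + dx, j + dy, k + dz) < threshold) below++;
    return below > 0 && below < 8;
}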
Although we can now correctly render images of interactively
classified volume, the rendering rates may slow down greatly under
radical changes of the transfer function, for two reasons. First, the number
of boundary cells in our fixed run-length encoding data structure
can be larger than that of the shrunken object specified by a higher
opacity threshold. This may lead to a longer projection time. Sec-
ond, such an overestimation for the shrunken object boundary may
cause longer ray traversal time due to unnecessary samplings outside
the shrunken object.
Fortunately, in a typical classified volume, 70–95% of the voxels
are transparent [7, 5]. From this we know that the total number
of boundary cells from each object is very small compared with the
volume size. The projection time of these boundary cells is further
shortened by employing run-length encoding and template-assisted
projection. Accordingly, the difference of projection time between
different objects is often minor. Also, since the possible objects are
crowded in a small part of the volume, the boundaries of these objects
are often so close to each other that the overestimation does not
cause much extra ray traversal time. In brief, our algorithm allows
interactive classification with a moderate performance penalty. The
experimental results from different data sets with both interactive
classification and fixed pre-classification are given in Section 4.
3 The Parallel Algorithm
In general, there are two types of task partitionings for parallel volume
rendering algorithms: object-based [2] and image-based partitionings
[1, 3, 4], respectively working on the volume and image
domains. In order to take full advantage of optimizations in the serial
algorithm, we have designed an object-based task partitioning
scheme for boundary cell projection, and an image-based partitioning
scheme for the ray traversal procedure. The shared-memory
architecture of the SGI Power Challenge fully supports the implementation
of our parallel algorithm.
3.1 Object-Based Partitioning for Boundary Cell
Projection
To achieve high processor utilization during the boundary cell projection
procedure, the volume should be carefully divided and assigned
to the processors so that each processor possesses a sub-set
of the volume with an equal number of boundary cells. Based
on our run-length encoding data structure, we are able to precisely
divide the volume into subvolumes of contiguous grid cells, each
containing a roughly equal number of boundary cells. For the convenience
of implementation, we used a run instead of a cell as the
fundamental unit of work. Compared with other options, such as
static interleaved partitionings and dynamic partitionings, our static
contiguous partitioning has several advantages. It maximizes spatial
locality in the run-length encoding data structure, and therefore
minimizes the memory stall time caused by cache misses. In addi-
tion, as a static scheme, less synchronization is required, and task
redistribution overhead is also avoided.
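The contiguous cut of the run list can be sketched with a simple prefix-sum pass, reusing the Run type from the sketch in Section 2.1; the names are illustrative.

#include <cstddef>
#include <vector>

// Cut the run list into P contiguous pieces holding roughly equal numbers
// of boundary cells; a run is the fundamental unit of work (sketch).
std::vector<std::size_t> partitionRuns(const std::vector<Run>& runs, int P) {
    long total = 0;
    for (const Run& r : runs) total += r.length;
    std::vector<std::size_t> start(P + 1, runs.size());
    start[0] = 0;
    long acc = 0;
    int p = 1;
    for (std::size_t t = 0; t < runs.size() && p < P; t++) {
        acc += runs[t].length;
        if (acc * P >= total * p) start[p++] = t + 1;
    }
    return start;   // processor p works on runs [start[p], start[p+1])
}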
Once the volume is distributed to all available processors, each
processor works concurrently and independently on its subvolume
by scanning and projecting all related boundary cells onto the image
plane. Since the image plane is shared by all processors, each processor
establishes a separate pair of near and far projection buffers
with the same size as the resultant image, in order to avoid memory
access conflict. Each processor finishes its work and supplies a pair
of partial projection buffers within about the same span of time.
The complete (unified) projection buffers of the whole volume are
obtained by combining all of these partial projection buffers.
This combination procedure is also parallelized by dividing each
partial projection buffer into a few sub-buffers of an equal number
of contiguous buffer scanlines, with one sub-buffer per processor.
Each processor respectively combines all near and far sub-buffers
assigned to it, forming a pair of complete sub-buffers. By the end of
this process, we obtain a pair of complete projection buffers Zn and
Zf . Since the comparison and assignment operations performed
during this process are very fast, computation overhead of the combination
in our algorithm is very low.
Evidently, there is another solution to avoid memory access conflict
without creating and combining each pair of partial projection
buffers. All processors simultaneously access the shared projection
buffers Zn and Zf during the parallelized projection procedure in
an exclusive mode. Although the implementation is simpler, buffer
access time may slightly increase due to the exclusive access mode.
3.2 Image-Based Partitioning for Ray Traversal
In our algorithm, the projection buffers not only provide closer
bounds on the intervals where the ray integrals need to be calcu-
lated, but also view-dependent information of image complexity.
A static image-based contiguous partitioning is therefore a natural
choice.
Note that the amount of computation W involved in a specific image
section can be calculated by the following formula:
W = Σ_{i=1}^{m} g(d_i), (7)
where m is the number of viable pixels in that image section, d_i is
the length of the bounded ray interval associated with the ith viable
pixel, and g is a function of length d_i. The value of g(d_i) depends
on both the length d_i and the transparency property of the object to be
rendered. Generally speaking, the more transparent the object and
the greater the value d_i, the larger the value g(d_i). The value of
g can be adjusted during rendering to be more suitable for the
object, according to the load balancing feedback.
Once function g in Equation 7 is determined, the image is divided
into large image blocks of contiguous image scanlines. Each
block contains roughly an equal amount of work (see Figure 2).
The fundamental unit of work in our algorithm is an image pixel
rather than an image scanline, which supports more accurate partitioning
and hence better load balancing. Each processor then takes
one image block, casts rays from the viable pixels in that block, and
performs ray integrals within the bounded interval along each ray.
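The image cut follows the same prefix-sum idea as the run-list partitioning, this time driven by the per-scanline work estimated with Equation 7 (a sketch; work[y] is assumed precomputed).

#include <vector>

// Cut image scanlines into P contiguous blocks of roughly equal work.
std::vector<int> partitionScanlines(const std::vector<double>& work, int P) {
    double total = 0;
    for (double w : work) total += w;
    std::vector<int> start(P + 1, (int)work.size());
    start[0] = 0;
    double acc = 0;
    int p = 1;
    for (int y = 0; y < (int)work.size() && p < P; y++) {
        acc += work[y];
        if (acc * P >= total * p) start[p++] = y + 1;
    }
    return start;   // processor p renders scanlines [start[p], start[p+1])
}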
Our parallel ray traversal procedure is further accelerated by existing
ray casting optimizations, which fall into two classes according
to whether or not there are computational dependencies between
rays. Those non ray-dependent optimizations, such as early ray ter-
mination, can be directly applied with an image-based partitioning.
However, when incorporating the adaptive image sampling which
belongs to the ray-dependent class, caution must be taken to avoid
the cost of replicated ray casting from pixels shared by different
processors.
In a serial ray casting algorithm, adaptive image sampling optimization
[12] is performed by dividing the image plane into fixed
size square image tiles of ω × ω pixels, and casting rays only from
the four corner pixels of each tile. Additional rays are cast only
in those image tiles with high image complexity, as measured by
the color difference of corner pixels of the image tiles. All non
ray-casting pixels are then bilinearly interpolated from ray-casting
pixels. Nieh and Levoy [1] proposed a dynamic image-based partitioning
scheme which reduces the cost associated with pixel sharing
by delaying the evaluation of image tiles whose pixel values are being
computed by other processors.
In this paper, we propose a more effective solution based on
our static image-based contiguous partitioning. Compared to the
previous dynamic image partitioning scheme [1], our method has
no task redistribution overhead and thus fewer synchronization re-
quirements. The method is described as follows:
1. A small fixed size square image tile of ω × ω pixels is defined
as the fundamental unit of work.
2. For P processors, the image is split into P large image blocks
of contiguous scanlines of tiles. Each block may not contain
the same number of tiles, but contains a roughly equal amount
of work.
3. Each processor takes one image block and performs adaptive
image sampling on each tile in that block top down in scanline
order. Note that for all shared pixels at the bottom of the image
block, the processor directly gets their values from the shared
memory, which have been computed by other processors.
Figure 2: A static image-based contiguous partitioning example (legend: regular, top, and bottom tiles; an image tile is ω × ω pixels).
Figure
2 illustrates a four-processor example of how the image
is partitioned in our algorithm, where the fundamental unit of work
becomes a square image tile. All square image tiles which contain
the shared pixels by different processors are marked with dark shad-
ing. They are gathered at the top and bottom of each image block.
Therefore, tiles in each image block can be classified as: regular
tiles (white tiles in the figure), top tiles (tiles with dark shading),
and bottom tiles (with light shading). In our parallel algorithm,
each processor P_i starts its work from the top tiles down in its image
block, in tile scanline order. For all the top and regular tiles,
normal adaptive image sampling is performed. However, for each
bottom tile, we read their values directly from the shared mem-
ory. Therefore, replicated ray casting and interpolation operations
at shared pixels are avoided.
Evidently, our algorithm works based on the premise that each
processor has approximately an equal amount of work to do. To
guarantee and test that no computation on the shared pixels is
missed by any of the processors, we set an "alarm" signal s for
each shared pixel at the top tiles, with initial values 1. Once a
shared pixel has been evaluated, its signal s is set to 0. If a processor
reaches a shared pixel with value 1 at its bottom tiles, the
alarm sounds to notify the user and stop the rendering. Our experimental
results have shown that our algorithm works well, and no
alarm has sounded so far.
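The alarm mechanism can be sketched as follows; the array bound and names are placeholders, with one signal byte per shared pixel.

#include <atomic>

enum { MAX_SHARED = 1 << 20 };              // placeholder bound
std::atomic<unsigned char> s[MAX_SHARED];   // alarm signals for shared pixels

void registerShared(int i) { s[i].store(1); }   // set before rendering starts
void markEvaluated(int i)  { s[i].store(0); }   // producer clears the signal

// Bottom-tile read of a shared pixel; returns false to sound the alarm
// if the pixel has not yet been computed by the other processor (sketch).
bool readSharedPixel(int i, const float* pixels, float* out) {
    if (s[i].load() != 0) return false;
    *out = pixels[i];
    return true;
}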
It is possible that when more processors are used, we will some
day hear the alarm. When this happens, we would like to employ
a dynamic task stealing based on the above contiguous image partitioning
scheme. The dynamic scheme would produce better load
balancing, but also increase the synchronization overhead and implementation
complexity. We should realize, however, that when
more processors are available, the trend will be to render much
larger volume data sets with larger images. Then, the number of
processors would be still significantly lower than the number of
pixels, and thus our algorithm would remain competitive.
4 Experimental Results
Our algorithm has been implemented on an SGI Power Challenge
with 16 processors. The performance results on classified
volume data sets are given in Tables 1 and 2. The brain data set in
Table 1 is a 256 × 256 × 124 MRI scan of a human brain (Figure 3).
The head data set in Table 2 is a 256 × 256 × 225 CT scan of
a human head (Figure 4). Rendering times include both the boundary
cell projection time and the subsequent ray traversal time, but
not the off-line preprocessing time for boundary cell detection and
run-length encoding. Preprocessing times are respectively 9.9 seconds
and 18.3 seconds on a single processor for these two data sets.
We would like to point out that when the projection procedure is
parallelized on a multiprocessor, extra time is needed to combine
all partial buffers generated by the different processors. However,
our experiments have shown that combination times with buffer size
256 × 256 are negligible, less than our minimum measurable time
(0.01 seconds).
In our preprocessing stage, we merged every eight neighboring
grid cells into one macrocell to reduce the amount of boundary
cells. Then, in the boundary cell projection procedure, we used
low level-of-accuracy projection templates and run-length encoding
data structure for both parallel and perspective projections. In
the subsequent ray traversal procedure, we performed resampling
(using trilinear interpolation), shading (using Phong model with
one light source), and compositing within each bounded ray interval
through the original volume data. The resultant images contain
256 × 256 pixels. We selected an early-ray-termination opacity
cutoff of 95%. Ray traversal times with both adaptive and nonadaptive
(normal) image sampling were measured. In adaptive image
sampling, we used square image tiles of 3 × 3 pixels along with a
minimum color difference of 25, measured as Euclidean distance in
RGB (256 × 256 × 256) space. The fastest rendering rates for both
data sets were above 20 Hz, among the fastest reported.
Table 1: Volume rendering times (in sec) for a 256 × 256 × 124 MRI brain data set.

Processors                1      4      8      12     16
Parallel Projection       0.16   0.04   0.02   0.01   0.01
Perspective Projection    0.49   0.12   0.06   0.04   0.03
Nonadaptive Traversal     1.32   0.34   0.17   0.12   0.09
Adaptive Ray Traversal    0.53   0.14   0.07   0.05   0.03
Best Frame Rate (Hz)      1.4    5.5    11.1   16.6   25.0
Table 2: Volume rendering times (in sec) for a 256 × 256 × 225 CT head data set.

Processors                1      4      8      12     16
Parallel Projection       0.20   0.05   0.02   0.01   0.01
Perspective Projection    0.86   0.22   0.11   0.07   0.05
Nonadaptive Traversal     0.51   0.14   0.07   0.05   0.04
Adaptive Ray Traversal    0.27   0.07   0.04   0.03   0.02
Best Frame Rate (Hz)      2.1    8.3    16.6   25.0   33.3
Our experimental results have shown that the perspective boundary
cell projection times are about three to five times longer than
parallel boundary cell projection times, depending on the number of
boundary cells to be projected. However, subsequent ray traversal
times for both perspective and parallel views of the same volumetric
object are very close when the projected object has similar sizes
on the projection plane. Therefore, the resultant perspective rendering
times (including both projection and ray traversal times) are
less than three times longer than corresponding parallel rendering
times.
Figure 5 shows the speedup curves for both nonadaptive and
adaptive renderings (including the boundary cell projection time)
on the MRI brain data set with parallel projection. The speedup results
on the CT head data set are similar. There are two observations
from these speedup curves. First, our parallel program scales well
on a multiprocessor. These near linear speedups are ascribed to our
effective contiguous object- and image-based partitioning schemes,
which lead to both spatial locality and good load balancing. In our
boundary cell projection procedure, the computation work assigned
to each processor is a subvolume of contiguous run-length encoded
scanlines of boundary cells, and therefore provides good spatial lo-
cality. With such good spatial locality, we can effectively make
use of the prefetching effect of long cache lines on the Challenge,
which helps to mask the latency of main memory accesses. In fact,
the two medical data sets in Tables 1 and 2 have significant coher-
ence. With the opacity transfer functions we used, 3.1% and 3.2%
of the grid cells in the MRI and CT data sets are boundary cells. Ac-
cordingly, the run-length encodings of the boundary cells are very
small compared to the original volume. When such short run-length
encodings are split and assigned to the multiprocessors, they can be
easily fixed inside the local caches of these processors with minimal
cache misses. Evidently, our ray traversal procedure also benefits
from spatial locality provided by our contiguous image-based par-
titioning, since adjacent rays access data from the same cache line.
Figure 5: Speedups of rendering the MRI brain data set on the Challenge (number of processors vs. speedup, for nonadaptive and adaptive rendering).
The second observation is that the speedups for adaptive rendering
are nearly as good as those for nonadaptive rendering. Unlike
the results reported by Nieh and Levoy [1] - where adaptive rendering
always exhibits worse speedups than nonadaptive rendering,
due to extra memory and synchronization overhead - our parallel
algorithm shows more efficient adaptive rendering. In their algo-
rithm, memory overhead is larger for adaptive rendering because
access to additional shared writable data structures such as the local
wait queue is not needed in nonadaptive rendering. Additional
synchronization time is also required for the adaptive case, due to
the waiting for all processors to complete ray casting before non
ray-casting pixels are interpolated from ray-casting pixels. How-
ever, in our algorithm, neither the additional shared writable data
structure nor additional synchronization time is needed for adaptive
rendering. This is because the cost of replicated ray casting is
avoided by our load balancing image partitioning scheme without
dynamic task redistribution.
To show the performance of our load balancing schemes, we collected
the times of both parallel projection and subsequent nonadap-
tive and adaptive ray traversal procedures on each processor during
the rendering of the MRI brain data set using twelve processors.

Table 3: Computation distribution (in sec) of the MRI brain data set on 12 processors.

Procedures    Projection    Nonadaptive RT    Adaptive RT
Variation     0.00          0.01              0.00

Table 3 shows that the variations in rendering times among processors
for adaptive and nonadaptive ray traversal are respectively zero and
0.01 seconds. A good load balancing was also reached during the
boundary cell projection with no measurable variation in projection
times among processors. Note that we present load balancing performance
on 12 rather than 16 processors of our Challenge. This
is because when more processors are used, the projection times are
too short (often less than 0.01 seconds) for the purposes of compar-
ison. Also, in Tables 4-6, we present the rendering rates for several
data sets for up to 12 processors (Proc#).
Table 4: Volume rendering times (in sec) for a positive potential of a high potential iron protein data set. (Proj: projection time; Ray: ray traversal time; L: overview with lower opacity threshold 10; H: interior with higher opacity threshold 120.)

         Interactive Classification          Fixed Classification
Proc#    Proj(L)   Ray(L)   Ray(H)           Proj(H)   Ray(H)
In Table 4, a commonly used 66^3 voxel positive potential of a
high potential iron protein was rendered by using modified opacity
transfer functions with different opacity thresholds between frames.
The slow preprocessing stage for run-length encoding was avoided
in our algorithm, provided that new opacity thresholds were never
less than the initially specified opacity threshold. We set the initial
opacity threshold to 10 (Figure 6a), and the modified threshold
to 120 (Figure 6b). The ray traversal times did not increase with
the modification. Projection times did not change since we did not
change the view. Therefore, interactive rendering rates were maintained
during rendering with interactive classification. Similar results
are shown in Table 5, where a 320 \Theta 320 \Theta 34 CT scan of
a lobster was rendered with interactive classification. The initial
opacity threshold was set to 30 to display the semi-transparent shell
7a). The new opacity threshold was set to 90 to display the
meat without the shell (Figure 7b).
Table 5: Volume rendering times (in sec) for a lobster data set. (Proj: projection time; Ray: ray traversal time; L: shell with low opacity threshold 30; H: meat with high opacity threshold 90.)

         Interactive Classification          Fixed Classification
Proc#    Proj(L)   Ray(L)   Ray(H)           Proj(H)   Ray(H)
For comparison purposes, we also rendered these two data sets
with fixed classification. For each data set, we first recreated the
run-length encoding data structure with the new boundary cells according
to the modified opacity threshold. Then, we rendered the
data set at the same view by using the modified transfer function
and the new run-length encoding data structure. Tables 4 and 5
show that rendering times with interactive classification are almost
twice as long as those with fixed classification. We also measured
the different number of boundary cells with different opacity thresh-
olds, and found that the number of boundary cells corresponding to
the modified opacity thresholds was about two thirds of that corresponding
to the initial opacity thresholds for these two data sets.
It follows that the performance penalty in both rendering rate and
memory space is moderate for interactive classification.
Table 6: Volume rendering times (in sec) for a voxelized F15 aircraft data set using different kinds of multiresolution volumes for object boundary estimation. (Para: parallel projection time; Pers: perspective projection time; Ray: ray traversal time.)

         Shrunken Volume               Low LOD Volume
Proc#    Para    Pers    Ray           Para    Pers    Ray
4        0.04    0.11    0.06          0.02    0.04    0.04
In order to further speedup the boundary cell projection time for
larger data sets, we used a lower resolution volume during the procedures
of boundary cell detection, run-length encoding, and pro-
jection. We still used the original high resolution volume for accurate
rendering. Such a lower resolution volume can be either a
shrunken volume generated from the original volume by merging
every m^3 neighboring cells into a macrocell, or a low LOD volume.
We conducted some experiments on a rendering of a 186 × 256 × 76
voxelized F15 aircraft data set (Figure 8a). We separately used two
run-length encodings created from a shrunken volume (m=2) and a
low LOD volume. The low LOD volume had 93 × 128 × 41 voxels
(see Figure 8b). The object modeling algorithm [14] we used guaranteed
that the shape of the aircraft in the low LOD volume was not
"thinner" than that in the original high resolution volume (as shown
in Figure 8). Table 6 shows that both projection and ray traversal
times from using run-length encoding of the low LOD volume are
faster than those of the shrunken volume. We discovered that even
though fewer boundary cells were contained in the low LOD vol-
ume, they led to a more accurate object boundary estimation, and,
therefore, more time savings in both projection and ray traversal
procedures. Also note that although we have employed a lighting
model and 3D texture mappings (both implemented in software during
the rendering time), the ray traversal speeds are very fast. This
is because the binary classification of the aircraft decreased the rendering
complexity to nearly O(n^2), for an image size of n^2.
5 Comparison to Shear-Warp
The shear-warp factorization technique [5] is another fast volume
rendering method, which has several similarities to our algorithm.
The comparison between these two is helpful in evaluating ours.
First, both methods are high speed volume rendering algorithms
without graphics hardware acceleration. Their high performances
are reached by combining the advantages of image- and object-order
approaches, and are therefore scalable. Rendering rates as
fast as 10-30Hz are reported for both methods to render the same
volume data set on the 16-processor SGI Challenge. While
our method inherits a high image quality from accurate ray casting,
the shear-warp method suffers from some image quality problems
due to its two-pass resampling and the 2D rather than 3D interpolation
filter (as reported in [5]).
Second, the theoretical fundamentals of both methods are directly
or indirectly based on the normal ray casting algorithm [13].
Our method directly speeds up the ray casting algorithm by efficiently
skipping over empty space outside the classified object without
affecting image quality. Thus, existing ray casting optimiza-
tions, such as early ray termination and adaptive image sampling,
can be conveniently incorporated into our algorithm. The shear-warp
method can also be viewed as a special form of ray casting,
where sheared "rays" are cast from voxels in the principal face of
the volume. Bilinear rather than trilinear interpolation operations
are used on each voxel slice to resample volume data (which shortens
rendering time, and also reduces image quality). The effect of
ray termination is also achieved.
Third, both methods employ the scanline-based run-length encoding
data structure to encode spatial coherence in the volume for
high data compression and low access time. In our algorithm, the
small number of boundary cells compared to the volume size leads
to minimal extra memory space for run-length encoding. Obvi-
ously, we still need the original volume during the ray traversal
procedure. In the shear-warp method, although three encoded volumes
are required along the three volume axes, the total memory
occupation is reported to be much smaller than the original volume.
Fourth, both methods support interactive classification, with similar
moderate performance penalties. In our method, interactive
classification performs without extra programming efforts, providing
the modified opacity threshold is never less than the initial opacity
threshold. In the shear-warp method, a more sophisticated solution
is presented with some other restrictions.
Fifth, both methods are parallelized on shared memory multi-processors
and show good load balancing. A dynamic interleaved
partitioning scheme is employed in the parallel shear-warp algorithm
[3], while static contiguous partitioning schemes are used in
our method. Both methods exploit spatial locality in the run-length
encoding data structure. In general, our contiguous partitioning of
the volume provides higher spatial locality than interleaved parti-
tioning. Also, compared to dynamic partitioning, our static scheme
is more economical due to a simplified controlling mechanism and
lower synchronization overhead.
We have compared the performance of parallelized shear-warp
algorithm reported by Lacroute [3] with our experimental results,
for the same 256 × 256 × 225 voxel CT head data set, achieved on
the Challenge with 16 processors. The fastest shear-warp rendering
rate is 13 Hz for a 256 × 256 grey scale image with parallel pro-
jection. The rendering time doubles for a color image because of
additional resampling for the two extra color channels. We reached
a rendering rate of 20 Hz (or 33 Hz, when adopting adaptive image
sampling) for color images of the same size, as shown in Figure 4.
6 Conclusions
We have proposed an interactive parallel volume rendering algorithm
without using graphics hardware accelerators. It is capable of
rendering over 20 frames per second on a 16-processor SGI Power
Challenge for 256^3 volume data sets. We achieved these speeds by
using an accelerated ray casting algorithm with effective space leaping
and other available optimizations, and contiguous task partitioning
schemes which take full advantage of optimizations in the serial
algorithm with high load balancing and low synchronization over-
head. When compared with the shear-warp approach, our method
has shown both faster rendering speed and higher image quality.
Following the encouraging experimental results, we are currently
investigating interactive ray casting for very large data sets with our
algorithm. Run-length encoding of lower levels-of-detail volume
data are being studied to create a near accurate object boundary
estimation with much fewer boundary cells. The original full resolution
data set will be utilized during ray traversal procedure for
high-quality ray casting images.
Acknowledgments
This work has been partially supported by NASA grant NCC25231,
NSF grant MIP9527694, ONR grant N000149710402, NRL grant
N00014961G015, and NIH grant CA79180. Thanks to Huamin Qu,
Kevin Kreeger, Lichan Hong, and Shigeru Muraki for their constructive
suggestions and to Kathleen McConnell for comments.
Special thanks to Milos Sramek for providing the multiresolution
volumes of the F15 aircraft. The MRI data set is courtesy of the
Electrotechnical Laboratory (ETL), Japan.
--R
"Volume Rendering on Scalable Shared-Memory MIMD Architectures"
"Parallel Performance Measures for Volume Ray Casting"
"Real-Time Volume Rendering on Shared Memory Multiprocessors Using the Shear-Warp Factorization"
"Interac- tive Ray Tracing for Isosurface Rendering"
"Fast Volume Rendering using a Shear-warp Factorization of the Viewing Transformation"
"Efficient Ray Tracing of Volume Data"
"Applying Space Subdivision Techniques to Volume Rendering"
"Boundary Cell-Based Acceleration for Volume Ray Casting"
"Towards a Comprehensive Volume Visualization System"
"Rendering Volumetric Data Using the STICK Representation Scheme"
"Vol- ume Rendering Based Interactive Navigation within the Human Colon"
"Volume Rendering by Adaptive Refinement"
"Display of Surface from Volume Data"
"Object Voxelization by Filtering"
--TR
Display of Surfaces from Volume Data
Efficient ray tracing of volume data
Volume rendering by adaptive refinement
Rendering volumetric data using STICKS representation scheme
Volume rendering on scalable shared-memory MIMD architectures
Fast volume rendering using a shear-warp factorization of the viewing transformation
Real-time volume rendering on shared memory multiprocessors using the shear-warp factorization
Object voxelization by filtering
Interactive ray tracing for isosurface rendering
Volume rendering based interactive navigation within the human colon (case study)
Applying space subdivision techniques to volume rendering
Towards a comprehensive volume visualization system
Parallel performance measures for volume ray casting
--CTR
Lukas Mroz , Rainer Wegenkittl , Eduard Gröller, Mastering interactive surface rendering for Java-based diagnostic applications, Proceedings of the conference on Visualization '00, p.437-440, October 2000, Salt Lake City, Utah, United States
Anna Vilanova , Bálint Hegedüs , Eduard M. Gröller , Daniel Wagner , Rainer Wegenkittl , Martin C. Freund, Mastering interactive virtual Bronchioscopy on a Low-end PC, Proceedings of the conference on Visualization '00, p.461-464, October 2000, Salt Lake City, Utah, United States
Ming Wan , Qingyu Tang , Arie Kaufman , Zhengrong Liang , Mark Wax, Volume rendering based interactive navigation within the human colon (case study), Proceedings of the conference on Visualization '99: celebrating ten years, p.397-400, October 1999, San Francisco, California, United States
Gunter Knittel, The ULTRAVIS system, Proceedings of the 2000 IEEE symposium on Volume visualization, p.71-79, October 09-10, 2000, Salt Lake City, Utah, United States
Benjamin Mora , Jean-Pierre Jessel , René Caubet, Accelerating volume rendering with quantized voxels, Proceedings of the 2000 IEEE symposium on Volume visualization, p.63-70, October 09-10, 2000, Salt Lake City, Utah, United States
Feng Dong , Gordon J. Clapworthy , Mel Krokos, Volume rendering of fine details within medical data, Proceedings of the conference on Visualization '01, October 21-26, 2001, San Diego, California
Ming Wan , Aamir Sadiq , Arie Kaufman, Fast and reliable space leaping for interactive volume rendering, Proceedings of the conference on Visualization '02, October 27-November 01, 2002, Boston, Massachusetts
Ming Wan , Nan Zhang , Huamin Qu , Arie E. Kaufman, Interactive Stereoscopic Rendering of Volumetric Environments, IEEE Transactions on Visualization and Computer Graphics, v.10 n.1, p.15-28, January 2004
Rüdiger Westermann , Bernd Sevenich, Accelerated volume ray-casting using texture mapping, Proceedings of the conference on Visualization '01, October 21-26, 2001, San Diego, California
Benjamin Mora , Jean-Pierre Jessel , René Caubet, A new object-order ray-casting algorithm, Proceedings of the conference on Visualization '02, October 27-November 01, 2002, Boston, Massachusetts | run-length encoding;projection template;presence acceleration;parallel processing;volume rendering;multiresolution volumes;interactive classification
319725 | Anonymous authentication with subset queries (extended abstract). | We develop new schemes for anonymous authentication that support identity escrow. Our protocols also allow a prover to demonstrate membership in an arbitrary subset of users; key revocation is an important special case of this feature. Using the Fiat-Shamir heuristic, our interactive authentication protocols yield new constructions for non-interactive group signature schemes. We use the higher-residuosity assumption, which leads to greater efficiency and more natural security proofs than previous constructions. It also leads to an increased vulnerability to collusion attacks, although countermeasures are available. | Introduction
Consider an office building where each employee is given
a smartcard for opening the front door to the building.
Employees are often concerned that their movements in
and out of the building are being recorded. Consequently
it is desirable that the authentication protocol carried out
between the smartcard (prover) and the door lock (verifier)
does not identify the employee. This is the basic
problem of Anonymous Authentication: a user wishes to
prove that she is a member of an authorized group (e.g.
employees that are allowed to enter the building), but
does not want to reveal her identity.
The simplest solution to anonymous authentication is
to give all employees a copy of the same secret key. This
way the door lock has no information as to which employee
it is authenticating. But then if a crime is committed
inside the building there is no authorized identity
escrow agent that can undo the anonymity and determine
who was present in the building at the time. More seri-
ously, there is no easy way to revoke a user's key without
reissuing keys to all participants.
In this work, we develop new schemes for anonymous
authentication that support identity escrow and key re-
vocation. The identification transcript by itself reveals
nothing further about the prover's identity. The transcript
together with an "escrow" key reveals the prover's
identity completely. The security of our schemes rests
on the higher-residuosity assumption, first considered by
Cohen and Fisher [14, 3], as well as on the difficulty of
computing modular roots. The use of higher-residuosity
leads to increased efficiency and more natural security
proofs than previous constructions. It also increases vulnerability
to an attack by colluding provers to mask their
identities from the escrow agent, although countermeasures
are available.
We also extend our schemes to allow a prover to
demonstrate membership in an arbitrary subset of users,
anonymously and with identity escrow. In our office building
scenario, each door might have a different list of authorized
employees. To gain access, an employee proves
to a door that she is on its list. Revocation of a user's key
is an important special case of this feature: the subset
query is simply revised to exclude everyone on a revocation
hotlist.
Using the Fiat-Shamir heuristic [18], our interactive
authentication protocols yield non-interactive group signature
schemes of great efficiency. A group signature on
a certain message can be verified by anyone, but the signature
reveals no information as to which member of the
group generated it.
1.1 Related work
Group signatures [11, 12, 8, 24, 2] were first described
by Chaum and van Heyst. Recently, Camenisch and
Stadler [10] present a solution that is much more efficient
than previous solutions, although it relies on somewhat
unusual security assumptions. In particular, given
an RSA key ⟨e, N⟩ and an element a ∈ Z*_N of large multiplicative
order, it must be infeasible to produce a triple of values
satisfying a certain exponential relation involving a
modulo N. The heuristic nature of these security assumptions
is underscored by recent vulnerabilities found by
Ateniese and Tsudik [2]. Kilian and Petrank [22] present
anonymous authentication schemes with identity escrow,
based on similar security assumptions.
Anonymous authentication with subset queries, but
without identity escrow, follows as an application of
proofs of partial knowledge [15, 17, 16]. There are a
number of proposals for payment schemes with revocable
anonymity, beginning with Brickell, Gemmell and
Kravitz [7].
1.2 Terminology and Definitions
We give some terminology and definitions. For a more
formal treatment of an identity escrow model, we refer the
reader to Kilian and Petrank [22]. In our authentication
schemes, there are "users" who authenticate themselves
to "verifiers". There is an "issuer" who gives a secret key
to each new user when the user is added to the system.
There is an "escrow agent" who examines the transcript
of an authentication protocol to determine the identity
of the user. A "subset query" enables the user to prove
membership in some subset of the user population.
We say that an authentication protocol is "sound" if
the verifier rejects with overwhelming probability when
the prover is not a legitimate user. A subset query protocol
is sound if the verifier rejects with overwhelming
probability when the prover is not in the designated subset
of users. We say that a subset query protocol is k-resilient
against an "outsider attack" if no coalition of k
users outside the designated subset can fool the verifier.
An authentication protocol is "anonymous" if no information
about the prover's identity is revealed, other
than that the prover is a legitimate user (or, for a subset
query protocol, that the prover is in a designated subset).
This includes inferences that might be drawn from multiple
executions of the protocol ("unlinkability"). Note that
anonymity implies that no information about the user's
secret is revealed to the verifier.
An authentication protocol has "identity recovery" if
the escrow agent can determine the identity of the prover
from a transcript of the protocol together with some trapdoor
information. We say that a protocol is k-resilient
against a "masking attack" if no coalition of k users can
cause the verifier to accept while the escrow agent is unable
to determine any of their identities.
1.3 Summary of Results
In Section 2, we present our basic scheme for anonymous
authentication. It is sound if it is hard to extract
roots modulo a composite of unknown factorization. It is
anonymous if it is hard to distinguish high-order residues
from non-residues. It is only 1-resilient against a masking
attack. The scheme can be made k-resilient as discussed
below. In Section 2.3, an instantiation is given in which
the entire communication complexity is only five RSA-sized
values sent over three rounds.
In Section 3, we present a scheme for anonymous authentication
with arbitrary subset queries. Soundness and
anonymity follow from the same hardness assumption as
the basic scheme. The communication complexity is unchanged,
although the work performed by prover and verifier
is proportional to the size of the query. It is only 1-resilient
against either a masking attack or an outsider
attack. It can be made k-resilient against an outsider attack
with an increase of a factor of O(m^k) in the work of
the verifier and prover, where m is the total number of
users. Resilience against an outsider attack of any size is
possible at a work increase of O(a), by limiting queries to
a predetermined collection of a subsets.
In Section 4, we show how to achieve k-resilience
against a masking attack for all of our schemes by applying
ideas from collusion-secure fingerprinting codes [6].
This countermeasure is not particularly efficient. It is an
open problem to design more efficient countermeasures
that are provably secure under natural cryptographic assumptions.
In Section 5, we give a summary and open
problems.
2 The basic scheme
We present an efficient scheme that provides anonymous
authentication with identity escrow. The scheme is 1-resilient
against a masking attack. Let m be the total
number of users to be authenticated. Let ℓ > m be the
smallest prime larger than m. Our scheme is built on
top of any proof of knowledge of the ℓ-th root of a
number modulo N = pq. For example, the scheme can be
built on top of Guillou-Quisquater authentication [20].
Initialization: To initialize the system the issuer performs
the following steps:
1. It generates an n-bit RSA modulus N = pq such
that ℓ divides both p − 1 and q − 1 but ℓ^2 divides
neither. The factors p and q are kept secret.
2. It picks a random t ∈ Z*_N and sets T = t^ℓ mod N.
In addition the issuer sets ζ ∈ Z*_N to be some
ℓ-th root of unity such that ζ ≠ 1 mod p and
ζ ≠ 1 mod q.
The values N, T, ℓ are made public. The values t, ζ
are kept secret.
Issuing a key: To issue a key to user number i (recall
that 1 ≤ i ≤ m < ℓ) the issuer gives user i the secret key
σ_i = t · ζ^i mod N. Note that σ_i is some ℓ-th root of T.
Proving identity: When user i wishes to authenticate
itself to a verifier (e.g. a door lock) it executes the
protocol below. The protocol simultaneously checks
two things: (1) the user knows an ℓ-th root of T, and
(2) the blinding factor r^ℓ used during the protocol is
an ℓ-th residue modulo N.
Step 1: The user picks a random r ∈ Z*_N. It computes
u = r^(ℓ^2) mod N and y = σ_i · r^ℓ mod N.
It sends (u, y) to the verifier.
Step 2: The verifier checks that y^ℓ ≡ T · u (mod N),
and rejects if not.
Step 3: The user proves in zero-knowledge that u
is an ℓ^2-th residue modulo N. He does so by
proving knowledge of an ℓ^2-th root of u. Any
of a number of protocols can be used for this
purpose [21, 20].
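To make the algebra concrete, the following minimal Java sketch shows the two prover messages and the Step 2 check, assuming the reconstruction above (T = t^ℓ mod N and σ_i = t · ζ^i mod N). Parameter generation, checking that r is invertible, and the Step 3 proof of knowledge are omitted; all names here are illustrative, not the authors' code.

import java.math.BigInteger;
import java.security.SecureRandom;

public class BasicScheme {
    static final SecureRandom rnd = new SecureRandom();

    // Step 1: the user blinds her key sigma with a random r.
    static BigInteger[] proverMessage(BigInteger sigma, BigInteger l, BigInteger N) {
        BigInteger r = new BigInteger(N.bitLength(), rnd).mod(N);  // sketch: should lie in Z*_N
        BigInteger u = r.modPow(l.multiply(l), N);                 // u = r^(l^2) mod N
        BigInteger y = sigma.multiply(r.modPow(l, N)).mod(N);      // y = sigma * r^l mod N
        return new BigInteger[] { u, y };
    }

    // Step 2: the verifier checks y^l == T * u (mod N).
    static boolean verifierCheck(BigInteger u, BigInteger y,
                                 BigInteger T, BigInteger l, BigInteger N) {
        return y.modPow(l, N).equals(T.multiply(u).mod(N));
    }
}

The check succeeds for an honest prover because y^ℓ = σ_i^ℓ · r^(ℓ^2) = T · u mod N.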
The communication complexity of this protocol is logarithmic
in the number of users. This is optimal from an
information-theoretic point of view, since otherwise the
transcript does not contain enough information for the
escrow agent to recover the user's identity. In Section 2.3
we show an instantiation of this protocol with just three
rounds of communication and seven modular exponentiations.
2.1 Proof of Security
We show that the identification protocol is sound, reveals
no information about the user's secret, and reveals no
information about the user's identity.
Lemma 2.1: Let P be a prover for which the authentication
protocol succeeds with probability at least ε. Then there
exists a polynomial time (in n and 1/ε) extractor that extracts
an ℓ-th root of T from P. Consequently, any prover
for which the authentication protocol succeeds knows an
ℓ-th root of T.
Proof: In Step 3 we use a proof of knowledge of an
ℓ^2-th root of u. Therefore, there exists a polynomial
time (in n and 1/ε) extractor that, when interacting
with P, extracts an ℓ^2-th root of u. Let r be the extracted
root. Since the verifier accepts the interaction
with P we know that y^ℓ ≡ T · u (mod N).
Therefore y^ℓ ≡ T · r^(ℓ^2) (mod N), and hence y/r^ℓ mod N
is an ℓ-th root of T. The extractor outputs y/r^ℓ mod N.
To prove the protocol is zero knowledge we show a simulator.
Let Z^(2)(n) denote the set of n-bit RSA moduli
N = pq generated as in the initialization step.
The correctness of the simulation relies on a standard
assumption that for N ∈ Z^(2)(n) the set of ℓ-th residues
in Z*_N is indistinguishable from all of Z*_N.
Lemma 2.2: Assuming indistinguishability of ℓ-th
residues for random N ∈ Z^(2)(n), the identification
protocol can be simulated by a polynomial time simulator
S.
Proof: Let V be some verifier. We show how to simulate
the interaction of the prover with V. First the simulator
S picks a random y ∈ Z*_N. It computes u = y^ℓ / T mod N
and outputs (u, y) as the first part of the simulation. Since
in Step 3 we use a zero-knowledge proof of ℓ^2-th residuosity,
there exists a simulator S' that takes u, N and V and
generates a transcript of Step 3 indistinguishable from a
real transcript. We show that by concatenating (u, y) and
the output of S' we get a transcript indistinguishable from
a real transcript.
Suppose a distinguisher D exists. We construct a
distinguisher D' that distinguishes a random ℓ-th residue
modulo N from a random element of Z*_N. This will contradict the
assumption. On input x ∈ Z*_N algorithm D' works as
follows: First it picks a random t ∈ Z*_N and sets
T = t^ℓ mod N. It then computes y = t · x mod N and
u = y^ℓ / T mod N. It runs S' on
(u, N, V). Finally it runs D on the concatenation of
(u, y) and the output of S' and outputs the same answer
as D. When x is an ℓ-th residue modulo N we give D
the same distribution as in the real interaction. When x
is random in Z*_N we give D the simulated distribution.
Hence, by definition of D, algorithm D' has the
required properties.
The simulation shows that no information about the
user's identity is exposed during the protocol. The simulation
also proves unlinkability: the verifier cannot determine
whether the same user interacts with the verifier
multiple times.
We note that our hardness assumption is slightly different
from the one introduced by Cohen and Fischer [14].
In their setting, q − 1 is not a multiple of ℓ (while p − 1 is
still a multiple of ℓ but not of ℓ^2). This would be quite dangerous
for our scheme, because then ζ ≡ 1 (mod q), so two colluding
users i and j could compute gcd(σ_i/σ_j − 1, N) = q and
factor N. In our setting ℓ divides both p − 1 and q − 1.
2.2 Recovery by Escrow Agent
We now show how an escrow agent can recover the user's
identity given the transcript of our identification protocol.
In the simplest version of the scheme the escrow agent is
given p, t, and ζ mod p as its secrets. It recovers the user's
identity from the value y sent by the user during the identification
protocol.
For an honest user i we know that y ≡ t · ζ^i · r^ℓ (mod N) for some
unknown r. Since ℓ divides p − 1 but ℓ^2 does not, raising to the
power (p − 1)/ℓ modulo p eliminates the blinding factor r^ℓ. Indeed,
since ζ ≠ 1 mod p it follows
that ζ^((p−1)/ℓ) mod p has order ℓ. Therefore, there exists a unique
i, 0 ≤ i < ℓ, such that
(y/t)^((p−1)/ℓ) ≡ (ζ^((p−1)/ℓ))^i (mod p).   (1)
To recover the user's
identity the escrow agent tries all i = 1, ..., m until an i
is found satisfying condition (1). By using a "baby-step,
giant-step" trick, the work of the escrow agent can be
reduced to O(sqrt(ℓ) · log ℓ) with O(sqrt(ℓ) · log ℓ) precomputation
and O(sqrt(ℓ))
storage.
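A Java sketch of the linear-scan version of this recovery follows, assuming the reconstruction above (the agent holds p, t, and ζ mod p). The baby-step, giant-step refinement is omitted, and the names are illustrative.

import java.math.BigInteger;

public class EscrowRecovery {
    // Search for the unique i with (y/t)^((p-1)/l) == (zeta^((p-1)/l))^i (mod p).
    static int recoverIdentity(BigInteger y, BigInteger t, BigInteger zetaModP,
                               BigInteger p, BigInteger l, int m) {
        BigInteger e = p.subtract(BigInteger.ONE).divide(l);        // (p-1)/l
        BigInteger target = y.multiply(t.modInverse(p)).modPow(e, p);
        BigInteger base = zetaModP.modPow(e, p);                    // element of order l
        BigInteger acc = BigInteger.ONE;
        for (int i = 1; i <= m; i++) {
            acc = acc.multiply(base).mod(p);                        // acc = base^i mod p
            if (acc.equals(target)) return i;                       // user i identified
        }
        return -1; // no honest user matches: evidence of a masking attack
    }
}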
The proof that a single malicious user cannot hide its
identity from the escrow agent depends on a standard
assumption known as the ℓ-th root of unity assumption.
Namely, for a prime ℓ > 2, no polynomial time algorithm
can find (with non-negligible probability) a non-trivial
ℓ-th root of unity in Z*_N for a random N ∈ Z^(2)(n).
Lemma 2.3: Suppose user i can employ a polynomial
time adversarial prover P so that (1) the verifier accepts
the interaction with probability at least ε, and (2) the escrow
agent fails to recover the user's identity from the
transcript with probability at least ε. Then the ℓ-th root of
unity assumption is false.
Proof: Pick a random N ∈ Z^(2)(n). We show how P
can be used to find an ℓ-th root of unity modulo N
with probability at least ε. Pick a random t ∈ Z*_N,
compute T = t^ℓ mod N, and set the secret key
σ_i of user i to be t. Then given N, T and σ_i, prover P
will succeed in masking i's identity with probability
ε. Run the extractor of Lemma 2.1 on prover P. Let
t' be the result. We know t' is an ℓ-th root of T.
One can show that with probability at least ε we have
t' ≠ t, so that t'/t is a non-trivial ℓ-th root of unity.
A coalition of two or more users can create a new key
that cannot be traced by the escrow agent. Indeed, if σ_i
and σ_j are keys belonging to two users then σ = σ_i^a · σ_j^b mod
N is a valid key whenever a + b ≡ 1 (mod ℓ). Furthermore, σ does
not reveal the identity of either user. This attack will
frame an innocent user if the index ia + jb mod ℓ happens to be
assigned to a real user. If the indices are issued secretly from a sparse
subset of size m << ℓ, then the probability that a masking
attack will frame an innocent user can be greatly reduced.
We discuss how to make our scheme k-resilient against a
masking attack in Section 4.
2.3 An instantiation using Guillou-Quisquater
We describe an efficient instantiation of our basic authentication
protocol.
Step 1: The user picks random r, s ∈ Z*_N, and sends
(u = r^(ℓ^2) mod N, y = σ_i · r^ℓ mod N, v = s^(ℓ^2) mod N) to the verifier.
Step 2: The verifier sends back a random c ∈ Z_N.
Step 3: The user responds with z = s · r^c mod N.
Step 4: The verifier accepts the authentication if
y^ℓ ≡ T · u (mod N) and z^(ℓ^2) ≡ u^c · v (mod N).
The protocol takes only three rounds of communication
and is efficient in computation. The proof of security
follows from the proof of the Guillou-Quisquater protocol
and Lemmas 2.1 and 2.2. Recall that the GQ protocol
is only provably secure against a passive verifier (i.e. a
verifier that properly follows the protocol and then tries
to gain some information). Consequently our instantiated
protocol is provably secure in the same model. As always,
this is the price one has to pay for efficiency [25].
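The verifier's side of this reconstructed instantiation fits in a few lines of Java. The pairing of the two checks in Step 4, and the challenge space, are assumptions of the reconstruction above.

import java.math.BigInteger;

public class TranscriptCheck {
    // Accept iff y^l == T*u (mod N) and z^(l^2) == u^c * v (mod N).
    static boolean acceptTranscript(BigInteger u, BigInteger y, BigInteger v,
                                    BigInteger c, BigInteger z,
                                    BigInteger T, BigInteger l, BigInteger N) {
        BigInteger l2 = l.multiply(l);
        boolean keyOk = y.modPow(l, N).equals(T.multiply(u).mod(N));
        boolean gqOk  = z.modPow(l2, N).equals(u.modPow(c, N).multiply(v).mod(N));
        return keyOk && gqOk;
    }
}

For an honest prover, z^(ℓ^2) = s^(ℓ^2) · r^(c·ℓ^2) = v · u^c mod N, so both checks pass.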
2.4 Group Signature
Using the Fiat-Shamir heuristic [18], our scheme can give
an anonymous signature algorithm with identity escrow
(i.e., a "group signature" as introduced by Chaum and
van Heyst [11]). For example, consider the Guillou-Quisquater
instantiation from the previous subsection.
Let H be a cryptographically strong hash function. A
group signature of message m is (u, y, v, z), where
u = r^(ℓ^2) mod N, y = σ_i · r^ℓ mod N, v = s^(ℓ^2) mod N,
c = H(m, u, y, v), and z = s · r^c mod N. This technique can be applied whenever
the proof of knowledge from step 3 of the authentication
protocol is a public-coin protocol.
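A small Java sketch of the challenge derivation follows. SHA-256 and the hash-to-integer step are illustrative choices for H, not the paper's specification.

import java.math.BigInteger;
import java.security.MessageDigest;

public class GroupSign {
    // Fiat-Shamir challenge: c = H(m, u, y, v), reduced into the challenge space.
    static BigInteger challenge(byte[] msg, BigInteger u, BigInteger y,
                                BigInteger v, BigInteger N) throws Exception {
        MessageDigest h = MessageDigest.getInstance("SHA-256");
        h.update(msg);
        h.update(u.toByteArray());
        h.update(y.toByteArray());
        h.update(v.toByteArray());
        return new BigInteger(1, h.digest()).mod(N);
    }
}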
2.5 Security Enhancements
Here are some additional security enhancements that can
be applied to our basic scheme. These enhancements apply
to later schemes as well.
Untrusted escrow: It is possible in all of our schemes
to prevent the escrow agent from being able to masquerade
as one of the users. The idea is that the
escrow agent need not know the full factorization of
N. For example, in the basic scheme we may use a
modulus N that is a product of three primes.
The escrow agent need only be given p to recover the
user's identity. Thus, without knowing the complete
factorization of N it cannot masquerade as one of
the users.
Initialization: There is no need for the issuer to generate
N and ζ. Instead one can use the techniques of [5]
so that N and ζ are generated among k parties and
none of them knows the factorization of N.
3 Subset Queries
In this section we discuss how to add subset queries to
our basic scheme. We show that the basic scheme can be
extended to enable a user to anonymously prove membership
in a given subset of users. Key revocation is a special
case of this feature: to revoke user i's key the verifier (e.g.
the door lock) asks the prover (e.g. a user's smartcard) to
prove membership in the subset P containing all users except
user i. All users except for user i will be successful
in their interaction with the verifier. User i will be unable
to complete the proof.
We begin with a version that is simple to explain, allows
every subset query, and is very efficient. Unfortunately,
it is only 1-resilient to both a masking attack and
an outsider attack. Then we give variations that increase
the resistance to an outsider attack. The communication
complexity of all of the protocols in this section is the
same as the basic scheme from Section 2, although there
is an increase in the computation performed by the prover
and verifier. The ideas on which these constructions are
based have appeared and reappeared in the literature; our
work was most influenced by versions described in Chick-Tavares [13]
and Fiat-Naor [19].
3.1 Simple Subset Query Protocol
Here is a simple scheme that allows every subset query.
It is only 1-resilient against an outsider attack. The work
performed by the prover and verifier increases over the
basic scheme, although the communication complexity remains
the same. Let m be the number of users.
Initialization: The system issuer generates N, ℓ, ζ, t, T as
in the basic scheme. The issuer then computes
T-hat = T^(p_1 p_2 ··· p_m) mod N, where p_1, ..., p_m are distinct
small primes different from ℓ. The values N, ℓ, p_1, ..., p_m, and T-hat
are made public (but not T).
Issuing a key: To issue a key to user number i (recall
that 1 ≤ i ≤ m < ℓ) the issuer gives user i the secret key
σ_i = t^(p_i) · ζ^i mod N. Note that the factor
ζ^i is as in the basic scheme.
Proving identity: User i proves membership in an arbitrary
set of users P as follows. Write Π = p_1 p_2 ··· p_m and
w_P = ∏{p_j : j ∈ P}.
Step 1: The user picks a random r ∈ Z*_N. It computes
u = r^(ℓ^2) mod N and y = σ_i^(w_P/p_i) · r^ℓ mod N.
It sends (u, y) to the
verifier.
Step 2: The verifier checks that y^(ℓ·Π/w_P) ≡ T-hat · u^(Π/w_P) (mod N),
and rejects if not.
Step 3: The user proves in zero-knowledge that u is
an ℓ^2-th residue mod N as in the basic scheme.
Instead of proving possession of an ℓ-th root of T mod
N as in the basic scheme, here the user is proving possession
of an ℓ-th root of T^(w_P) mod N. In fact, the user
does this by proving possession of an (ℓ · p_1···p_m / w_P)-th root
of T-hat mod N. It is important that T-hat is used here while
T is kept secret, because a single user could mount a gcd-based
outsider attack from T and her own key.
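A Java sketch of the exponent arithmetic follows, using the reconstructed key form σ_i = t^(p_i) · ζ^i mod N, which is an assumption of this rewrite. It shows how a member of P derives the required root; all names are illustrative.

import java.math.BigInteger;

public class SubsetWitness {
    // For i in P, (sigma_i)^(w_P / p_i) is an l-th root of T^(w_P) mod N,
    // since zeta has order l and the zeta factor is annihilated.
    static BigInteger membershipWitness(BigInteger sigma, BigInteger pi,
                                        int[] P, BigInteger[] primes, BigInteger N) {
        BigInteger wP = BigInteger.ONE;
        for (int j : P) wP = wP.multiply(primes[j]);   // w_P = product of p_j for j in P
        // Requires p_i | w_P, i.e. the user really is a member of P.
        return sigma.modPow(wP.divide(pi), N);
    }
}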
Zero-knowledge follows as for the basic scheme. For
identity recovery, the escrow agent finds the unique i
satisfying the analogue of condition (1), using the exponent
(p − 1)/ℓ as in Section 2.2. For
soundness, we can adapt our proof of Lemma 2.1, incorporating
ideas from a proof of Akl and Taylor [1]:
Lemma 3.1: If a user i ∉ P can prove membership in P
with probability at least ε, then there exists a polynomial
time (in n and 1/ε) algorithm for taking arbitrary p_i-th
roots modulo N.
Proof: We describe a root-finding algorithm that takes
as input an arbitrary z ∈ Z*_N, and outputs z^(1/p_i) mod N.
The algorithm first computes public values derived from z,
implicitly treating z^(1/p_i) as the issuer secret t, so that
T-hat = z^(ℓ·p_1···p_m/p_i) mod N is computable from z by an
integer exponentiation. It
then runs the adversarial prover in the authentication
protocol on these public values, itself playing the verifier.
In time polynomial in n and 1/ε, this produces
a transcript (u-hat, y-hat) that the verifier accepts.
By the proof of knowledge in Step 3, there is
an extractor that can interact with the prover
and output an ℓ^2-th root of u-hat. Let r-hat be the extracted
root. By the verification in Step 2, we know
that (y-hat/r-hat^ℓ)^ℓ ≡ T^(w_P) ≡ z^(ℓ·w_P/p_i) (mod N),
which implies that y-hat/r-hat^ℓ ≡ z^(w_P/p_i) (mod N). Since i ∉ P,
we know that gcd(p_i, w_P) = 1,
and so there exist integers a, b such that
a·p_i + b·w_P = 1. Then the root-finding algorithm can
compute (y-hat/r-hat^ℓ)^b · z^a ≡ z^(b·w_P/p_i) · z^(a·p_i/p_i) ≡ z^(1/p_i) mod N.
The scheme is useful in environments where there is
no fear of users colluding. Unfortunately, if two users
outside of P get together they can combine their secret
keys σ_i and σ_j to create a new key that will let them
prove membership in P. Indeed, since gcd(p_i, p_j) = 1, from σ_i and σ_j it is easy to
compute some ℓ-th root of T, and thus some ℓ-th root of
T^(w_P) for any P. We refer to such an attack as an
outsider attack. More generally, we say that a scheme is
k-resilient against an outsider attack if no k users outside
of P can combine their secret keys to create a new prover
that will fool the verifier into thinking it is a user in P.
The above discussion shows that, as is, the scheme is only
1-resilient against an outsider attack. In the next subsection,
we show how to increase the resilience against an
outsider attack.
As in the previous section, the above scheme is only
1-resilient against a masking attack. We discuss how to
make the scheme k-resilient against a masking attack in
Section 4.
3.2 Increased Resilience against Outsider Attack
We describe a generalization that achieves k-resilience
against an outsider attack. It also allows every subset
query. The work performed by the prover and verifier
increases over the basic scheme, while the communication
complexity remains the same. Despite increased resilience
against an outsider attack, note that we still have
the same vulnerability to a masking attack as the basic
scheme.
Let F_k be the collection of all subsets of users of size at
least m − k. Let {p_S : S ∈ F_k} be distinct primes. Let
user i be issued the secret key σ_i = t^(w_i) · ζ^i mod N,
where w_i = ∏{p_S : S ∈ F_k and i ∉ S}. The value T-hat,
a fixed public power of T as in Section 3.1, is made
public. For any subset P of users, let
w_P = ∏{p_S : S ∈ F_k and P ⊄ S}.
To prove membership in an
arbitrary subset P, the user proves possession of an ℓ-th
root of T^(w_P) mod N. It is easy for any i ∈ P to compute
such a root, since w_P is a multiple of w_i whenever i ∈ P.
It is easy to prove possession of such a root to a verifier
who knows T-hat, by proving possession of an (ℓ · ∏ p_S / w_P)-th root of
T-hat, as in the proof of identity in Section 3.1.
Zero-knowledge and identity escrow follow as for the
previous scheme. For soundness versus any coalition F of
size at most k, note that gcd{w_j : j ∈ F} = ∏{p_S : S ∈ F_k
and S ∩ F = ∅}. But if F ∩ P = ∅, then the complement S of F
lies in F_k, satisfies S ∩ F = ∅, and contains P, so p_S divides
every w_j with j ∈ F yet does not divide w_P. That is a contradiction unless
the extractor can compute p_S-th roots of arbitrary values
modulo N [1].
3.3 Eliminating Outsider Attacks for Limited Queries
The subset query schemes that we have discussed so far
support every possible subset query. Sometimes there is a
smaller fixed collection of subsets P_1, ..., P_a from which
the queries will come. This is true for the office building
scenario from Section 1 when there is no revocation, i.e.,
each door has a fixed set of authorized entrants. In this
case, we can proceed as follows to get resistance against an
outsider attack of any size, with a work increase of O(a)
for the prover and verifier. Choose primes p_1, ..., p_a corresponding
to the possible subset queries. Each user i is
issued σ_i = t^(w_i) · ζ^i mod N, where w_i = ∏{p_j : i ∉ P_j}. The value
T-hat = T^(p_1 ··· p_a) mod N is made public. To prove membership
in P_j, a user demonstrates possession of an (ℓ · p_j)-th root
of T-hat mod N. The security analysis is similar to earlier
cases.
Other Variations: It is possible to generalize further,
based on the idea of key distribution patterns (KDPs)
due to Mitchell and Piper [23]. This yields a subset query
scheme to prove membership in any qualified subset while
resisting an outsider attack by any disqualified subset.
KDPs are also a unifying idea for key predistribution
schemes [4] and broadcast encryption [19]; see Stinson [26]
for a helpful survey.
As with the basic scheme, the Fiat-Shamir heuristic
can be applied to our subset query schemes. The result is
a group signature scheme in which the signature demonstrates
membership (or non-membership) in a subset of
the signer's choice.
4 Defending against Masking Attacks
The schemes presented in Section 2 and Section 3 are
only 1-resilient with respect to a masking attack. In other
words, two users can combine their secret keys to create a
new key that hides their identity from the escrow agent.
This weakness is due to the fact that if σ_1, ..., σ_k are ℓ-th
roots of T mod N, then so is σ_1^(a_1) ··· σ_k^(a_k) mod N whenever
a_1 + ··· + a_k ≡ 1 (mod ℓ). That gives an easy way for two or more
colluders to mask their identity by producing a new key
that cannot be traced back to them.
One way to protect against masking attacks is by using
constructions for collusion-secure fingerprinting due to
Boneh and Shaw [6]. We illustrate for the basic scheme
from Section 2. Assign to each user i a distinct length-K
codeword c_i = c_{i1} ··· c_{iK} in a collusion-secure fingerprinting
code over a binary alphabet. Create K instances
of our basic authentication scheme, where ζ_j is a nontrivial
ℓ-th root of unity modulo N_j, and where t_j is an ℓ-th
root of T_j mod N_j, 1 ≤ j ≤ K. The secrets that are given
to user i will be t_1 · ζ_1^(c_{i1}) mod N_1, ..., t_K · ζ_K^(c_{iK}) mod N_K. To
authenticate, the verifier and the prover execute the K
instances of the basic protocol in parallel.
The important observation is that if all of the colluders'
codewords agree in position j, then all of the colluders
have the same secret in the j-th instance of the basic
scheme. Then they cannot find a new ℓ-th root for T_j by
taking convex combinations. If the verifier accepts, the
colluders must have used their common secrets in every
instance where all of their codewords agree. The escrow
agent will recover these codeword values, and might recover
arbitrary values for all of the other codeword positions.
By the properties of fingerprint codes, this suffices
to recover a colluder's identity.
To protect against a coalition
of size k out of a population of size m with probability
1 − ε, the best known fingerprinting construction requires
codewords of length K = O(k^4 · log(m/ε) · log(1/ε)) [6].
We can eliminate the need for multiple protocol instances
if we proceed as follows. Let ℓ_1, ..., ℓ_K be the
first K odd primes, and let L = ℓ_1 ··· ℓ_K. We construct a
single instance of the basic authentication scheme using
a modulus N = pq such that L divides both p − 1 and q − 1.
As in the basic scheme let ζ be an L-th root of unity, and
let t be a random element of Z*_N. The secret given to
user i is σ_i = t · ζ^(e_i) mod N, where e_i ≡ c_{ij} (mod ℓ_j) for
all j. As in the multi-instance construction, the escrow
agent will be able to use the underlying fingerprint code
to recover a colluder's identity from a convex combination
of k or fewer secrets. Hence, we achieve k-resilience using
a single instance of the protocol. The downside is that
the size of the modulus N depends on the length K of the
fingerprinting code.
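A Java sketch of the Chinese-remainder encoding used by this single-instance variant follows, assuming the reconstruction above: the user's exponent e_i satisfies e_i ≡ c_{ij} (mod ℓ_j) for every codeword position j. The names are illustrative.

import java.math.BigInteger;

public class CrtEncode {
    // Standard CRT recombination: returns e with e == codeword[j] (mod ells[j]) for all j.
    static BigInteger crtExponent(int[] codeword, BigInteger[] ells) {
        BigInteger L = BigInteger.ONE;
        for (BigInteger lj : ells) L = L.multiply(lj);
        BigInteger e = BigInteger.ZERO;
        for (int j = 0; j < ells.length; j++) {
            BigInteger Lj = L.divide(ells[j]);         // product of the other moduli
            BigInteger inv = Lj.modInverse(ells[j]);   // CRT coefficient
            e = e.add(BigInteger.valueOf(codeword[j]).multiply(Lj).multiply(inv)).mod(L);
        }
        return e; // the user's secret is then t * zeta^e mod N
    }
}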
5 Summary and open problems
In summary, the higher-residuosity assumption leads to
new constructions for anonymous authentication with
identity escrow and group signature schemes. Variations
of the basic scheme include the ability for a user to
prove membership in an arbitrary subset of users, which
gives a simple approach to the key revocation problem.
These constructions are efficient, and provably secure under
standard security assumptions, although somewhat
more awkward to protect against collusion than previous
constructions.
It is an important open question to improve the defense
against masking attacks while maintaining the efficiency
of the basic authentication protocol. It is tempting
to try to achieve this by encoding more information
in each user's secret, but this could be quite dangerous.
For example, let N = p_1 ··· p_K. Let each
ζ_j be a non-trivial ℓ-th root of unity modulo p_j. Let
each λ_j be a Chinese Remainder Theorem coefficient:
λ_j ≡ 1 (mod p_j) and λ_j ≡ 0 (mod p_j') for j' ≠ j. This could directly
encode an element of Z_ℓ^K in each user's secret.
However, two colluding users could factor the modulus:
whenever their encoded strings agree in some position j, the
difference of their secrets is divisible by p_j, and a gcd with
N reveals a factor.
It seems as though a large ℓ and a secret assignment of
encoded strings to users is necessary to prevent this factoring
attack. Then it is possible that this approach could
be developed into a defense against masking attacks, by
carefully choosing the encoding information given to each
user. This approach might also be useful in a setting with
multiple escrow agents. By the choice of which factors go
to each agent, and by the choice of each user's encoded
string, the ability of different escrow agents to recover
information about a user can be finely controlled.
--R
--TR
An optimal class of symmetric key generation systems
Key storage in secure networks
A practical zero-knowledge protocol fitted to security microprocessor minimizing both transmission and memory
Zero-knowledge proofs of identity
The knowledge complexity of interactive proof systems
Flexible access control with master keys
Broadcast encryption
On Some Methods for Unconditionally Secure Key Distribution and Broadcast Encryption
Communication-efficient anonymous group identification
Trustee-based tracing extensions to anonymous cash and the making of anonymous change
Cryptographic solution to a problem of access control in a hierarchy
Proofs of Partial Knowledge and Simplified Design of Witness Hiding Protocols
Collusion-Secure Fingerprinting for Digital Data (Extended Abstract)
Efficient Generation of Shared RSA Keys (Extended Abstract)
Efficient Group Signature Schemes for Large Groups (Extended Abstract)
Identity Escrow
How to Convert any Digital Signature Scheme into a Group Signature Scheme
Some Open Issues and New Directions in Group Signatures
Verifiable secret-ballot elections
--CTR
Wen-Guey Tzeng, A secure system for data access based on anonymous authentication and time-dependent hierarchical keys, Proceedings of the 2006 ACM Symposium on Information, computer and communications security, March 21-24, 2006, Taipei, Taiwan
Joachim Biskup , Ulrich Flegel, Threshold-based identity recovery for privacy enhanced applications, Proceedings of the 7th ACM conference on Computer and communications security, p.71-79, November 01-04, 2000, Athens, Greece
Y. G. Desmedt, Fighting entity authentication frauds by combining different technologies, BT Technology Journal, v.23 n.4, p.65-70, October 2005
Marina Blanton , Mikhail J. Atallah, Provable bounds for portable and flexible privacy-preserving access, Proceedings of the tenth ACM symposium on Access control models and technologies, June 01-03, 2005, Stockholm, Sweden
Marina Blanton , Mikhail Atallah, Succinct representation of flexible and privacy-preserving access rights, The VLDB Journal The International Journal on Very Large Data Bases, v.15 n.4, p.334-354, November 2006
Keith Frikken , Mikhail Atallah , Marina Bykova, Remote revocation of smart cards in a private DRM system, Proceedings of the 2005 Australasian workshop on Grid computing and e-research, p.169-177, January 01, 2005, Newcastle, New South Wales, Australia | group signature;identity escrow;anonymous authentication |
319961 | Automatically extracting structure and data from business reports. | A considerable amount of clean semistructured data is internally available to companies in the form of business reports. However, business reports are untapped for data mining, data warehousing, and querying because they are not in relational form. Business reports have a regular structure that can be reconstructed. We present algorithms that automatically infer the regular structure underlying business reports and automatically generate wrappers to extract relational data. | Introduction
A considerable amount of clean semistructured data is available to companies through internal
business reports created during periodic data processing. Business reports provide data for
monitoring account balances, inventory levels, transaction status, current production status,
etc. Although the subject matter may differ widely, many business reports share a similar
structure.
Businesses that employ state-of-the-art techniques capture reports in a Computer Output
to Laser Disk (COLD) 1 storage system that is accessible through an enterprise-wide
network. COLD systems support queries based on date, title, free-text scanning (as in
regular-expression matching), and precomputed indexes whose definitions have been constructed
manually at a significant cost in labor and systems-administration/maintenance
effort.
Because business reports are an integral part of the business process, when errors are
discovered, corrections must be made, and a new version of the report must be issued.
Compared to other sources of information, business reports are clean. 2 If this clean data
were available in relational form, it could feed a data warehouse.
For various reasons, in some cases important historical and operational data is only available
in a COLD system. In other cases, even when such data is available in legacy database
systems or file-processing systems, the variety of different data sources and "middleware"
access layers can make it difficult to assemble and integrate information from an
organization's databases. Since an organization's business reports provide a clean, comprehensive,
integrated view of the underlying data of interest, wrappers to extract this data could be less
expensive (and are sometimes the only option).
Giving a user finer granularity access to a business report allows more precise queries. If a
business report could be automatically decomposed into relational records, then it would not
be necessary for a company to have a mediator constructed for each and every one of its data
sources. Automatic decomposition would make possible the movement of information from
a COLD system into a relational database where data mining and other information tools
are available. Alternatively, automatic decomposition would support a direct SQL interface
to a COLD system, permitting data mining and queries directly on the COLD archive.
Without automatic decomposition, an end user must develop ad hoc techniques to extract
1 COLD systems may use other storage technology besides optical disk, for example tape or RAID;
we use the term "COLD" to denote any kind of report archive system. Recently this has also been termed
"enterprise reporting," but for brevity we use "COLD."
2 As used in data warehousing, "clean" data is free of errors and redundancy, and is suitable for storing in
the warehouse.
information from a business report. For example, she may manually place data from a
business report into her spreadsheet. If she receives the business report electronically, she
may programmatically transfer the data to a database application such as Access [1] using
specialized tools such as awk [3], perl [25], Cambio [8], InfoXtract [6, 15], or Monarch [19]. The
difficulties she faces in an ad hoc approach are: the manual specification, the effort to set up a
process, the effort to maintain the process, and the acquisition of sufficient programming skills
to modify the process. Automatic decomposition eliminates the report-definition specification
inherent in manual or programmatic report-based information extraction.
Automated extraction is possible in narrow application domains [12, 13, 14]. However,
the techniques for narrow application domains are infeasible for large report bases because
ontologies would have to be manually constructed for each different business report. Semi-automatic
techniques for wrappers have also been explored [2, 5, 11, 16, 22], but these
techniques do not take advantage of the special structural properties of business reports. The
project most closely related to ours is NoDoSE [2], which attacks the more general problem
of extracting structure from any kind of semistructured document. We apply techniques
specific to the business-report domain.
This paper presents a system that utilizes a lattice of field descriptions to automatically
identify fields. From field-level descriptions, this system then infers line types that describe
the kinds of lines found in a particular business report, and it infers and factors out page headers
and footers, yielding a line-type sequence whose regular structure can be inferred using
standard algorithms [17, 18]. Our system, implemented in Java, stores extracted information
in relational tables according to line type and line-group structure.
The remainder of this paper has four sections. Section 2 gives a high-level system overview.
Section 3 gives a detailed description of key algorithms and data structures. Section 4 gives
the results of a report survey. Section 5 gives our conclusions.
Figure 1: Business report structure and data extraction process. (The diagram shows a business report and a field-description lattice feeding four inference steps: extract fields, infer line types, infer page headers and footers, and infer recursive groups. These produce a report structure definition; report decomposition then yields a populated database.)
2 Overview
Figure 1 outlines the two phases of the business report decomposition process: (1) the four
steps of report-structure inference, and (2) report decomposition. The input is a business
report R, about which we make five assumptions:
1. R is composed of fields that are aggregated into lines, which are in turn aggregated
into larger structures.
2. R is in printable ASCII and represents meaningful human-readable information. 3
3. R uses the ASCII form-feed character (FF) as the page delimiter, and the ASCII linefeed
character (LF) as the line delimiter. 4
4. Each page has the same number of lines R_L, and the width of each line is W characters,
padded with blanks if necessary.
5. Blank lines and blanks between fields are for human readability only.
3 We use ASCII, but EBCDIC or another character set could be used in similar fashion. In the case of
EBCDIC, it is easy to translate from EBCDIC to ASCII format. In the case of non-English character sets
or non-U.S. business reports, different regular expressions would be required.
4 There is no difficulty in using CR-LF as a line delimiter as on PC systems.
006 9994 10355 JASON MASON CONSTRUCTION INC 100,000.00 .06005 03/07/99
MS
MS
* TOTAL LARGE CD * 1,605,529.79
Figure
2: A typical type I report page.
Figure
3: A portion of a typical type II report page.
We have observed two major categories of business report structure, distinguished by the
relationship between line and record structures. Both kinds of business reports have possible
page headers, possible page footers, and a report body that consists of repeating detail lines.
A type I detail line has columns (fields), belongs to a distinct line-type category, and contains
information about a single record. In contrast, a type II detail line contains fields pertaining
to several records. A type I business report contains only type I detail lines. A type II
business report contains type II detail lines (and may also contain type I detail lines).
Figure
2 gives an example of a simple type I report. 5 Each detail line in Figure 2 describes
a particular certificate of deposit. Figure 3 shows a portion of a type II checking-account
statement. Each detail line in Figure 3 lists two or three cleared-check items, each of which
has a check number, amount, and date cleared.
When we correctly identify the basic line types that exist within a type I report, then
we can extract the report structure. In contrast, extracting the structure of a type II report
requires information beyond line classification. In this paper we discuss type I reports. Type
II reports are the subject of a separate paper.
Our process starts with a type I business report R and a field-description lattice F
(described in Section 3.1), infers the structure of R, stores its definition in a relational
database, decomposes R, and stores its decomposition in the database. The contents of R
can now be queried. This paper focuses on the report-structure inference phase, which
consists of the following four steps (corresponding to Algorithms 1 through 4, respectively).
5 None of the data in this paper is actual customer data, but the patterns are based on actual business
report structures not designed by us.
1. For each line t of R, decompose t into its sequence of fields.
2. Infer B, the set of basic line types of R. For each line t of R, assign t its basic line type
from B.
3. Infer page headers and footers for R. Factor out the page structure from R's line type
description.
4. Infer R's recursive line groups.
This system is implemented in Java 2, using the OROMatcher 1.1 regular-expression
library [21] for matching and extracting substrings from lines. We used mySQL [20] for the
database management system and twz1jdbcForMysql [23] for the JDBC interface to mySQL.
Source code is available on our Web site [10].
2.1 Notation
Before proceeding, we introduce notation and terminology. In general, let
R = ⟨R[1], ..., R[R_P]⟩ 6 be a business report with R_P pages, each with R_L lines, R[i] denoting
the i'th page. Each page is a sequence of lines, so R[i] = ⟨R[i][1], ..., R[i][R_L]⟩. Each line is
a sequence of W characters; after executing Algorithm 1 we can also represent a line t as a
sequence of fields: t = ⟨f_1, ..., f_n⟩.
6 We always denote an ordered sequence with angle brackets ⟨⟩. Also, all indexes are 1-based.
Given a line t, we denote a substring of t from position j to k, 1 ≤ j ≤ k ≤ W, by
t[j, k]. A field f in t is a 4-tuple (j, k, i, s), where s = t[j, k] is the substring of t to which f
corresponds, j is the starting position of f, k is the ending position, and i is a pattern index
to be defined in Section 3.1.
Let f_1 = (j_1, k_1, i_1, s_1) and f_2 = (j_2, k_2, i_2, s_2) be fields in lines t_1 and t_2 respectively. We
say that f_1 and f_2 overlap if there is a q such that j_1 ≤ q ≤ k_1 and j_2 ≤ q ≤ k_2. If f_1 and f_2
overlap, the overlap has one of five alignment values:
AGREE: j_1 = j_2 and k_1 = k_2 (share left and right endpoints),
ALIGN: they do not AGREE, but both fields are numeric and aligned at
the decimal-point position,
LR: they do not AGREE nor ALIGN, but j_1 = j_2 or k_1 = k_2 (share a left or right endpoint),
CENTER: they do not AGREE, ALIGN, nor are LR, but j_1 + k_1 = j_2 + k_2 (center aligned),
OTHER: none of the above apply.
A field type f for fields f_1 = (j_1, k_1, i_1, s_1), ..., f_n = (j_n, k_n, i_n, s_n) is a 4-tuple (j, k, i, s),
where j = min_q j_q, k = max_q k_q, and i is the index of the least upper bound of
the elements in the field-description lattice F that are indexed by i_1, ..., i_n. F is defined in
Section 3.1. For each q, 1 ≤ q ≤ n, let s'_q be s_q padded with j_q − j blanks on the left and
k − k_q blanks on the right; then s = ⟨s'_1, ..., s'_n⟩.
A line type t for lines t_1, ..., t_m is a sequence of field types ⟨f_1, ..., f_n⟩ =
⟨(j_1, k_1, i_1, s_1), ..., (j_n, k_n, i_n, s_n)⟩ with two properties: (1) none of the field types may
overlap, and (2) s_1, ..., s_n are each ordered sequences containing m strings that correspond
respectively to all the fields in lines t_1, ..., t_m. By these two properties we guarantee that we
can reconstruct the original lines from a line type.
A group type d for R is a triple (a, b, c) where a is either a line type or an ordered sequence
of group types, and b and c are respectively the minimum and maximum number
of consecutive occurrences of d observed in R.
3 Structure Extraction Algorithms
As outlined in Section 2, four algorithms extract a business report's structure. Sections 3.1
through 3.4 describe Algorithms 1 through 4 respectively.
3.1 Field Detection
Consider the type I report of Figure 2. The first task is to decompose each line into fields.
This is done by applying Algorithm 1 to each line of R.
Let F be the field-description lattice of Figure 4. Indentation in Figure 4 represents
precedence, and the universal lower bound is the empty expression (not shown explicitly).
Each element of F is a class that describes a set of ASCII strings typically found in business
reports. Julian is the only class with two immediate successors (Date and Number). The
parenthesized numbers in Figure 4 are used in Section 3.2.1.
Let E = ⟨E[1], ..., E[e]⟩ be a sequence of regular expressions corresponding to the field-description
lattice of Figure 4 (except for the universal upper bound Any and the universal
lower bound). E[e], the last element of the sequence E, has the property that it recognizes
any sequence of contiguous non-blank characters (E[e] corresponds to the class String in this
case).
Table 1 in Appendix A shows the regular expressions of E. Notice that no expression
E[i] in E matches a string of only blank characters. Algorithm 1 extracts the fields in line t
according to E.
Algorithm 1. Extract fields from line.
Input: Regular-expression sequence E and line t.
Output: The sequence of disjoint fields that comprise t relative to E.
for i = 1 to e do
while E[i] matches t do
Set j to the start of the first match.
Let k be the largest k ≤ W such that E[i] recognizes t[j, k].
Record the field as the 4-tuple (j, k, i, t[j, k]).
Replace the characters of t[j, k] with a special non-ASCII symbol.
end while
end for
Sort the fields by j, the beginning field position.
Any (1)
String
Time (.3)
Hour Minute Second (0)
Hour Minute (0)
Date
Julian (0)
Day Month Year (0)
Month Day Year (0)
Year Month Day (0)
Month Year (0)
Month Day (0)
Day Month (0)
Phone Number (.3)
Phone with Area Code (0)
Phone without Area Code (0)
ID Code (.3)
ID Begins with Letters (0)
ID Ends with Letters (0)
ID with Digits, Dashes (0)
Number
Julian (0)
Percent (0)
Negative (0)
General Number (0)
Fraction (0)
Currency (0)
Currency with Dollar Sign (0)
Page Number (0)
Field Label (0)
Dividing Line (0)
Figure
4: Field-classification lattice.
Regular-expression matching can be linear in the length of the text to be matched (if we
accept exponential space in pathological cases) [4], so the inner loop runs in O(W ) time.
Since there are e expressions, the outer loop executes e times. Thus, Algorithm 1 executes in
O(eW ) time. Since E[e], the last regular expression, always recognizes contiguous non-blank
characters, Algorithm 1 terminates and extracts all fields from t. The step that replaces the
characters of t[j, k] with a special symbol forces the fields to be disjoint since t[j, k] can no
longer be matched by any expression.
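A compact Java rendering of Algorithm 1 follows, using java.util.regex rather than the OROMatcher library used by the authors. The pattern list stands in for Table 1, greedy matching approximates the "largest k" step, and the patterns are assumed not to match the masking symbol or the empty string.

import java.util.*;
import java.util.regex.*;

public class FieldExtractor {
    // Returns fields as {start, end, patternIndex} triples, 0-based and inclusive.
    static List<int[]> extractFields(String line, List<Pattern> E) {
        char[] work = line.toCharArray();
        List<int[]> fields = new ArrayList<>();
        for (int i = 0; i < E.size(); i++) {
            Matcher m = E.get(i).matcher(new String(work));
            while (m.find()) {
                fields.add(new int[] { m.start(), m.end() - 1, i });
                // Mask the matched span so no later pattern can re-match it.
                for (int q = m.start(); q < m.end(); q++) work[q] = '\u0001';
                m = E.get(i).matcher(new String(work));
            }
        }
        fields.sort(Comparator.comparingInt(f -> f[0]));
        return fields;
    }
}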
3.2 Basic Line-Type Inference
Algorithm 2 is the heart of our technique. It infers basic line types that describe categories
of lines in a business report. Before presenting the algorithm we define three field- and
line-distance measures.
We first introduce two different distances between fields: a first-order distance, and a
second-order distance. First-order distance measures field distance using a character-level
string comparison. Second-order distance yields a similarity metric based on the field-
classification lattice. A traditional method for characterizing string similarity is edit distance
[24], which describes the cost of transforming one string into the other. But the computation
of edit distance is O(mn) where m and n are the lengths of the strings being compared. Our
simple but adequate first-order distance can be computed in O(max(m,n)) time.
We measure field distances using the minimum of first- and second-order distances together
with alignment information (e.g. are the two fields left justified or decimal-aligned).
Based on this field distance metric we define a line distance, used in Algorithm 2 to decide
when two line types belong to the same cluster.
3.2.1 Field Distance
Let s_1 and s_2 be non-empty ASCII strings. Without loss of generality, we assume that
|s_1| ≤ |s_2|. The first-order distance between s_1 and s_2 is:
δ_string(s_1, s_2) = ( Σ_{q=1}^{|s_1|} δ_K(s_1[q], s_2[q]) + |s_2| − |s_1| ) / |s_2|    (1)
where δ_K(a, b) is the Kronecker delta function, namely 1 if a ≠ b and 0 if a = b.
Let f_1 = (j_1, k_1, i_1, s_1) and f_2 = (j_2, k_2, i_2, s_2) be field types. Recall that s_1 and s_2 are
ordered sequences of strings. The first-order field distance between f_1 and f_2 is the average
of δ_string over the pairs of corresponding strings:
δ_1(f_1, f_2) = avg_{x ∈ s_1, y ∈ s_2} δ_string(x, y)    (2)
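Assuming the reconstruction of Equation 1 (mismatched positions plus the length difference, normalized by the longer length), the first-order string distance is a single pass, as the following Java sketch shows. It runs in O(max(|s_1|, |s_2|)) time, matching the claim above.

public class StringDistance {
    static double deltaString(String s1, String s2) {
        if (s1.length() > s2.length()) { String tmp = s1; s1 = s2; s2 = tmp; }
        int mismatches = 0;
        for (int q = 0; q < s1.length(); q++)
            if (s1.charAt(q) != s2.charAt(q)) mismatches++;    // Kronecker delta term
        return (mismatches + (s2.length() - s1.length())) / (double) s2.length();
    }
}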
Our first-order distance uses a (trivial) lattice on characters and ignores the higher-order
structure associated with fields. Our second-order distance uses the regular-expression sequence
E and the field-description lattice F described in Section 3.1. E and F have three
important properties: 7
1. Lattice. Each pair of elements in F has a unique least upper bound.
2. Covering. Every ASCII string is a member of at least one class in F.
3. Consistency. Let F[i] and F[j] be classes in F, and let E[i] and E[j] be the regular
expressions in E that correspond to F[i] and F[j], respectively. If F[i] precedes F[j]
then the language recognized by E[i] is a subset of the language recognized by E[j].
We define a function θ that assigns each element of F a value; more specific classes have
lower values than more general classes. Values for θ, in the interval [0, 1], are shown in
parentheses in Figure 4, and were determined empirically.
Given these properties, we define the second-order field distance. Let f_1 = (j_1, k_1, i_1, s_1)
and f_2 = (j_2, k_2, i_2, s_2) be two field types. Let F[i_1] and F[i_2] be the classes in F corresponding
to the regular expressions E[i_1] and E[i_2], and let F_lub be the least upper
bound of F[i_1] and F[i_2]. Without loss of generality, we assume that θ(F[i_1]) ≤ θ(F[i_2]).
The second-order field distance between f_1 and f_2 is:
δ_2(f_1, f_2) = min(1, (θ(F_lub) − θ(F[i_1])) + P · θ(F_lub))    (3)
The difference component of Equation 3 returns a low value for fields whose classes are
relatively close. The P term is an empirical constant to penalize fields whose least upper
bound is relatively general; we assigned P a value of 1.1 in our experiments. Finally, to
ensure that a distance stays in the interval [0, 1], Equation 3 uses the min(1, .) expression.
7 These properties require careful construction of the field-description lattice and regular-expression sequence,
and we do not formally prove that they hold. For our purposes it is sufficient simply to assume
that these properties hold. For the lattice F in Figure 4 the class Any is defined to be the set of all ASCII
strings. The consistency property can be guaranteed if we replace each superior regular expression s by the
disjunction of s with each regular expression i that is inferior to s.
Given the first- and second-order field distances of Equations 1 and 3, we define
δ_field(f_1, t_2), the field distance between field type f_1 = (j_1, k_1, i_1, s_1) and
the sequence of field types of line type t_2, as follows. If f_1 either overlaps no field types
of t_2 or overlaps more than one field type of t_2, we define δ_field(f_1, t_2) to be 1. Otherwise,
let f_2 be the single field type in t_2 that overlaps f_1.
Equation 4 gives the definition of δ_field(f_1, t_2):
δ_field(f_1, t_2) = min(1, min(δ_1(f_1, f_2), δ_2(f_1, f_2)) + A)    (4)
where A is the alignment value of the overlap of f_1 and f_2, defined in Section 2.1. We
determined alignment values empirically, choosing 0, 0, .1, .2, and .4 for AGREE, ALIGN,
LR, CENTER, and OTHER, respectively.
3.2.2 Line Distance
Let n_1 be the number of field types in line type t_1, and let n_2 be the number of field types
in line type t_2. Based on δ_field, we define δ_line(t_1, t_2), the line distance between t_1 and t_2. If
both n_1 and n_2 are 0, then the value of δ_line is defined to be 0. If either n_1 or n_2 is 0 but
not both, then the value of δ_line is defined to be 1. Otherwise, δ_line is defined according to
Equation 5:
δ_line(t_1, t_2) = ( Σ_{f ∈ t_1} δ_field(f, t_2) + Σ_{f ∈ t_2} δ_field(f, t_1) ) / (n_1 + n_2)    (5)
3.2.3 Line-Type Inference
Algorithm 2. Infer B, the set of basic line types for report R.
Input: R, after Algorithm 1.
Output: B, the set of basic line types for R, and
L, a mapping from the lines of R to line types in B.
Make a copy Q of R:
for each R[i][j], 1 ≤ i ≤ R_P, 1 ≤ j ≤ R_L, do
Create a new line type t_1 for R[i][j].
if t_1 duplicates a line type t_2 in Q then
Generalize t_2 to cover t_1.
else
Add t_1 to Q.
end for
Reduce line types in Q to B:
for each line type Q[i], 1 ≤ i ≤ |Q|, do
Let m be the smallest δ_line(Q[i], t) from Q[i] to any line type t in B.
if m > T then
Add Q[i] to B.
else
Generalize t to cover Q[i].
end for
Construct array L so that L[i][j] is the line type in B that covers line R[i][j].
Algorithm 2 terminates in O(G · R_P · R_L) time, where G is the cost of the "generalize"
operation, described below. T = .3 is a threshold chosen empirically. Line types t_1 and t_2
are duplicates if and only if their associated field-type sequences are identical up to the field
text, i.e. (a) they have the same number of field types, (b) the corresponding field types have
the same left and right positions, and (c) the corresponding field types are both recognized
by the same regular expression.
We now define what it means to generalize a line type t_2 to cover line type t_1. For each
field type f_2 = (j_2, k_2, i_2, s_2) in t_2, let m be the number of field types in t_1 that overlap f_2.
We denote these m field types as f_{1,1}, ..., f_{1,m}. There are three possibilities for m:
1. m = 0: do nothing with f_2.
2. m = 1: set i_2 to the least upper bound of i_2 and the pattern index of f_{1,1}, and add
the string of f_{1,1} to s_2, padded with blanks as needed.
3. m > 1: set i_2 to e; pad the
strings in s_2 with blanks as needed, and add to s_2 the strings from f_{1,1}, ..., f_{1,m}, joined
and filled/padded with blanks as needed.
If any field type f_1 in t_1 was not overlapped by some field type in t_2, add f_1 to t_2. After
modifying t_2, if any field types in t_2 now overlap each other, combine them as described above
in step 3.
3.3 Page Header/Footer Inference
A type I business report may have a page header and/or a page footer. A page header for
report R is a sequence of line types that appears at the beginning of each page in R. If
line-type sequence A = ⟨a_1, ..., a_h⟩ is a page header for R, then (∀i, j ≤ h) L[i][j] = a_j.
Similarly, a page footer Z is a sequence of line types that appears
at the end of each page in R (we assume that a page footer always starts at the same offset
from the top of page). To distinguish between report detail and page headers or footers, we
require that each non-blank line type t ∈ A ∪ Z have the following properties for each page
R[i]:
t does not repeat in R[i] two or more times in immediate succession, and
t appears only once or twice on any single page R[i].
Algorithm 3. Infer page headers and footers from line types.
Input: Array L from Algorithm 2.
Output: Page-header sequence A, page-footer sequence Z, and
line-type sequence L-bar with A and Z factored out.
Mark non-blank line types that cannot be page header/footer candidates:
If L[i][j] = L[i][j+1] and L[i][j] is non-blank, then mark both L[i][j] and L[i][j+1].
If (∃ j, k, l) pairwise distinct with L[i][j] = L[i][k] = L[i][l], then mark all three.
Infer page header:
Find the largest h, 0 ≤ h ≤ R_L,
such that (∀i, j ≤ h) L[i][j] = L[1][j] and no such L[1][j] is marked.
Set A to the first h line types of page R[1] (A may be empty).
Infer page footer:
Find the smallest f, h ≤ f ≤ R_L,
such that (∀i, j ≥ f) L[i][j] = L[1][j] and no such L[1][j] is marked.
If such an f exists, set Z to line types f through R_L of page R[1];
otherwise let Z be the empty sequence and set f to R_L + 1.
Reduce L to L-bar by removing page structure and blank lines:
Let L-bar be the sequence of line types
⟨L[1][h+1], ..., L[1][f−1], L[2][h+1], ..., L[R_P][f−1]⟩.
Remove all blank line types that appear in L-bar.
Note that whereas L is a two-dimensional array, L-bar has only one dimension. Algorithm 3
terminates in O(R_L · R_P) time.
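The page-header step reduces to a longest-common-prefix computation over pages, sketched below in Java. The marking conditions are omitted, and L is assumed to hold line-type identifiers indexed as L[page][line].

public class HeaderInference {
    // Largest h such that every page agrees with page 1 on its first h line types.
    static int headerLength(int[][] L) {
        int h = L[0].length;
        for (int p = 1; p < L.length; p++) {
            int agree = 0;
            while (agree < h && L[p][agree] == L[0][agree]) agree++;
            h = agree; // the header cannot exceed any page's agreement
        }
        return h;
    }
}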
3.4 Recursive Group Inference
After the page-specific structure of a business report R has been factored out, we can focus
on inferring the structure of R's detail section. Miclet's technique [17, 18] is a reasonable and
general way to infer regular structure from a set of example strings. Because of the nature of
business reports and the simplifying assumptions this allows, it is possible to infer structure
from a single example. Our Algorithm 4 is a variant of Miclet's technique, using different
decision heuristics governing when we should reduce a recursive group, and restricted to a
single example string (the array L-bar of line types).
Business reports created with a report-writer 8 are built up from groups of the form u v^k w,
where u is a (possibly empty) group header section, v is a detail section that repeats one
or more times, and w is a (possibly empty) group footer section. Each of the u, v, and w
sections may themselves be composed of other u v^k w structures. We make three assumptions
about the u v^k w structure of line-group types for a business report R reduced by Algorithms
1 to 3 to L-bar:
1. k ≥ 2; that is, v appears consecutively somewhere in L-bar.
2. If group v appears k ≥ 2 times consecutively in L-bar, it forms the v^k component of a u v^k w
structure (and there is no predetermined upper bound for k). Also, u v w (where k = 1)
may appear in L-bar as long as u v^k appears elsewhere in L-bar.
3. Groups u, v, and w may not appear in L-bar individually (outside of a u v^k w sequence).
There are no optional lines in a group. If the real report structure is u v^k w, u and w
always appear together with v^k. 9
We give three examples, representing line types with lowercase letters.
8 Most business reports created by custom programming also follow these conventions.
9 This assumption does not hold for all type I business reports, but we leave such reports for future
investigation.
Example 1. The sequence abccc is a line group u v^k w with group header u = ab, detail
section c^k, and an empty group footer w. The reason for this particular u v^k w solution is that c
is the only repeating line type in our example. Example 2. The sequence abccabcccc is
formed by repeating the group abc+: it parses as (abc+)(abc+). Thus, an expression to describe the structure of
such a report is (abc+)+. Example 3. The sequence abccbccdabcd is formed by repeating
and nesting. We first create the inner group e with header b and detail section c^k.
By substitution, the sequence is now aeedaed. Let f be the group with header a, detail section
e^k, and footer d. By substitution, the sequence is now ff, which is a group with an empty
header and footer, and a detail section f^k. The expression describing this report structure is
(a(bc+)+d)+.
Essentially, Algorithm 4 reduces the regular expression defined by the line-type sequence
L-bar to a more compact regular expression G that describes the recursive structure of L-bar.
Algorithm 4. Infer recursive line-type groups.
Input: Basic line-type sequence B from Algorithm 2
and line-type sequence L-bar from Algorithm 3.
Output: Recursive line-group structure.
Let g be a set of group types that contains one entry for each line type in B.
Reduce L-bar to G by substituting each line type with its corresponding group type from g.
Set changed = true.
while |G| > 1 and changed = true do
Set changed = false.
for i = 1 to |G| do
Find the smallest j > i such that conditions 1, 2, and 3 hold for v = ⟨G[i], ..., G[j]⟩.
Condition 1. Every group type in v is unique.
Condition 2. The sequence vv occurs in G.
Condition 3. After substituting a new group type for each occurrence of v
in G, there is no group type in v that still occurs in G.
if such a j exists then
Create a new group type x whose definition is the sequence v.
Add x to g.
Substitute x in G everywhere v appears.
Replace all consecutive occurrences of x···x in G by a single x, and mark x
with the minimum and maximum consecutive-occurrence counts.
Set changed = true.
end for
end while
Algorithm 4 is a least-fixed-point algorithm, where the fixed point upon which we converge
is a regular expression to describe L-bar. Because |G| is initially |L-bar|, the while loop can execute
at most |L-bar| times (since we only loop as long as a change has been made, and a change must
always reduce the size of G by at least 1). The for loop also executes at most |L-bar| times.
Verifying Conditions 1, 2, and 3 and substituting x for v can both be done in O(|L-bar|)
time. Thus,
Algorithm 4 executes in O(|B| + |L-bar|^3) time. In practice, Algorithm 4 usually took between
one and three passes to converge, running orders of magnitude faster than the worst case
just described. (This is because v^k sequences tend to be long.)
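The run-collapsing step that Algorithm 4 applies after each substitution can be sketched in Java as follows. The search for the repeating subsequence v (Conditions 1 to 3) is omitted, and the names are illustrative.

import java.util.*;

public class GroupCollapse {
    // Replace every maximal run of consecutive identical group types by a single
    // occurrence, recording the minimum and maximum run lengths per group type.
    static List<String> collapseRuns(List<String> G, Map<String, int[]> counts) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < G.size(); ) {
            int j = i;
            while (j < G.size() && G.get(j).equals(G.get(i))) j++;
            int k = j - i;                                  // run length
            int[] range = counts.computeIfAbsent(G.get(i), s -> new int[] { k, k });
            range[0] = Math.min(range[0], k);               // minimum consecutive count
            range[1] = Math.max(range[1], k);               // maximum consecutive count
            out.add(G.get(i));
            i = j;
        }
        return out;
    }
}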
4 Results
There are five areas where we used empirically determined values to control the business-report
structure and data extraction process: (1) the regular expressions used to recognize
fields, (2) the values (θ) associated with each class in the field-description lattice F, (3) the
value of alignment constants (AGREE, ALIGN, LR, CENTER, OTHER), (4) the threshold T
for line-type generalization, and (5) the penalty P for least-upper-bound generality in Equation
3. We used hundreds of reports from four different organizations as the basis for our
choices.
To test our process, we used 76 business reports from a separate organization that had
not been used in the training phase. Of these 76 reports, 7 were not type I. An additional 7
reports were too short to be meaningful (i.e. they comprised a single page containing only
page headers or a single detail line). Of the 62 remaining reports, our process correctly
extracted the structure and data for 40 reports, but failed with 22. The 22 failures point out
directions for future enhancement. We discuss four.
1. E, our sequence of regular expressions for matching fields, was sometimes insu#cient.
We give four examples. (i) In one case, two fields that were usually separated by a single
blank space had a number sign (#) instead of a space on one line. This caused the two fields
to be recognized as a single string, which in turn caused the creation of an extra line type that
interfered with the recursive line-type group inference (Algorithm 4). (ii) In another case, we
discovered decimal-aligned numeric fields that were left-filled with underscores. Furthermore,
these underscores abutted the string field on the left (e.g. "One 5.52" and "Two 934.22").
Our system recognized the string portion together with the padding underscores as a single
field, and the numbers as a second field. Because of the overlapping of these fields, our system
generated too many line types for this report. (iii) In another case, we discovered currency
amounts specified with 4 digits after the decimal point, rather than the more common 2
digits. Due to the order of our expressions, our system broke such fields in two, which caused
too many line types to be generated. (iv) Finally, we found a string field that had two
internal spaces (e.g. "XXXX XX"), but our String pattern only expects one internal space.
This caused the field to be split and an extra line type to be generated. All of these problems
can be corrected by tuning E. For our test set, the amount of tuning required would have
been small. Adjustments to E are also required for non-U.S. business reports.
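For illustration, here is a tiny Python fragment of what a field-expression sequence E and its left-to-right matcher might look like; the pattern names and the exact expressions are our own guesses, not the ones used by the system. Note that the String pattern below allows one internal space, and that widening it (or reordering the list) would address some of the failures described above.

import re

# Hypothetical fragment of E, ordered from most to least specific.
FIELD_PATTERNS = [
    ("Currency", re.compile(r"\$?\d{1,3}(,\d{3})*\.\d{2}\b")),
    ("Date",     re.compile(r"\d{2}/\d{2}/\d{2,4}\b")),
    ("Number",   re.compile(r"-?\d+(\.\d+)?\b")),
    ("String",   re.compile(r"[A-Za-z][A-Za-z0-9.-]*( [A-Za-z0-9.-]+)?")),
]

def tokenize(line):
    """Greedily match the expressions of E left to right across a report line."""
    fields, pos = [], 0
    while pos < len(line):
        if line[pos].isspace():
            pos += 1
            continue
        for name, pat in FIELD_PATTERNS:
            m = pat.match(line, pos)
            if m:
                fields.append((name, m.group(), pos))
                pos = m.end()
                break
        else:
            pos += 1   # skip an unrecognized character, e.g. a stray '#'
    return fields

print(tokenize("ACME CO    12/31/99    $1,234.56"))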
2. By far the most common reason for our process to fail was the problem of optional
fields in a line type. With more fields present on a line, our distance formulas are more
tolerant to optional fields. However it is often the case that lines with few fields also have
optional fields, and for lines with many fields, it is also often the case that several fields are
optional. Optional fields may lead to our system generating too many line types. Tuning the
threshold T of Algorithm 2 for a particular report can sometimes fix this problem, but it is
not a general solution.
3. There were several cases where we did not generalize two line types because of the
simplistic structure of Algorithm 2, which decides when to generalize based on a threshold.
In a future study we will apply clustering techniques such as recursive partitioning or nearest
neighbor (as in [9]) to find a better decision function to control when we generalize line types.
Such techniques are more likely to be general across business reports with very di#erent line
types, and will not be as sensitive to the order of processing line types.
4. Sometimes our uniformity assumption for line-type groups did not hold. That is,
Algorithm 4 assumes that if uv^k w is a line-type group, then u, v^k, and w always appear
together. In some cases lines in a uv^k w structure are optional, and in other cases (especially
for short lines) a single line type may be reused in two distinct uv^k w structures. Algorithm 4
needs to be revised to accommodate optional lines in a line-type group.
Conclusions
It is possible to automatically extract structure and data from business reports. Our process
correctly extracted the structure and data in 40 out of 62 type I business reports in a test
set we had not seen before.
While these initial numbers are encouraging, much work remains to be done. In Section 4
we mentioned four areas needing improvement: (1) field recognition, (2) detecting optional
fields, (3) improved line-type clustering techniques, and (4) handling optional lines within a
line-type group. We also plan to study structure and data extraction for type II reports. Here
it may be possible to use segmentation techniques like those applied in document imaging
and optical character recognition (OCR) algorithms (e.g. [7]). This may also enable more
accurate extraction of fields from lines, and may shed light on improved techniques for type I
line-type clustering. In the current investigation we have assumed fixed-width fields (padded
with blanks as needed); since some reports have variable-width fields, our process needs to
be extended to accommodate such reports. Also, our assumption that fields are separated by
white space does not always hold (some reports are designed to be printed on forms, which
may have lines between characters to divide fields). Future work should examine ways to
determine field boundaries in the absence of white space.
One weakness of our approach is the number of fixed, empirically determined constants
associated with our algorithms. We can surely achieve better results by using adaptive
techniques to dynamically compute and adjust these constants whenever possible.
After we have more fully mapped out structure and data extraction for type II reports,
we will construct a compressed data structure that contains a full inverted index of the
information in a business report R, together with sufficient information to reconstruct the
original pages of R from the inverted index. Often it is not enough to merely return the
data associated with a particular page; regulatory constraints (at a financial institution,
for example) may require that original pages be returned (e.g. records may be subject to
subpoena in legal proceedings, in which case the original report pages must be printed). Thus,
after fully inverting the data in a report, we must still be able to retrieve the original report
pages, including white space. The extensive structural information our system generates
constitutes an excellent domain-specific model for compressing reports.
Our business-report structure and data extraction system is implemented in Java. We also
implemented a graphical pattern editor tool to assist in the creation and debugging of regular
expressions for field extraction. This tool, available from our Web site as PatternEditor 1.0
[10], has general applicability for regular-expression debugging beyond our current project.
--R
Microsoft Corporation access page.
The AWK Programming Language.
Wrapper generation for semi-structured internet sources
Mining mainframe reports: Intelligent data extraction from print streams
Background structure in document images.
Data Junction Corporation home page.
An iterative clustering algorithm for interpretation of imperfect line drawings.
Data Extraction Group home page.
A scalable comparison-shopping agent for the World-Wide Web
A conceptual-modeling approach to extracting data from the web
IA Corporation home page.
Wrapper induction for information ex- traction
Regular inference with a tail clustering method.
Structural Methods in Pattern Recognition.
DataWatch Corporation home page.
URL: http://www.
Learning to extract text-based information from the World-Wide Web
URL: http://www.
The string to string correction problem.
Programming Perl.
--TR
Compilers: principles, techniques, and tools
The AWK programming language
Programming perl
A scalable comparison-shopping agent for the World-Wide Web
Wrapper generation for semi-structured Internet sources
NoDoSE - a tool for semi-automatically extracting structured and semistructured data from text documents
Ontology-based extraction and structuring of information from data-rich unstructured documents
The String-to-String Correction Problem
Conceptual-model-based data extraction from multiple-record Web pages
A Conceptual-Modeling Approach to Extracting Data from the Web | automatic wrapper generation;regular expressions;report structure;data and information extraction;business reports |
319974 | Requirement-based data cube schema design. | On-line analytical processing (OLAP) requires efficient processing of complex decision support queries over very large databases. It is well accepted that pre-computed data cubes can help reduce the response time of such queries dramatically. A very important design issue of an efficient OLAP system is therefore the choice of the right data cubes to materialize. We call this problem the data cube schema design problem. In this paper we show that the problem of finding an optimal data cube schema for an OLAP system with limited memory is NP-hard. As a more computationally efficient alternative, we propose a greedy approximation algorithm cMP and its variants. Algorithm cMP consists of two phases. In the first phase, an initial schema consisting of all the cubes required to efficiently answer the user queries is formed. In the second phase, cubes in the initial schema are selectively merged to satisfy the memory constraint. We show that cMP is very effective in pruning the search space for an optimal schema. This leads to a highly efficient algorithm. We report the efficiency and the effectiveness of cMP via an empirical study using the TPC-D benchmark. Our results show that the data cube schemas generated by cMP enable very efficient OLAP query processing. | Introduction
With wide acceptance of the data warehousing tech-
nology, corporations are building their decision support
systems (DSS) on large data warehouses. Many of
these DSS's have on-line characteristics and are termed
On-line Analytical Processing (OLAP) systems. Different
from the conventional database applications, a
DSS usually needs to analyze accumulative information.
Very often, the system needs to scan almost the entire
database to compute query answers, resulting in a very
poor response time. Conventional database techniques
are simply not fast enough for today's corporate decision
process.
The data cube technology has become a core
component of many OLAP systems. Data cubes are
pre-computed multi-dimensional views of the data in
a data warehouse [5]. The advantage of a data cube
system is that once the data cubes are built, answers
to decision support queries can be retrieved from the
cubes in real-time.
An OLAP system can be modeled by a three-level
architecture that consists of: (1) a query client; (2) a
data cube engine; and (3) a data warehouse server.
The bottom level of an OLAP system is a data warehouse
built on top of a DBMS. Data in the warehouse
comes from source operational databases. (In the simplest
case, the data warehouse could be the DBMS it-
self.) The warehouse needs to support fast aggrega-
tions, for example, by means of different indexing techniques
such as bit-map indices and join indices [11, 12].
The middle level of an OLAP system is a set of data
cubes, generated from the data warehouse. These cubes
are called base data cubes. Each base cube is defined by
a set of attributes taken from the warehouse schema.
It contains the aggregates over the selected set of at-
tributes. Other aggregates can be computed from the
base cubes. The set of base data cubes together define
a data cube schema for the OLAP system.
The top level of an OLAP system is a query client.
The client, besides supporting DSS queries, allows users
to browse through the data it caches from the data
cubes. Therefore, a query could be a very complicated
DSS query or a simple slicing and dicing request. A
query submitted to the query client, after being checked
against the data cube schema, will be directed to the
cube level if it can be answered by the data cubes there;
otherwise, the query is passed on to the warehouse where
the result is computed. Since data cubes store pre-computed
results, servicing queries with the cubes is
much faster than with the warehouse.
Various developments and research studies have been
made on the design of the three levels in an OLAP sys-
tem. Many commercial products are also now avail-
able. Some example query clients include Seagate Info
Worksheet [13] and Microsoft PivotTable Services [10].
Currently, these query client products mainly provide
browsing and report generation services on cached data.
In general, unless the answer is already cached, complex
DSS queries submitted to the client module will have
to be compiled into accesses to the data cubes or to
the warehouse. For the warehouse and the data cube
levels, there are products like Microsoft SQL OLAP Services
Hyperion Essbase OLAP Server[7], and IBM
DB2 OLAP Server [8]. At the warehouse level, many
vendors have been enhancing their DBMS products to
improve the performance on data aggregation [6]. As
for the data cube level, most of the research studies
focus on two issues: (1) how to compute aggregates
from a base cube efficiently[1], and (2) what data structures
should be used to represent the cubes, that is the
debate between relational OLAP (ROLAP) and multi-dimensional
OLAP (MOLAP)[1, 16].
As we have mentioned, the OLAP system would be
able to support real-time responses if the cube level can
intercept (or answer) all the queries. Unfortunately,
materializing all possible cubes so that all possible queries
can be answered by the cubes is clearly impractical
due to the high storage and maintenance costs. Instead,
one should carefully choose the right combination of
cubes so that query response time is optimized subject
to the constraints of the system's capacity (such as storage
space). We call the set of materialized base cubes
the data cube schema of the OLAP system. We also call
the problem of selecting a data cube schema the data
cube schema design problem.
The key to the design of a query-efficient OLAP
system thus lies on the design of a good data cube
schema. In particular, two very important questions
one needs to address are: on what basis shall we design
such a schema? And where should the schema be
derived from? We claim that the data cube schema
should not be based solely on the database schema in
the warehouse. Instead, a practical approach to the cube
design problem should be based on the users' query re-
quirements. For example, in the TPC-D benchmark
[15], the requirement is to answer the 17 DSS queries
that are specified in the benchmark efficiently. This is
because these queries presumably are driven from the
applications that use the data warehouse most often.
Given the user query requirements (i.e., a set of frequently
asked queries) and a set of system capacity constraints
(e.g., storage limitation), our goal is to derive
a data cube schema that optimizes query response time
without violating the system capacity constraints.
As we will see in Section 2.4, we prove that the optimization
problem is NP-hard. Finding the optimal data
cube schema is thus computationally very expensive. As
an alternative, we propose an efficient greedy approximation
algorithm cMP for the requirement-based data
cube schema design problem. Our algorithm consists of
two phases:
(1) Define an initial schema
The first phase is to derive an initial set of data
cubes (called initial schema) from the application re-
quirements. In this study, we assume that the requirements
are captured by a set of frequently-asked queries
or FAQs. The initial schema are selected such that
all the FAQs can be answered directly and efficiently.
In the TPC-D example, we can define a cube to answer
each one of the 17 DSS queries. For example, the
7th query of the TPC-D benchmark involve three at-
tributes: supp nation, cust nation, and shipdate yr.
(2) Schema Optimization
The second phase is to modify the initial schema so
that query response time is optimized subject to the sys-
tem's capacity constraints. The data cubes derived in an
initial schema may have lots of redundancy caused by
overlapping attributes. The total size of the cubes may
exceed the storage or memory limitation of the system.
Too many cubes would always induce a large maintenance
cost when the data in the underlying warehouse
changes. Therefore, it may be more cost-effective to
merge some of cubes. Cube merging may result in fewer
but perhaps larger cubes. In terms of query response
time, query processed using the merged cubes will in
general be slower than using the original cubes. Hence,
there is a trade-off between query performance and cube
maintenance. Schema optimization is to determine a
set of data cubes that replace some of the cubes in the
initial schema such that the query performance on the
resulted cubes is optimal under the constraint that the
total size of the resulted cubes is within an acceptable
system limit. 1 The set of data cubes obtained from the
optimization process is called the optimal schema for
the OLAP system.
The rest of the paper is organized as follows. In Section
2 we present a formal definition of the schema optimization
problem. Section 3 introduces the greedy algorithm
cMP for schema optimization. A performance
study of cMP is presented in Section 4. We use data
from the TPC-D benchmark in the study. Finally, we
conclude our paper in Section 5. Because of the lack of space,
some details are omitted. Readers can refer to [4] for a
1 It is reasonable to correlate the maintenance cost with the total
size of the cubes in the schema.
further information.
2 Schema Optimization
2.1 Search space of an optimal schema
In this paper we assume that the requirements of the
OLAP system are captured in a set of frequent queries.
We use Q to denote the initial schema of the data cubes
derived from the queries. The second phase of our cube
design process is to refine the set Q so that the maintenance
cost (such as storage) of the cube set is within the
capacity of the system. We can consider the refinement
as an optimization problem with the set of all possible
cube sets as the search space.
To simplify the problem, we assume that the database
in the data warehouse is represented by a star schema
[2]. Attributes in the queries come from the fields of
the dimensions and fact tables. Usually, the number
of dimensions and fact tables is not large, However,
there may be many attributes in one table. For exam-
ple, the table part includes the attributes p_partkey,
p_brand, p_container, p_size, p_type, etc. The part
dimension is in fact a multi-hierarchical dimension, as
shown in Figure 1.
Figure 1: The multi-hierarchical structure of the part dimension in TPC-D
In addition, in the star schema, some
attributes are stored directly in the fact table. For
example, the attributes l_shipdate, l_commitdate, and
l_receiptdate in the fact table lineitem are such at-
tributes. As a result, the number of attributes (dimen-
sions) that needs to be considered in a data cube design
is much larger than the number of dimension tables. In
TPC-D, 33 attributes need to be considered.
In [9], the notion of a composite lattice is used to integrate
multi-hierarchical dimensions with the lattice of
aggregates in a data cube. Assume that A = {a_1, a_2, ..., a_n}
is the set of all attributes on which queries can be posted.
Any subset of A can be used as the dimension attributes
to construct a data cube. The composite lattice
L = (P(A), ⪯) is the lattice of data cubes constructed from
all the subsets of A. (P(A) is the power set of A.)
The cube associated with the set A is the root of the
lattice L. For two different cubes c_1, c_2 ∈ L, the derived-from
relationship, c_1 ⪯ c_2, holds if c_1 can be derived
from c_2 by aggregation. For example, the cube
c_1 = [part, customer] can be derived from c_2 = [part,
customer, date]. The lattice L is the search space of
the optimization problem. As has been mentioned, n is
large (n = 33 in TPC-D). The search space L
of the optimization problem is enormous.
2.2 Schema optimization
Given an initial data cube schema Q, a search space L,
and a maintenance cost bound LIM, the schema optimization
problem is defined in Table 1.
Objective: Find C ⊆ L such that Cost(Q, C) is minimal
Constraint: ∀q ∈ Q, ∃c ∈ C such that q ⪯ c,
and MC(C) ≤ LIM
Table 1: Schema Optimization Problem
The objective is to find a cube set C such that the
cost of answering the frequent queries, Cost(Q; C) is
the smallest. The constraint states that any frequent
query q can be answered by some cube c in C and that
the total maintenance cost MC (C) of the cube set is
smaller than the system limit LIM. We will discuss various
measures of Cost and MC shortly.
For simplicity, we assume that the frequent queries
are equally probable. Since each cube in the initial
schema Q is derived from one distinct frequent query,
we use the same symbol q to denote both a cube in the
initial schema and its associated frequent query. Since
we do not want to make any assumption on the implementation
of the cubes and the structure of the queries,
a good measure of Cost(Q; C) is the linear cost model
suggested in [9]. In that model, if q ⪯ c, then the
cost of computing the answer for a query q using a
cube c is linearly proportional to the number of data
points in c. We use S(c) to denote the number of data
points in c. For each query q ∈ Q, we use F_C(q) to denote
the smallest cube in C that answers q. Formally,
F_C(q) is a cube in C such that q ⪯ F_C(q) and ∀x ∈
C, if q ⪯ x, then S(F_C(q)) ≤ S(x). We now define
Cost(Q, C) = Σ_{q∈Q} S(F_C(q)).
Maintaining a data cube requires disk storage and
CPU computation. Without assuming any implementation
method, two measures can be used to estimate
the maintenance cost MC(C) of a cube set.
MC_1(C) = |C|, i.e., the number of cubes in C.
MC_2(C) = Σ_{c∈C} S(c), i.e., the total number of
data points in the cubes. This is an estimate of
the total disk storage required.
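For concreteness, the definitions above can be rendered in a few lines of Python; this is our own illustration, modeling a cube as the frozenset of its dimension attributes and taking S as a given size estimate.

def derivable(q, c):
    return q <= c                         # q ⪯ c iff q's attributes ⊆ c's

def F(q, C, S):
    """Smallest cube in C that answers q (assumes one exists)."""
    return min((c for c in C if derivable(q, c)), key=S)

def cost(Q, C, S):
    return sum(S(F(q, C, S)) for q in Q)  # Cost(Q, C) under the linear model

def mc1(C):
    return len(C)                         # MC_1: number of cubes

def mc2(C, S):
    return sum(S(c) for c in C)           # MC_2: total number of data points

sizes = {frozenset({"part"}): 200,
         frozenset({"part", "customer"}): 10_000,
         frozenset({"part", "customer", "date"}): 600_000}
S = sizes.get
Q = [frozenset({"part"}), frozenset({"part", "customer"})]
C = set(sizes) - {frozenset({"part"})}    # materialize the two larger cubes
print(cost(Q, C, S), mc1(C), mc2(C, S))   # -> 20000 2 610000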
2.3 Related works
To the best of our knowledge, this paper is the first to
explore the data cube schema design problem. Several
papers have been published on data cube implementa-
tion. Cube selection algorithms have been proposed in
[9, 14]. These cube selection algorithms assume that
there is one root base cube c 0 which encompasses all
the attributes in the queries. They also assume that
some queries are associated with this root base cube c 0 ;
therefore c 0 2 Q. Very different from the schema optimization
problem, their selections always include c 0 in
the answer, i.e., c 0 2 C. However, in a general DSS such
as TPC-D, we do not anticipate many frequent queries
that involve all the attributes; hence, our cube schema
design problem is more general.
Cube selection algorithms start from a base cube
and determine what cubes deducible from it should be
implemented so that queries on the aggregates in the
base cube can be answered efficiently. Tackling a very
different problem, cube schema design tries to merge
the cubes in an initial schema bottom-up to generate
a set of cubes which provide an optimal query perfor-
mance, while system capacity constraints are satisfied.
The search space of the design problem is in general
much larger because of the large numbers of attributes
present in the initial schema. In short, cube selection algorithms
are for cube implementation but not for cube
schema design.
An interesting question is whether it is possible to
modify the selection algorithm [9] to solve the schema
design problem. One solution is to apply the selection
algorithm on the maximal cubes of Q, i.e., those that
cannot be deduced from any other cube in Q. In gen-
eral, there are more than one maximal cubes in Q. In
order to adopt the selection algorithm, we can include
all the maximal cubes in the answer set C as the initial
members, and then expand C by applying the selection
algorithms on them. The expansion stops when the total
size exceeds the storage bound LIM.
However, the above solution has a few undesirable
problems. First, if the maximal cubes alone have already
exceeded the maintenance bound, then the cube
selection algorithm fails. Second, if some of the maximal
cubes are highly correlated, e.g., with many overlapping
attributes, then merging some of them could be
beneficial and is sometimes even necessary. Selection
algorithms, however, never merge cubes. For example,
given a lattice over the five attributes A, B, C, D, and E, suppose both
cubes ABCD and BCDE are maximal, and S(ABCD) =
S(BCDE) = S(ABCDE). Using a selection algo-
rithm, both ABCD and BCDE are selected. How-
ever, replacing them by the cube ABCDE decreases
the maintenance cost without increasing the query cost.
Hence, the selection algorithm is not always applicable
to the cube schema design problem.
2.4 Complexity of the optimization problem
The schema optimization problem is computationally
difficult. Here, we summarize its complexity in the following
theorem. The proof of the theorem can be found
in [4].
Theorem 1 (1) Given an initial schema Q, a search
space L, and a bound LIM, the problem of finding a
subset C ⊆ L, such that C does not contain the root of
L, every q ∈ Q can be derived from some c ∈ C, and
MC(C) ≤ LIM, is NP-hard.
(2) Given a performance ratio r, r > 1, to find an
algorithm A for the Schema Optimization Problem defined
in Table 1 whose performance is bounded by r
times the optimal performance is NP-hard.
Theorem 1 tells us that the optimization problem is
a very difficult one. In theory, it is impossible to find
even an efficient approximation algorithm that can give
a performance guarantee. In the next section, we will
discuss a greedy approximation algorithm and discuss
the heuristics it uses to prune the vast search space
looking for a "good" solution. The efficiency and the
effectiveness of the algorithm are studied in Section 4.
3 The Algorithm cMP
We have developed a greedy algorithm called cMP (cube
Merging and Pruning). The outline of cMP is shown
in Figure 2.
1. C := Q;
2. while MC(C) > LIM do
3.     select cube sets D ⊆ C and A ⊆ L
4.         such that α(C, D, A) is maximum;
5.     C := (C − D) ∪ A;
6. end while
7. return C;
Figure 2: The algorithm cMP
During each iteration of the loop (lines 2 to 6) of
algorithm cMP, we select two cube sets D and A. The
cubes in D are removed from C, and the cubes in A are
added into C. The cube sets D and A are selected such
that the cubes in Q can still be answered by the new C.
The algorithm terminates when the maintenance cost
of the cube set no longer exceeds the limit LIM.
The selection of the cube sets D and A is governed
by the evaluation function α. The evaluation function is
defined such that the reduction in the maintenance cost
is large while the increment in the query cost is small.
In our algorithm, we use the following α function:
α(C, D, A) = (MC(C) − MC(C')) / (Cost(Q, C') − Cost(Q, C))    (2)
where C' = (C − D) ∪ A is the new C. The numerator
of formula (2) is the saving in maintenance cost. (We
have used MC_2 in this formula. The results in the rest
of the paper, unless stated explicitly, are also valid for
MC_1.) The denominator is the increment in the query
cost. F_{C'}(t) is the smallest ancestor of t in C'.
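Putting formula (2) and Figure 2 together, a naive greedy loop can be sketched as follows. This is our own illustration, reusing cost and mc2 from the sketch in Section 2.2; for simplicity it only considers pairs for D, in the spirit of 2MP, and uses a toy size estimate capped by the fact-table size.

CARD = {"A": 50, "B": 40, "C": 30}
FACT_ROWS = 5000

def S(c):
    n = 1
    for attr in c:
        n *= CARD[attr]
    return min(n, FACT_ROWS)  # a cube never holds more rows than the fact table

def smallest_common_ancestor(D):
    a = frozenset()
    for c in D:               # in the attribute lattice: union of attributes
        a |= c
    return a

def alpha(C, D, A, Q):
    C2 = (C - D) | A
    saved = mc2(C, S) - mc2(C2, S)            # decrease in maintenance cost
    slower = cost(Q, C2, S) - cost(Q, C, S)   # increase in query cost
    if slower <= 0:
        return float("inf") if saved >= 0 else float("-inf")
    return saved / slower

def cmp_schema(Q, LIM):
    C = set(Q)                # initial schema: one cube per frequent query
    while mc2(C, S) > LIM and len(C) > 1:
        best = None
        for d1 in C:          # 2MP-style: candidates D are pairs of cubes
            for d2 in C:
                if d1 == d2:
                    continue
                D = {d1, d2}
                A = {smallest_common_ancestor(D)}  # queries stay answerable
                a = alpha(C, D, A, Q)
                if best is None or a > best[0]:
                    best = (a, D, A)
        C = (C - best[1]) | best[2]
    return C                  # stops at the root cube if LIM is infeasible

Q = [frozenset("AB"), frozenset("BC"), frozenset("AC")]
print(cmp_schema(Q, LIM=4000))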
3.1 Properties of the evaluation function α
In cMP, the search space for D and A is enormous. In
this subsection, we show some properties of the evaluation
function which can be used to prune the search
space effectively.
Theorem 2 Suppose L and α are defined as above, and
C is the set of cubes when cMP enters an iteration.
Suppose that D_s and A_s are selected based on C which
maximize the value of α over all D ⊆ C and A ⊆ L.
(If there are more than one combinations that
give the maximum value, D_s is the combination with the
fewest cubes.) Then D_s and A_s must have the following
properties.
1. If |D_s| = 1, then A_s = ∅.
2. If D_s = {b_1, b_2, ..., b_m}, m > 1, then the following is
true:
(a) A_s = {a}, where a is the
smallest common ancestor of b_1, b_2, ..., b_m;
(b) there are no b_i, b_j ∈ D_s such that b_i ⪯ b_j.
According to the above theorem, A is determined
by D in case α attains its maximum value. Also, A
contains only the smallest common ancestor of the removed
cubes - this significantly reduces the search
space. Furthermore, item 2.b of Theorem 2 makes the
evaluation of many combinations of D unnecessary.
Corollary 1 If D contains cubes b_i, b_j such that b_i ⪯ b_j, then "α(C, D, A)
is maximum" is not true.
Corollary 1 tells us that we do not need to consider
a D which contains a cube that can be derived from
another cube in D. For such D, the corollary implies
that the α value is not the maximum. We develop the
procedure SelectCubes which uses this result to prune
candidates in the search space of cMP:
1. Build a directed acyclic graph (DAG) of all the
cubes in C in which the edges are the derived from
relationship ⪯ among the cubes in C.
2. Partition the graph into disjoint paths. We partition
the DAG by traversing the graph from a maximal
node which has no ancestor towards a bottom
node which has no descendant. The visited nodes
(and their associated edges) are removed. We repeat
the same procedure on the remaining nodes
until all nodes are removed.
3. The nodes on the same path have a derived-from
relationship. According to Corollary 1, no two
nodes from the same path should be picked together
for D. Hence, we pick at most one node
from each path. In practice, the number of paths
should not be large. This pruning significantly reduces
the number of possible candidates of D and
hence A from all possible combinations.
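Step 2 is a simple greedy chain decomposition; one possible rendering (our own, again modeling cubes as attribute sets, where b_1 is derived from b_2 iff b_1 ⊂ b_2) is:

def partition_into_paths(C):
    remaining = set(C)
    paths = []
    while remaining:
        # pick a maximal node: no remaining cube strictly contains it
        node = next(c for c in remaining
                    if not any(c < other for other in remaining))
        path = []
        while node is not None:
            path.append(node)
            remaining.discard(node)
            # walk down to any remaining descendant of the last node
            node = next((c for c in remaining if c < path[-1]), None)
        paths.append(path)
    return paths

cubes = [frozenset("AB"), frozenset("ABC"), frozenset("BD"), frozenset("ABCD")]
for p in partition_into_paths(cubes):
    print(["".join(sorted(c)) for c in p])
# e.g. ['ABCD', 'ABC', 'AB'] and ['BD']

By Corollary 1, a candidate D then picks at most one cube from each of the returned paths.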
Following Theorem 2 and the path-based processing, we
derive another pruning technique for further reduction
of the search space.
Corollary 2 Assume C is partitioned into p paths:
P_1, P_2, ..., P_p, and let b_{i,j} denote the j-th node from the bottom of path P_i.
Let a_{k_1,k_2,...,k_m} be the smallest common ancestor of b_{1,k_1}, b_{2,k_2}, ..., b_{m,k_m},
m ≤ p. If ∃t ≤ m such that S(a_{k_1,k_2,...,k_m}) ≥ S(F_C(b_{t,k_t})),
then for any set of cubes D that contains
b_{t,k_t}, α cannot attain the maximum
value using D.
The corollary suggests that we can organize the Se-
lectCubes procedure starting from the bottom of each
path to compose candidates D from the nodes. When
the select procedure reaches a candidate (combination)
that satisfies the condition of Corollary 2, then those
yet-to-be-evaluated candidates of D "above" the current
combination in the lattice hierarchy can be ig-
nored. We illustrate the pruning process with an example
shown in Figure 3. The cube set C is partitioned
into 3 paths containing 3, 4, and 3 nodes respec-
tively. We select the combination for D from the bottom
nodes of the paths: b 1;1 ; b 2;1 ; b 3;1 . Suppose that when
we evaluate the combination D = {b_{1,2}}, the condition S(rbuf.a) ≥
S(F_C(b_{1,2})) is true. According to Corollary 2, all the
remaining combinations for D which include b_{1,2} do not
need to be evaluated. These pruned combinations are:
{b_{1,2}, b_{2,j}}, {b_{1,2}, b_{3,j}}, and all the 9 yet-to-be-evaluated combinations of
3 cubes that include b_{1,2}.
Eleven combinations are pruned in this case.
Figure 3: An example of pruning in cMP
/* Input: L: search space; C: a set of cubes; r: size restriction;
   Output: D: cubes to be removed; a: a new cube to be added */
procedure SelectCubes(input: L, C, r)
  partition C into p_total paths: path[1], ..., path[p_total];
  for i := 1 to p_total do
    add path[i] to rbuf;
    call procedure iterate_proc(rbuf, res, r, i + 1);
    remove path[i] from rbuf;
  end for
  return result res;

procedure iterate_proc(rbuf, res, r, start)
  /* rbuf.Paths: all selected paths;
     path[i].curNode: the selected cube on the i-th path;
     rbuf.a: the smallest common ancestor
       of the candidate cube combination;
     res: the buffer containing the result up to this point;
     r: restriction of candidate size;
     start: the first path not having been selected yet */
  do {
    evaluate the current combination and update res;
    if (there exists a selected path p
        such that S(rbuf.a) >= S(F_C(p.curNode)))
      break; /* Corollary 2: no need for further iteration */
    if (start <= p_total and rbuf.noPaths < r)
      for i := start to p_total do { /* add more paths to rbuf */
        add path[i] to rbuf;
        call procedure iterate_proc(rbuf, res, r, i + 1);
        remove path[i] from rbuf;
      }
    advance curNode on the last selected path;
  } while (the last selected path has more nodes)
Figure 4: The procedure SelectCubes of rMP
3.2 The rMP algorithm
Even though many combinations can be pruned while
cMP is searching for the optimal α value, it may still
need to consider a large number of combinations involving
nodes on multiple paths. To reduce the complexity,
one option is to restrict the number of nodes in a candidate
combination. We remark that Theorem 2 still
holds even with the size restriction. We call the search
algorithm rMP when only candidates of size not larger
than a certain constant r (r > 1) are considered. Our
performance studies show that rMP could be a good
approximation of the unrestricted cMP. Obviously, the
goodness depends on the value of r. When r = p, where
p is the number of paths in C, rMP becomes cMP. We
list the procedure SelectCubes for rMP in Figure 4.
In the first step of the procedure SelectCubes, C is
partitioned into a number of paths. The loop from line
2 to line 6 evaluates all the possible combinations of
D by traversing all the paths via a recursive procedure
iterate_proc. The set of cubes to be removed (D_s) and
the cube to be added (a) which attain the maximum
value of α are returned at line 7.
The sequence of node traversal is constructed by the
following two iterative loops:
• Combinations of paths. It is constructed by the
loop from line 2 to line 6 and the loop from line 22
to line 26 inside the recursive procedure. Figure 3
shows an example. Suppose the size restriction r
is set to 3. The sequence of path combinations considered
by SelectCubes is {P_1}, {P_1, P_2}, {P_1, P_2, P_3},
{P_1, P_3}, {P_2}, {P_2, P_3}, {P_3}.
• Traverse the nodes on a path. This is performed
at line 16 when the procedure iterate_proc
is called, and at line 27 in each iteration of the
loop between line 17 and line 28. In Figure 3, the
traversal on each path starts from its bottom node,
e.g., b_{1,1}, then b_{1,2}, then b_{1,3} on path P_1.
During the traversal, the result in Corollary 2
is used to prune combinations that cannot attain
the maximum value. That is the evaluation in line
18 and the two condition checks that follow.
3.3 The relationship between rMP and 2MP
Although the pruning methods introduced above are very
effective for rMP, its complexity is still high when r is
large. It is thus interesting to see how 2MP performs
compared with the more general rMP.
Theorem 3 Suppose C is in a state when rMP (r > 2)
is entering an iteration to identify new sets D and
A. Assume that the maintenance cost function is MC_2.
If, for all possible D's which cannot be pruned away
by either Corollary 1 or 2, D satisfies a condition on
S(a) (the exact condition is omitted here; see [4]),
then the (D, A) pairs selected by
rMP (r > 2) and 2MP are identical.
Theorem 3 shows a sufficient condition under which rMP (r > 2)
and 2MP are equivalent. When we try our algorithm
on some real data sets, the results obtained by 2MP
are very close to those of rMP, even for some large values of r.
The restricted but more efficient rMP algorithm (with
a small r) is thus a viable choice on many occasions.
4 Performance study
We have carried out a performance study of the algorithms
on a Sun Enterprise 4000 running Solaris 2.6.
Our first goal is to study the "goodness" of the schemas
generated by cMP and 2MP. The second goal is to study
the efficiency of the two algorithms, and their pruning
effectiveness.
We use the TPC-D benchmark data for the study.
The database is generated with a scale factor of 0.5
[15]. The size of the database is about 0.5 GB. All the
17 DSS queries in the benchmark are treated as frequent queries
in our model.
4.1 Goodness of the schema generated
In the first experiment, we compare Cost(Q, C) of the
outputs, C, from both cMP and 2MP. The results are
also compared with a random selection algorithm (la-
beled 2Rand) that randomly merges pairs of cubes iteratively
until the maintenance cost limit is no longer exceeded.
We use MC_1(C), the number of cubes in C, to compute
the maintenance cost for simplicity.
Figure 5: Query cost of the schemas generated (cMP, 2MP, and 2Rand)
Figure 5 shows the result. The graph shows that
cMP and 2MP are significantly better than random se-
lection, in particular, when the cost limit is not too
small so that there are more combinations for D for the
algorithm to make a wise pick.
In our experiment, the query costs of cMP and 2MP
are the same except that the curve for cMP has a gap
at size 10: at one point cMP has
reduced the schema to 11 cubes. In the next reduction,
3 cubes are selected to be replaced by one cube; hence,
the size of C becomes 9. In contrast, in each step, 2MP
replaces no more than 2 cubes by another.
4.2 Efficiency of cMP
Our second goal is to study the efficiency of cMP. In
particular, we are interested in studying its effectiveness
in pruning the search space of the optimization
problem.
The effect of the first pruning method that the newly
added cubes A can be determined from the combination
D is evident. Therefore, we only consider the other two
pruning methods which are based on Corollaries 1 and
2. We measure the effectiveness by the average pruning
rate, defined as the percentage of the pruned combinations
of D over the number of all possible combinations.
2 The closeness of the two curves from cMP and 2MP is due to the
extremely low correlations between the queries (cubes) in TPC-D.
The results are shown in Figure 6. From the figure, we
see that the average pruning rate is higher than 80% in
all the cases. The pruning rate becomes smaller while
the maintenance cost limit decreases. This is because,
with a small limit, cMP is forced to merge cubes that
are at a higher level of the lattice hierarchy (Figure 3).
Hence, the chance of pruning becomes smaller. In fact,
due to the design purpose of the benchmark, the correlations
among the initial set of cubes in TPC-D is quite
low. The high pruning rate in our experiment shows
that cMP is effective even in such a not-so-favorable
situation. In many general applications, we expect that
the frequently asked DSS queries will have high corre-
lation; and the pruning rate of cMP will even be better
than what is shown in our experiment.
Figure 6: Average pruning rate of cMP
Figure 7: Response time of cMP and 2MP
Finally, we compared the efficiency of cMP and 2MP
by measuring their response times. Figure 7 shows
that 2MP is at least two orders of magnitude faster than
cMP. Considering also the effectiveness of 2MP (Fig-
ure 5), our results show that 2MP is an effective and efficient
approximation solution to the data cube schema
design problem.
5 Discussion and Conclusion
The basis of our 2-phase schema design approach is a set
of cubes extracted from the query requirements. How
valid is this approach? We have observed that some
vendors have already been doing something similar. For
example, Microsoft SQL OLAP server allows the users
to optionally log queries submitted to it to fine tune the
set of cubes [10]. From these logs, frequent queries can
be identified and grouped into similar types. It is thus
feasible to identify the cubes in the initial schema from
the frequent queries. Currently, general practitioners
design cube schema in an ad-hoc way, which is very
likely far from optimal. This problem will become very
serious when data cubes are required to be built on
large data warehouses such as those from retail giants or
Internet e-commerce shops, as their databases contain
large numbers of attributes.
We have formulated the second phase of the design
problem as an optimization problem, and have developed
an efficient greedy algorithm to solve it.
Once a data cube schema is defined, the most imminent
problem that follows is query processing. Given
a DSS query submitted to the query client, the query
client module needs to determine whether the query
should be processed at the data cube level or at the
warehouse level. If a query can be answered by the
cubes, one needs to determine which cube should be
used. If multiple solutions exist, one needs to determine
the best choice of a cube.
We have proposed a two-phase approach to deal with
the design problem in a data cube system: (1) an initial
schema is derived from the user's query requirements;
(2) the final schema is derived from the initial schema
through an optimization process. The greedy algorithm
cMP proposed for the optimization is very effective in
pruning the search space of the optimal solution. Variants
of cMP have been studied to reduce the search
cost. Experiments on real data (TPC-D) have been
performed to investigate the behavior of cMP. Results
observed from the performance study confirm that cMP
is an efficient algorithm.
--R
On the computation of multidimensional aggregates.
An Overview of Data Warehousing and OLAP Technology.
http://www.
IBM DB2 OLAP Server
Implementing data cubes efficiently.
Microsoft SQL OLAP Services and PivotTable Ser- vice
Improved Query Performace with Variant Indexes.
Materialized View Selection for Multidimensional Datasets.
Transaction Processing Performance Council.
An array-based algorithm for simultaneous multi-dimensional aggregates
--TR
Multi-table joins through bitmapped join indices
Implementing data cubes efficiently
An overview of data warehousing and OLAP technology
Improved query performance with variant indexes
An array-based algorithm for simultaneous multidimensional aggregates
Data Cube
Materialized View Selection for Multidimensional Datasets
Aggregate-Query Processing in Data Warehousing Environments
On the Computation of Multidimensional Aggregates
--CTR
Tapio Niemi , Jyrki Nummenmaa , Peter Thanisch, Constructing OLAP cubes based on queries, Proceedings of the 4th ACM international workshop on Data warehousing and OLAP, p.9-15, November 09-09, 2001, Atlanta, Georgia, USA
Edward Hung , David W. Cheung , Ben Kao, Optimization in Data Cube System Design, Journal of Intelligent Information Systems, v.23 n.1, p.17-45, July 2004 | data cubes;OLAP;DSS;data cube schema design |
320020 | Updates and view maintenance in soft real-time database systems. | A database system contains base data items which record and model a physical, real world environment. For better decision support, base data items are summarized and correlated to derive views. These base data and views are accessed by application transactions to generate the ultimate actions taken by the system. As the environment changes, updates are applied to the base data, which subsequently trigger view recomputations. There are thus three types of activities: base data update, view recomputation, and transaction execution. In a real-time system, two timing constraints need to be enforced. We require that transactions meet their deadlines (transaction timeliness) and read fresh data (data timeliness). In this paper we define the concept of absolute and relative temporal consistency from the perspective of transactions. We address the important issue of transaction scheduling among the three types of activities such that the two timing requirements can be met. We also discuss how a real-time database system should be designed to enforce different levels of temporal consistency. | Introduction
A real-time database system (RTDB) is often employed in a dynamic environment to monitor
the status of real-world objects and to discover the occurrences of "interesting" events [15, 10,
2, 3]. As an example, a program trading application monitors the prices of various stocks,
financial instruments, and currencies, looking for trading opportunities. A typical transaction
might compare the price of German Marks in London to the price in New York and if there
is a significant difference, the system will rapidly perform a trade. The state of a dynamic
environment is often modeled and captured by a set of base data items within the system.
Changes to the environment are represented by updates to the base data. For example, a
financial database refreshes its state of the stock market by receiving a "ticker tape" - a
stream of price quote updates from the stock exchange.
Figure 1: A Real Time Database System
To better support decision making, the large numbers of base data items are often summarized
into views. Some example views in a financial database include composite indices (e.g.,
S&P 500, Dow Jones Industrial Average and sectoral sub-indices), time-series data (e.g., 30-day
moving averages), and theoretical financial option prices, etc. For better performance, these
views are materialized. When a base data item is updated to reflect certain external activity,
the related materialized views need to be updated or recomputed as well.
Besides base item updates and view recomputations, application transactions are executed
to generate the ultimate actions taken by the system. These transactions read the base data
and views to make their decisions. For instance, application transactions may request the
purchase of stock, perform trend analysis, signal alerts, or even trigger the execution of other
transactions. Application transactions may also read other static data, such as a knowledge
base capturing expert rules.
Figure 1 shows the relationships among the various activities in such a real-time database system.
system. Notice that updates to base data or recomputations for derived data may also be run
as transactions (e.g., with some of the ACID properties). In those cases, we refer to them as
update transactions and recomputation transactions. When we use the term transaction alone,
we are referring to an application transaction.
Application transactions can be associated with one or two types of timing requirements:
transaction timeliness and data timeliness. Transaction timeliness refers to how "fast" the
system responds to a transaction request, while data timeliness refers to how "fresh" the data
read is, or how closely in time the data read by a transaction models the environment. Stale
data is considered less useful due to the dynamic nature of the data.
Satisfying the two timeliness properties poses a major challenge to the design of a scheduling
algorithm for such a database system. This is because the timing requirements pose conflicting
demands on the system resources. To keep the data fresh, updates on base data should be
applied promptly. Also, whenever the value of a base data item changes, affected derived
views have to be recomputed accordingly. The computational load of applying base updates
and performing recomputations can be extremely high, causing critical delays to transactions,
either because there are not enough CPU cycles for them, or because they are delayed waiting
for fresh data. Consequently, application transactions may have a high probability of missing
their deadlines.
In this paper we study the intricate balance in scheduling the three types of activities:
updates, recomputations, and application transactions to satisfy the two timing requirements
of data and transactions. Our goals are:
• to define temporal correctness from the perspective of transactions;
• to investigate the performance of various transaction scheduling policies in meeting the
two timing requirements of transactions under different correctness criteria;
• to address the design issues of an RTDB such that temporal correctness can be enforced.
To make the right decision, application transactions need to read fresh data that faithfully
reflects the current state of the environment. The most desirable situation is that all the
data items read by a transaction are fresh until the transaction commits. This requirement,
however, could be difficult to meet. As a simple example, suppose a transaction whose execution time
is 1 second requires a data item that is updated once every 0.1 seconds. The transaction will
hold the read lock on the data item for an extensive period of time, during which no new updates
can acquire the write lock and be installed. The data item will be stale throughout most of the
transaction's execution, and the transaction cannot be committed without using outdated data.
A stringent data timing requirement also hurts the chances of meeting transaction deadlines.
Let us consider our simple example again. Suppose the data update interval is changed from 0.1
seconds to 2 seconds. In this scenario, even though it is possible that the transaction completes
without reading stale data, there is a 50% chance that a new update on the data arrives while
the transaction is executing. To insist on a no-stale-read system, the transaction has to be
aborted and restarted. The delay suffered by transactions due to aborts and restarts, and the
subsequent waste of system resources (CPU, data locks) is a serious problem. The definition
of data timeliness thus needs to be relaxed to accommodate those difficult situations (e.g., by
allowing transactions to read slightly outdated data, probably within a predefined tolerance
level). We will discuss a number of options for relaxing the data timing requirement in this
paper.
Given a correctness criterion, we need a suitable transaction scheduling policy to enforce
it. For example, a simple way to ensure data timeliness is to give updates and recomputations
higher priorities over application transactions, and to abort a transaction when it engages in
a data conflict with an update or recomputation. This policy ensures that no transactions
can commit using old data. However, giving application transactions low priorities severely
lower their chances of meeting deadlines. This is especially true when updates (and thus
recomputations) arrive at a high rate. We will investigate how transaction should be scheduled
to balance the contrary requirements of data and transaction timeliness.
The rest of this paper is organized as follows. In Section 2 we discuss some related works. In
Section 3 we discuss the properties of updates, recomputations, and application transactions.
In particular, we will discuss the implications of these properties on the design of a transaction
scheduler and a concurrency controller. Section 4 proposes three temporal correctness criteria.
In Section 5 we list out the options of transaction scheduling and concurrency control that
support the different correctness criteria. In Section 6 we define a simulation model to evaluate
the performance of the scheduling policies. The results are presented in Section 7. We conclude
the paper in Section 8.
2 Related Works
In [2], the load balancing issues between updates and transactions in a real-time database system
are studied. In the system model, updates come at a very high rate, while transactions must
be committed before their deadlines. The authors propose several heuristics and examine their
effectiveness in maintaining data freshness while not sacrificing transaction timeliness. They
point out that the On-Demand strategy, with which updates are only applied when required by
transactions, gives the best overall performance.
In [3], the balancing problems between derived data (views) 1 updates and transactions
are studied. It is noted that recomputations often come in bursts, obeying the principle of
update locality. The authors propose the Forced Delay approach which delays the triggering of
a recomputation for a short period, so that recomputations on the same view object can be
batched into a single computation. The study shows that batching significantly improves the
performance of the RTDB.
The two studies reported in [2] and [3] are very closely related; the former studies updates
and transactions, while the latter studies recomputation transactions. However, they do not
consider the case when updates, recomputations, and transactions are all present. Also, the
studies report how likely temporal consistency is maintained under different scheduling policies,
but do not discuss how to enforce the consistency constraints. In this paper we consider
various scheduling policies for enforcing temporal consistency in an RTDB in which updates,
recomputations, and transactions co-exist.
In [13], Song and Liu discuss data temporal consistency in a real-time system that executes
periodic tasks. In their model, tasks are either sensor (write-only) transactions, read-only
transactions or update (read-and-write) transactions. Transactions must read temporally consistent
data (absolutely or relatively) in order to deliver correct results. Since multiversion
databases have been shown to offer a significant performance gain over single-version ones, the
authors propose and evaluate two multiversion concurrency control algorithms (lock-based and
optimistic) in their studies.
1 In this paper, we use the terms "views" and "derived items" interchangeably.
In multiversion locking concurrency control, two-phase locking is used to serialize the
read/write operations of update transactions, while timestamps are used to locate the appropriate
versions to be read by read-only transactions. In multiversion optimistic concurrency
control, an update transaction goes through three phases: a read phase, a validation phase, and a possible
write phase. During the read phase, a transaction reads and writes the most recent versions of
data in its own workspace without locking the data. When it is ready to commit, the transaction
enters the validation phase. Any conflicting update transactions found are immediately
aborted and restarted. If a transaction passes its validation phase, it enters the write phase in
which the new version of each object in the transaction's local workspace becomes permanent
in the system. Read-only transactions will read the most recent and committed version of data,
and go through only one phase - the read phase.
The use of multiversion techniques in both algorithms serves the common purpose of eliminating
the conflicts between read-only and update transactions. This is because read-only
transactions can always read the committed versions, without contending resources with write
operations. Hence read-only transactions are never restarted, and the costs of concurrency
control and restart can be significantly reduced.
3 Updates, Recomputations, and Transactions
In this section we take a closer look at some of the properties of updates, recomputations, and
application transactions. We will discuss how these properties affect the design of a real-time
database system. In particular, we discuss the concept of update locality, high fan-in/fan-out
of recomputations, and the timing requirements of transactions. These properties are common
in many real-time database systems such as programmed stock trading.
For many real-time database applications, managing the data input streams and applying
the corresponding database updates represents a non-trivial load to the system. For example,
a financial database for program trading applications needs to keep track of more than three
hundred thousand financial instruments. To handle the U.S. markets alone, the system needs
to process more than 500 updates per second [5]. An update usually affects a single base data
item (plus a number of related views).
The high volume of updates and their special properties (such as write-only or append-only
semantics) warrant special treatment in an RTDB. In particular, they should not be executed with
full transactional support. If each update is treated as a separate transaction, the number
of transactions will be too large for the system to handle. (Recall that a financial database
may need to process more than 500 updates per second.) Application transactions will also
be adversely affected because of resource conflicts against updates. As is proposed in [3], a
better approach is to apply the update stream using a single update process. Depending on the
scheduling policy employed, the update process installs updates in a specific order. It could be
linear in a first-come-first-served manner, or on-demand upon application transactions' requests.
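As a toy illustration of the two installation orders (the paper describes the update process only abstractly, so the class below and its method names are our own):

from collections import deque

class UpdateProcess:
    def __init__(self):
        self.queue = deque()   # received but uninstalled updates, FCFS order
        self.db = {}           # base item -> current value

    def receive(self, item, value):
        self.queue.append((item, value))

    def install_fcfs(self):
        # linear policy: drain the whole stream in arrival order
        while self.queue:
            item, value = self.queue.popleft()
            self.db[item] = value

    def read_on_demand(self, item):
        # On-Demand policy: install only the pending updates of the item
        # a transaction actually requests; everything else stays queued.
        remaining = deque()
        for i, v in self.queue:
            if i == item:
                self.db[i] = v
            else:
                remaining.append((i, v))
        self.queue = remaining
        return self.db.get(item)

up = UpdateProcess()
up.receive("IBM", 101.5); up.receive("HSBC", 88.0); up.receive("IBM", 101.8)
print(up.read_on_demand("IBM"))   # -> 101.8; the HSBC update is still queued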
When a base data item is updated, the views which depend on the base item have to be
updated or recomputed as well. The system load due to view recomputations can be even higher
than that required to install updates. While an update involves a simple write operation,
recomputing a view may require reading a large number of base data items (high fan-in), 2 and
complex operations 3 . Also, an update can trigger multiple recomputations if the updated base
item is used to derive a number of views (high fan-out).
One way to reduce the load due to updates and recomputations is to avoid useless work.
An update is useful only if the value it writes is read by a transaction. So if updates are done
in-place, an update to a base item b need not be executed if no transactions request b before
another update on b arrives. Similarly, a recomputation on a view need not be executed if no
transactions read the view before the view is recomputed again. This savings, however, can
only be realized if successive updates or recomputations on the same data or view occur closely
in time. We call this property update locality [3].
Fortunately, many applications that deal with derived data exhibit such a property. Locality
occurs in two forms: time and space. Updates exhibit time locality if updates on the same item
occur in bursts. Space locality refers to the phenomenon that when a base item b, which
affects a derived item d, is updated, it is very likely that a related set of base items, affecting
d, will be updated soon. For example, changes in a bank's stock price may indicate that a
certain event (such as an interest rate hike) affecting bank stocks has occurred. It is thus likely
that other banks' stock prices will change too. Each of these updates could trigger the same
recomputation, say for the finance sectoral index. An example of update locality found in real
financial data is reported in [3].
Update locality implies that recomputations for derived data occur in bursts. Recomputing
the affected derived data on every single update is probably very wasteful because the same
derived data will be recomputed very soon, often before any application transaction has a
chance to read the derived data for any useful work. Instead of recomputing immediately,
a better strategy is to defer recomputations by a certain amount of time and to batch or
coalesce the same recomputation requests into a single computation. We call this technique
recomputation batching.
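A minimal sketch of such a deferred, coalescing scheduler (our own construction, in the spirit of the Forced Delay approach of [3]; names like BatchingScheduler are invented):

class BatchingScheduler:
    def __init__(self, delay, recompute):
        self.delay = delay           # forced-delay window, in seconds
        self.pending = {}            # view -> arrival time of first request
        self.recompute = recompute   # callback that actually recomputes a view

    def request(self, view, now):
        # coalesce: later requests for the same view fold into the pending one
        self.pending.setdefault(view, now)

    def tick(self, now):
        # fire every pending recomputation whose delay window has expired
        due = [v for v, t0 in self.pending.items() if now - t0 >= self.delay]
        for view in due:
            del self.pending[view]
            self.recompute(view)

sched = BatchingScheduler(delay=0.5, recompute=lambda v: print("recompute", v))
sched.request("finance_index", now=0.0)   # burst of updates on bank stocks
sched.request("finance_index", now=0.1)   # coalesced with the first request
sched.request("finance_index", now=0.3)
sched.tick(now=0.6)                       # one recomputation instead of three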
Application transactions may read both base data and derived views. One very important
design issue in the RTDB system is whether to guarantee consistency between base data and
the views. To achieve consistency, recomputations for derived data are folded into the triggering
updates. Unfortunately, running updates and recomputations as coupled transactions is not
desirable in a high performance, real-time environment. It makes updates run longer, blocking
other transactions that need to access the same data. Indeed, [4] shows that transaction response
time is much improved when events and actions (in our case updates and recomputations) are
2 For example, the S&P 500 index is derived from a set of 500 stocks; a summary of a stock's price in an
one-hour interval could involve hundreds of data points.
3 For example, computing the theoretical value of a financial option price requires computing some cumulative
distributions.
decoupled into separate transactions. Thus, we assume that recomputations are decoupled from
updates. We will discuss how consistency can be maintained in Section 5.
Besides consistency constraints, application transactions are associated with deadlines. We
assume a firm real-time system. That is, missing a transaction's deadline makes the transaction
useless, but it is not detrimental to the system. In arbitrage trading, for example, it is better not
to commit a tardy transaction, since the short-lived price discrepancies which trigger trading
actions disappear quickly in today's efficient markets. Occasional losses of opportunity are not
catastrophic to the system. The most important performance metric is thus the fraction of
deadlines the RTDBS meets. In Section 5 we will study a number of scheduling policies and in
Section 7 we evaluate their performance on meeting deadlines.
4 Temporal Correctness
One of the requirements in an RTDB system is that transactions read fresh and consistent
data. Temporal Consistency refers to how well the data maintained by the RTDB models the
actual state of the environment [11, 13, 6, 7, 8, 14]. Temporal consistency consists of two
components: absolute consistency (or external consistency) and relative consistency. A data
item is absolutely consistent if it timely reflects the state of an external object that the data
item models. A set of data items are relatively consistent if their values reflect the states of the
external objects at the same time instant.
One option to define absolute consistency (or, conversely, staleness) is to compare the current time
with an update's arrival time (a timestamp), which is an indication of which snapshot of the
external object the update is representing. A data item is considered stale if the difference between
the current time and its last update's timestamp is larger than some predefined maximum age
T . (The value T is also called the absolute validity interval.) We call this definition Maximum
Age (MA) [2]. Notice that with MA, even if a data object does not change value, it must still be
periodically updated, or else it will become stale. Thus, MA makes more sense in applications
where data items are continuously changing in time.
Another option is to be optimistic and assume that a data object is always fresh unless
an update has been received by the system but not yet applied to the data. We will refer to
this definition as Unapplied Update (UU). UU is more suitable for discrete data objects which
change at discrete points in time and not continuously [12]. For example, in program trading,
stock prices are updated when trades are made, not periodically. In such a context, age has
less meaning since a price quote could be old but still be correct. UU is more general than MA,
since the arrival times of updates are not assumed known in advance. Figure 2 illustrates the
two staleness models.
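As a minimal sketch of the two staleness tests (the data layout is our own assumption; the paper defines the models, not an API):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    last_applied_ts: float                     # timestamp of the last installed update
    unapplied_update: Optional[object] = None  # received but not yet applied

def is_stale_ma(item: Item, now: float, t_max: float) -> bool:
    # Maximum Age: stale once the last update's timestamp is more than
    # T (here t_max, the absolute validity interval) behind the current time.
    return now - item.last_applied_ts > t_max

def is_stale_uu(item: Item) -> bool:
    # Unapplied Update: optimistically fresh unless an update has been
    # received by the system but not yet applied to the item.
    return item.unapplied_update is not None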
If a base data item is updated but its associated views are not recomputed yet, the database
is not relatively consistent.
[Figure 2: Maximum Age (MA) and Unapplied Update (UU). Under UU, an item becomes stale when a new
update request is received and is fresh again once the new update commits; under MA, an item becomes
stale when the maximum age since its last committed update is reached.]
It is clear that an absolutely consistent database must also be relatively consistent. However,
the converse is not true. For example, a relatively consistent
database that never installs updates remains relatively consistent even though its data are
all stale. An ideal system that performs updates and recomputations instantaneously would
guarantee both absolute and relative consistency. However, as we have argued, to improve per-
formance, updates and recomputations are decoupled, and recomputations are batched. Hence,
a real system is often in a relatively inconsistent state. Fortunately, inconsistent data do no
harm if no transactions read them. Hence, we need to extend the concept of temporal consistency
from the perspective of transactions. Here, we formally define our notion of transaction
temporal consistency. We start with the definition of an ideal system first, based on which
correctness and consistency of real systems are measured.
Definition 1: instantaneous system (IS) An instantaneous system applies base data
updates and performs all necessary recomputations as soon as an update arrives, taking zero
time to do it.
Definition 2: absolute consistent system (ACS) In an absolute consistent system, an
application transaction, with a commit time t and a readset R, is given the values of all the
objects in R such that this set of values can be found in an instantaneous system at time t.
The last definition does not state that in an absolute consistent system data can never be
stale or inconsistent. It only states that no transactions can read stale or inconsistent data.
It is clear that transactions are given a lower execution priority compared with updates and
recomputations. For example, if an update (or the recomputations it triggers) conflicts with a
transaction on a certain data item, the transaction has to be aborted. Maintaining an absolute
consistent system may thus compromise transaction timeliness. To have a better chance of
meeting transactions' deadlines, we need to upgrade their priorities. A transaction's priority
can be upgraded in two ways, with respect to its accessibility to data and CPU. For the former,
transactions are not aborted by updates due to data conflicts, while for the latter, transactions
are not always scheduled to execute after updates and recomputations.
[Figure 3: Illustrates the differences between ACS, weak ACS, and RCS. Suppose a transaction T reads
objects during its execution, with maximum staleness Δ, and let the j-th version of an object o_i be
denoted o_{ij}. In an ACS, the set of object versions read by T must be the one (e.g., (o_{12}, ...))
that can be found in an IS at the commit time of T. In a weak ACS, the versions read (e.g.,
(o_{11}, ...)) need only be found in an IS at a time not earlier than the start time of T. In an RCS,
the versions available to T (e.g., (o_{11}, ...)) need only be found in an IS at a time not earlier
than t_0, the start time of T minus Δ.]
Definition 3: weak absolute consistent system (weak ACS) In a weak absolute consistent
system, an application transaction, with a start time t and a readset R, is given the values
of all the objects in R such that this set of values can be found in an instantaneous system at
time t.
A weak ACS is very similar to an ACS in that transactions in both systems read relatively
consistent data. The major difference is that in a weak ACS, the data that a transaction reads
need only be fresh to the point when the transaction reads them, not when the transaction
commits (as in an ACS). The implication is that once a transaction successfully read-locks a
set of relatively consistent data, it need not be aborted by later updates due to data conflicts.
The transaction thus has a better chance of finishing before its deadline.
We can further relax the requirement of data freshness by allowing transactions to read
slightly stale data. Although this is not desirable with respect to the usefulness of the information
read by a transaction, it can improve the probability of meeting transaction deadlines.
Definition 4: relative consistent system (RCS) In a relative consistent system with a
maximum staleness Δ, an application transaction with a start time t and a readset R is given
the values of all the objects in R such that this set of values can be found in an instantaneous
system at time t_1, where t − Δ ≤ t_1 ≤ t.
Essentially, an RCS allows some updates and recomputations to be withheld for the benefit
of expediting transaction execution. Data absolute consistency is compromised but relative
consistency is maintained. Note that we can consider weak ACS as a special case of RCS with
a zero Δ. Figure 3 illustrates the three correctness criteria, namely, ACS, weak ACS, and RCS.
5 Transaction Scheduling and Consistency Enforcement
In this section we discuss different policies to schedule updates, recomputations, and application
transactions to meet the different levels of temporal consistency requirements. As we have
argued, data timeliness can best be maintained if updates and recomputations are given higher
priorities than application transactions. We call this scheduling policy URT (for update first,
recomputation second, transaction last). On the other hand, the On-Demand (OD) strategy [2],
with which updates and recomputations are executed upon transactions' requests, can better
protect transaction timeliness. We will therefore focus on these two scheduling policies and
compare their performance under the different temporal consistency requirements. Later on,
we will discuss how URT and OD can be combined into the OD-H policy. In simple terms, OD-
H switches between URT and OD depending on whether application transactions are running
in the system. We will show that OD-H performs better than URT and OD in Section 7. In
these policies, we assume that the relative priorities among application transactions are set
using the traditional earliest-deadline-first priority assignment. We start with a brief reminder
of the characteristics of the three types of activities.
Updates. We assume that updates arrive as a single stream. Under the URT policy, there
is only one update process in the system executing the updates in a FCFS manner. For OD,
there could be multiple update activities running concurrently: one from the arrival of a new
update, and others triggered by application transactions. We distinguish the latter from the
former by labeling them "On-demand updates" (or OD-updates for short).
Recomputations. When an update arrives, it spawns recomputations. Under URT, we assume
that recomputation batching is employed to reduce the system's workload [3]. With
batching, a triggered recomputation goes to sleep for a short while during which other newly
triggered instances of the same recomputation are ignored. Under OD, recomputations are
only executed upon transactions' requests, and hence batching is not applied. To ensure temporal
consistency, however, a recomputation induced by an update may have to perform some
book-keeping processing, even though the real recomputation process is not executed immedi-
ately. We distinguish the recomputations that are triggered on-demand by transactions from
those book-keeping recomputation activities by labeling them "On-demand recomputations"
(or OD-recoms for short).
Application Transactions. Finally, we assume that application transactions are associated
with firm deadlines. A tardy transaction is useless and thus should be aborted by the system.
Scheduling involves "prioritizing" the three activities with respect to their accesses to the
CPU and data. We assume that data accesses are controlled by a lock manager employing
the HP-2PL protocol (High Priority Two Phase Locking) [1]. Under HP-2PL, a lock holder is
aborted if it conflicts with a lock requester that has a higher priority than the holder. CPU
scheduling is more complicated due to the various batching/on-demand policies employed. We
now discuss the scheduling procedure for each activity under four scenarios. These scenarios
correspond to the use of the URT/OD policy in an ACS/RCS. (We consider a WACS as a
special case of an RCS and hence do not explicitly discuss it in this section.)
5.1 Policies for ensuring absolute consistency
As defined in the last section, an AC system requires that all items read by a transaction be fresh
and relatively consistent up to the transaction's commit time. It is the toughest consistency
requirement for data timeliness.
5.1.1 URT
Ensuring absolute consistency under URT represents the simplest case among the four sce-
narios. Since the update process and recomputations have higher priorities than application
transactions, in general, no transactions can be executed unless all outstanding updates and recomputations
are done. The only exception occurs when a recomputation is forced-delayed (for
batching). In this case the view to be updated by the recomputation is temporarily outdated.
To ensure that no transactions read the outdated view, the recomputation should issue a write
lock on the view once it is spawned, before it goes to sleep. Since transactions are given the
lowest priorities, an HP-2PL lock manager is sufficient to ensure that a transaction is restarted
(and thus cannot commit) if any data item (base data or view) in the transaction's read set is
invalidated by the arrival of a new update or recomputation.
5.1.2 OD
The idea of On-Demand is to defer most of the work on updates and recomputations so that
application transactions get a bigger share of the CPU cycles. To implement OD, the system
needs an On-Demand Manager (ODM) to keep track of the unapplied updates and recom-
putations. Conceptually, the ODM maintains a set of data items x (base or view) for which
unapplied updates or recomputations exist (we call this set the unapplied set). For each such
x, the ODM associates with it the unapplied update/recomputation, and an OD bit signifying
whether an OD-update/OD-recom on x is currently executing. There are five types of activities
in an OD system, namely, update arrival, recomputation arrival, OD-update, OD-recom, and
application transaction. We list the procedure for handling each type of event below (a sketch of
the ODM bookkeeping follows the list):
• On an update or recomputation arrival. Newly arrived updates and recomputations have
the highest priorities in the system. 4 An update/recomputation P on a base/view item
x is first sent to the OD Manager. The ODM checks if x is in the unapplied set. If
not, x is added to the set with P associated with it, and a write lock on x is requested 5 .
Otherwise, the OD bit is checked. If the OD bit is "off", the ODM simply associates P
with x (essentially replacing the old unapplied update/recomputation by P); if the OD
bit is "on", it means that an OD-update/OD-recom on x is currently executing. The
OD Manager aborts the running OD-update/OD-recom and releases P for execution. In
the case of an update arrival, any view that is based on x will have its corresponding
recomputation spawned as a new arrival.
4 Newly arrived updates and recomputations are handled in a FCFS manner.
5 The write lock is set to ensure AC, since any running transaction that has read (an outdated) x will be
restarted due to lock conflict.
• On an application transaction read request. Before a transaction reads a data item x, the
read request is first sent to the OD Manager. The ODM checks if x is in the unapplied
set. If so, and if the OD bit is "on" (i.e., there is an OD-update/OD-recom being run),
the transaction waits; otherwise, the ODM sets the OD bit "on" and releases the OD-
update/OD-recom associated with x. The OD-update/OD-recom inherits the priority of
the reading transaction.
• On the release of an OD-update/OD-recom. An OD-update/OD-recom executes as a
usual update or recomputation transaction. When it finishes, however, the OD Manager
is notified to remove the updated item from the unapplied set.
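The following sketch summarizes the ODM bookkeeping for an ACS (a simplification under our own names; locking, view fan-out, and the scheduler are only indicated in comments):

class OnDemandManager:
    # Tracks unapplied updates/recomputations per item (ACS version).
    def __init__(self):
        self.unapplied = {}  # item -> pending update/recomputation
        self.od_bit = {}     # item -> True iff an OD-update/OD-recom is running

    def on_arrival(self, item, op, abort_fn, execute_fn):
        # A new update/recomputation P (= op) on item x (= item) arrives.
        if item not in self.unapplied:
            self.unapplied[item] = op     # also: request a write lock on x
            self.od_bit[item] = False
        elif not self.od_bit[item]:
            self.unapplied[item] = op     # replace the stale pending op
        else:
            abort_fn(item)                # abort the now-useless OD-op ...
            self.unapplied[item] = op
            execute_fn(op)                # ... and release P for execution
        # For an update arrival, each view derived from x would have its
        # recomputation spawned here as a new arrival.

    def on_read(self, item, txn_priority, release_fn):
        # A transaction wants to read x: materialize it on demand first.
        if item in self.unapplied:
            if not self.od_bit[item]:
                self.od_bit[item] = True
                release_fn(self.unapplied[item], txn_priority)  # inherits priority
            return "wait"
        return "proceed"

    def on_od_finish(self, item):
        # The OD-update/OD-recom committed: x leaves the unapplied set.
        self.unapplied.pop(item, None)
        self.od_bit.pop(item, None)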
5.2 Policies for ensuring relative consistency
The major difficulty in an ACS is that an application transaction is easily restarted if some
update/recomputation conflicts with the transaction. An RCS ameliorates this difficulty by
allowing transactions to read slightly outdated (but relatively consistent) data. An RCS is thus
meaningful only if it can maintain multiple versions of a data item; each version records the
data value that is valid within a window of time (its validity interval).
For notational convenience, we use a numeric subscript to enumerate the versions of a data
item. For example, x_i represents the i-th version of the data item x. We define the validity
interval of an item version x_i by VI(x_i) = [LTB(x_i), UTB(x_i)], where LTB and UTB stand for
the lower time bound and the upper time bound of the validity interval respectively. Given a
set of item versions D, we define the validity interval of D as
VI(D) = [max{LTB(x_i) : x_i ∈ D}, min{UTB(x_i) : x_i ∈ D}].
That is, the set of values in D is valid throughout the entire interval VI(D). Also, we denote the
arrival time of an update u by ts(u). Finally, for a recomputation or an application transaction
T, we define its validity interval VI(T) as the time interval such that all values read by T must
be valid within VI(T).
Our RCS needs a Version Manager (VM) to handle the multiple versions of data items. The
function of the Version Manager is twofold. First, it retrieves, given an item x and a validity
interval I , a value of a version of x that is valid within I . Note that if there are multiple updates
on x during the interval I, the Version Manager would have a choice of a valid version. We
defer our discussion of this version selection issue until later. Second, the VM keeps track of the
validity intervals of transactions and the data versions they read. The VM is responsible for
changing a transaction's validity interval if the validity interval of a data version read by the
transaction changes. We will discuss the VI management shortly. Finally, we note that since
every write on a base item or a view generates a new version, no locks need to be set on item
accesses. We will discuss how the "very-old" versions are pruned away to keep the multi-version
database small at the end of this section.
5.2.1 URT
Similar to an ACS, there are three types of activities under URT in an RCS:
• On an update arrival. As mentioned, each version of a data item in an RCS is associated
with a validity interval. When an update u on a data item x arrives, creating a new version
x_i, the validity interval VI(x_i) is set to [ts(u), ∞]. Also, the UTB of the previous version
x_{i−1} is set to ts(u), signifying that the previous version is only valid till the arrival time
of the new update. The Version Manager checks whether there is any running transaction T
that has read the version x_{i−1}. If so, it sets UTB(VI(T)) to ts(u).
• On a recomputation arrival. If an update u spawns a recomputation r on a view item
whose latest version is v_j, the system first sets the UTB of v_j to ts(u). That is, the
version v_j is no longer valid from ts(u) onward. Similar to the case of an update arrival,
the VM updates the validity interval of any running transaction that has read v_j. With
batching, the recomputation r is put to sleep, during which all other recomputations on
v are ignored. A new version v_{j+1} is not computed until r wakes up. During execution, r
will use the newest versions of the data in its read set. The validity interval of r (VI(r))
and that of the new view version (VI(v_{j+1})) are both equal to the intersection of all the
validity intervals of the data items read by r.
• Running an application transaction. Given a transaction T whose start time is ts(T), we
first set its validity interval to [ts(T) − Δ, ts(T)]. 6 When T reads a data item x, it consults the
Version Manager. The VM would select a version x_i for T such that VI(x_i) ∩ VI(T) ≠ ∅.
That is, the version x_i is relatively consistent with the other data already read by T.
VI(T) is then updated to VI(T) ∩ VI(x_i). If the VM cannot find a consistent version
(i.e., VI(x_i) ∩ VI(T) = ∅ for every version x_i of x), T is aborted. Note that the wider VI(T) is,
the more likely it is that the VM is able to find a version of x that is consistent with what T
has already read. Hence, in our study, we always pick the version x_i whose validity interval
has the biggest overlap with that of T (a sketch of this selection rule follows this list).
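A sketch of this selection rule (our own naming; validity intervals are (LTB, UTB) pairs, with float('inf') as an open upper bound):

def intersect(vi_a, vi_b):
    # Intersection of two validity intervals, or None if they are disjoint.
    lo, hi = max(vi_a[0], vi_b[0]), min(vi_a[1], vi_b[1])
    return (lo, hi) if lo <= hi else None

def select_version(versions, vi_t):
    # versions: list of (value, (ltb, utb)) for one item; vi_t: VI(T).
    # Returns (value, updated VI(T)), preferring the version whose VI
    # overlaps VI(T) the most; None means T must be aborted (URT) or an
    # on-demand request must be triggered (OD).
    best = None
    for value, vi in versions:
        common = intersect(vi, vi_t)
        if common and (best is None or common[1] - common[0] > best[2]):
            best = (value, common, common[1] - common[0])
    return (best[0], best[1]) if best else None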
5.2.2 OD
Applying on-demand in an RCS requires both an OD Manager and a Version Manager. The
ODM and the VM serve similar purposes as described previously, with the following modifications:
6 Recall that Δ is the maximum staleness tolerable with reference to a transaction's start time.
• Since multiple versions of data are maintained, the OD Manager keeps, for each base item
x in the unapplied set, a list of unapplied updates of x.
• In an ACS (single version database), an unapplied recomputation to a view item v is
recorded in the ODM so that a transaction that reads v knows that the current database
version of v is invalid. However, in an RCS (multi-version database), the validity intervals
of data items already serve the purpose of identifying the right version. If no such version
can be found in the database, the system knows that an OD-recom has to be triggered.
Therefore, the ODM in an RCS does not maintain unapplied recomputations.
• In an ACS, an OD bit of a data item x is set if there is an OD-update/OD-recom currently
executing to update x. The OD bit is used so that a new update/recomputation arrival
will immediately abort the (useless) OD-update/OD-recom. In an RCS, since multiple
versions of data are kept, it is not necessary to abort the (old but useful) OD-update/OD-
recom. Hence, the OD bits are not used.
• Since different versions of a data item can appear in the database as well as in the
unapplied list, the Version Manager needs to communicate with the OD Manager to
retrieve the right version, either from the database or by triggering an appropriate OD-update
from the unapplied lists.
Here, we summarize the key procedures for handling the various activities in an OD-RCS
system.
• On an update arrival. Newly arrived updates have the highest priorities in the system
and are handled FCFS. An update u on a base item x is sent to the OD Manager. Each
unapplied update is associated with a validity interval. The validity interval of u is set
to [ts(u), ∞]. If there is a previous unapplied update u' on x in the ODM, the UTB of
u' is set to ts(u); otherwise the latest version of x in the database will have its UTB
set to ts(u). Similarly, for any view item v that depends on x, if its latest version in the
database has an open UTB (i.e., ∞), the UTB will be updated to ts(u). The changes to
the data items' UTBs may induce changes to some transactions' validity intervals. The
Version Manager is again responsible for updating the transactions' VIs.
• Running an application transaction. A transaction T with a start time ts(T) has its
validity interval initialized to [ts(T) − Δ, ts(T)]. When T reads a base item x, the VM would
select a version x_i for T that is valid within VI(T). If such a version is unapplied, an
OD-update is triggered by the OD Manager. The OD-update inherits the priority of T. If
T reads a view item v, the VM would select a version v_j for T that is valid within VI(T).
If no such version in the database is found, an OD-recom r to compute v is triggered.
This OD-recom inherits the priority and the validity interval of T, and is processed by
the system in the same way as an application transaction.
5.2.3 Pruning the multi-version database
Our RC system requires a multi-version database and an OD Manager that keeps multiple
versions of updates in the unapplied lists. We remark that it is not necessary that the system
keeps the full history on-line. One way to prune away old versions is to maintain a Virtual
Clock (VC) of the system. We define VC to be the minimum of the start times of all running
transactions minus Δ. Any versions (be they in the database or in the unapplied lists) whose
UTBs are smaller than the virtual clock can be pruned. This is because these versions are not
valid with respect to any transaction's validity interval and thus will never be chosen by the
Version Manager. The virtual clock is updated only on the release or commit of an application
transaction.
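In code, the pruning rule might look as follows (a sketch under our own naming):

def virtual_clock(running_start_times, delta):
    # VC = (minimum start time over all running transactions) - delta.
    # Assumes at least one transaction is running; updated only on the
    # release or commit of an application transaction.
    return min(running_start_times) - delta

def prune_versions(versions, vc):
    # A version whose UTB is below VC cannot fall inside any running
    # transaction's validity interval, so it can be discarded.
    return [(val, (ltb, utb)) for val, (ltb, utb) in versions if utb >= vc]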
5.2.4 A Hybrid Approach
In OD, updates and recomputations are performed only upon transactions' requests. If the
transaction load is low, few OD-updates and OD-recoms are executed. Most of the database
is thus stale. Consequently, an application transaction may have to materialize quite a number
of items it intends to read on-demand. This may cause severe delay to the transaction's
execution and thus a missed deadline. A simple modification to OD is to execute updates and
recomputations while the system is idling, in a way similar to URT, and switch to OD when
transactions arrive. We call this hybrid strategy OD-H.
6 Simulation
To study the performance of the scheduling policies, we simulate an RTDB system with the
characteristics described in Sections 1, 3 and 5. This section describes the specifics of our
simulation model.
Before we proceed to discuss the details of the model, we would like to remark that the
purpose of the simulation experiments is not to study the performance of a specific RTDB
system when it uses URT or On-Demand. Instead, they aim to identify the performance
characteristics of the scheduling policies in meeting the different temporal consistency require-
ments. In practice, an RTDB system can be very complex. Application transactions generated
from the users can be extremely varied, ranging from ones with short computation to ones that
have thousands of operations; recomputations can be simple aggregate functions or ones that
require complex computational analyses. If we model all this complexity, our results will be
obscured by many intricate factors which impair our understanding of the basic tradeoffs of
the scheduling policies. Instead, we chose a relatively simple model that captures the essential
features of the scheduling problem, so that the observations made are more comprehensible.
In our simulation model, we implemented all the necessary components as described in
Section 5. These include a HP-2PL lock manager, an update installer, a disk manager, a
buffer manager, an OD manager (for the On-Demand policy), a version manager (for RCS),
and a transaction manager (which handles priority assignment, transaction aborts and restarts,
recomputation batching, and transaction scheduling). We simulate a disk-based database with
N_b base items and N_d derived items (views). The number of views that a base item derives (i.e.,
its fan-out) is uniformly distributed in the range [F_min, F_max]. Each derived item is derived from
a random set of base items. If the average values of fan-out and fan-in are F_o and F_i respectively,
we have N_b · F_o = N_d · F_i.
We assume the system caches its database accesses with a cache hit rate p_cache_hit.
Updates are generated as a stream of update bursts. Burst arrivals are modeled as Poisson
processes with an arrival rate λ_u. Each burst consists of burst_size updates. The value burst_size
is picked uniformly from the range [BS_min, BS_max]. To model locality, each update would have
a probability of p_sim of triggering the same set of recomputations as those triggered by the
previous update. Under the URT policy, recomputations are batched. A recomputation is
delayed t_FD seconds before execution, during which all instances of the same recomputation
are ignored. Application transactions are generated as another stream of Poisson processes
with an arrival rate λ_t. A transaction consists of a number of read/write operations. Each
database object has an equal probability of being accessed by an operation. Each transaction
performs N_op database operations. Each transaction T is associated with a deadline given by
the following formula:
deadline(T) = ar(T) + slack × ex(T),
where ex(T) is the expected execution time of the transaction 7 , ar(T) is the arrival time of
T, and slack is the slack factor. In the simulation, slack is uniformly chosen from the range
[S_min, S_max].
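A direct transcription of this deadline assignment (a sketch; the function name is ours):

import random

def assign_deadline(ar_t, ex_t, s_min, s_max):
    # deadline(T) = ar(T) + slack * ex(T), slack ~ Uniform[s_min, s_max]
    slack = random.uniform(s_min, s_max)
    return ar_t + slack * ex_t

For instance, a transaction arriving at time 100 with ex(T) = 0.3 seconds and a drawn slack of 5 receives the deadline 101.5.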
The values of the simulation parameters were chosen as reasonable values for a typical
financial application. Where possible, we have performed sensitivity analysis of key parameter
values. The simulator is written in CSIM. Each simulation run (generating one data
point) processed 10,000 update bursts. Table 1 shows the parameter settings of our baseline
experiment. 8
7 Calculated by multiplying the number of operations by the amount of I/O and CPU time taken by each
operation.
8 We chose a relatively small database (3,000 base items) to model "hot items". That is, those data items
that are frequently updated and those that cause recomputations. In practice, the database would have many
other "cold items" as well: those that get updated occasionally and do not trigger recomputations. We have
done experiments modeling "cold items". Since the results show similar conclusion as our simple model, we do
not explicitly model "cold items" in this paper.
We assume a high-end disk, such as Seagate ST39103LC; t_CPU includes the time to perform data locking,
memory accesses, and CPU computation; and we assume transactions perform complex data analysis such as
that performed in a financial expert system.

Description                           Parameter           Value
Update burst arrival rate (/sec)      λ_u                 1.2
Burst size                            [BS_min, BS_max]
Forced delay time (sec)               t_FD                1.0
Update similarity                     p_sim               0.8
Transaction arrival rate (/sec)       λ_t                 2.0
# of operations per transaction       N_op                50
Slack factor                          [S_min, S_max]
Number of base items                  N_b                 3000
Number of derived items               N_d                 300
Fan-out                               [F_min, F_max]
Disk access time (ms)                 t_IO                5.0
CPU time per operation (ms)           t_CPU               1.0
I/O cache hit rate                    p_cache_hit         0.7
Maximum staleness (sec)               Δ                   10.0

Table 1: Baseline settings
7 Results
In this section we present selected results obtained from our simulation experiments. We compare
the performance of the various scheduling policies in an ACS and an RCS based on how
well they can meet transaction deadlines.
To aid our discussion, we use the notation MD^B_A to represent the fraction of missed deadlines
(or miss rate) of scheduling policy A when applied to a B system. For example, MD^{AC}_{OD} = 10%
means that 10% of the transactions miss their deadlines when OD is used in an ACS. Also, in
the graphs presented below, we consistently use solid lines for ACS and dotted lines for RCS.
The three scheduling policies (URT, OD, and OD-H) are associated with different line-point
symbols.
7.1 Absolute Consistent System
Effect of transaction arrival rate In our first experiment, we vary the transaction arrival
rate (λ_t) from 0.5 to 5 and compare the performance of the three scheduling policies (URT, OD,
and OD-H) in an absolute consistent system. Figure 4 shows the result. From the figure, we
see that, for a large range of λ_t (λ_t > 1.0), URT performs the worst among the three, missing
14% to 26% of the deadlines. Three major factors account for URT's high miss rate.
First, since transactions have the lowest priorities, their executions are often blocked by
updates and recomputations (in terms of both CPU and data accesses). This causes severe
delays and thus high miss rates to transactions. We call this factor Low Priority. Second,
under URT with recomputation batching, a recomputation is not immediately executed on
arrival. It is forced to sleep for a short while during which it holds a write lock on the derived
item (say, v) it updates. If a transaction requests item v, it will experience an extended delay
[Figure 4: Miss rate vs λ_t (ACS). Transaction miss rate against the arrival rate of application
transactions, for URT/ACS, OD/ACS, and OD-H/ACS.]
[Figure 5: Miss rate vs λ_u (ACS). Transaction miss rate against the arrival rate of update bursts,
for URT/ACS, OD/ACS, and OD-H/ACS.]
blocked by the sleeping recomputation. We call this factor Batching Wait. Third, in an
ACS, a transaction is restarted by an update or a recomputation whenever a data item that the
transaction has read gets a new value. A restarted transaction loses some of its slack and risks
missing its deadline. Similarly, a recomputation can be restarted by an update if they engage
in a data conflict. Restarting recomputations means adding extra high priority workload to the
system under URT. This intensifies the Low Priority factor which causes missed deadlines. We
call this restart factor Transaction Restart. 9 From our experiment result, we observe that
the average restart rate of transactions due to lock conflicts is about 2% to 3%, while that of
recomputations is about 0.5%. We remark that even though the restart rate of recomputations
is not too high, its effect could be significant, since recomputations are in general numerous
and long.
By using the On-Demand approach, transactions are given their fair share of CPU cycles and
disk services. Hence, OD effectively eliminates the Low Priority factor. Also, recomputations
are executed on-demand, hence Batching Wait does not exist. This results in a smaller miss
rate. In our baseline experiment (Figure 4), we see that MD^{AC}_{OD} is smaller than MD^{AC}_{URT} for
λ_t ≥ 1.0. The improvement (about 5% for large λ_t) is good but is lower than expected. After
all, we just argued that OD removes two of the three adverse factors of URT. Moreover, it is
interesting to see that when the transaction arrival rate is small (λ_t < 1.0), reducing transaction
workload (i.e., reducing λ_t) actually increases MD^{AC}_{OD}.
The reason for the anomaly and the lower-than-expected improvement is that under the
pure OD policy, updates and recomputations are executed only on transaction requests. Hence,
when λ_t is small, the total number of on-demand requests is small. Many database items
are therefore stale. When a transaction executes, quite a few items that it reads are outdated
and thus OD-updates/OD-recoms are triggered. The transaction is blocked waiting for the
on-demand requests to finish. This causes a long response time and thus a high miss rate. As
evidence, Figures 6 and 7 show the numbers of OD-updates and OD-recoms per transaction
respectively. We see that as many as 12 updates and 3.5 recomputations are triggered by (and
9 "Transaction and Recomputation Restart" would be a more precise term. However, we use the shorter form
to save space.
[Figure 6: Number of OD-updates per transaction against the arrival rate of application transactions
(OD/ACS, OD-H/ACS).]
[Figure 7: Number of OD-recoms per transaction against the arrival rate of application transactions
(OD/ACS, OD-H/ACS).]
blocking) an average transaction under the OD policy. We call this adverse factor OD Wait.
In order to improve OD's performance, the database should be kept fresh so that few on-demand
requests are issued. One simple approach is to apply updates and recomputations
(as in URT) when no transactions are present. When a transaction arrives, however, all up-
dates/recomputations are suspended, and the system reverts to on-demand. We call this policy
OD-H. OD-H can thus be considered a hybrid of OD and URT. Figure 4 shows that OD-H
greatly improves the performance of OD. In particular, the anomaly of a higher miss rate at
a lower transaction arrival rate exhibited in OD vanishes in OD-H. The improvement is attributable
to a very small number of on-demand requests (Figures 6 and 7). The effect of OD
Wait is thus relatively mild. The problem of Transaction Restart, however, still exists when
OD-H is applied to an ACS.
Effect of update arrival rate In another experiment, we vary the update arrival rate
(λ_u). Figure 5 shows the result. We see that a larger λ_u causes more missed deadlines under all
the scheduling policies. More updates imply a higher update load and more recomputations.
This directly intensifies the effects of Low Priority, Batching Wait, and Transaction Restart.
Also, a higher update rate causes data items to become stale faster. This worsens the effect of
OD Wait. Hence, all policies suffer. Among the three, MD^{AC}_{URT} increases most rapidly with λ_u,
since it is affected by three factors. On the contrary, OD-H suffers the least, since it is mainly
affected by Transaction Restart only.
Effect of slack Our next experiment tests the sensitivity of the three policies against
transaction slack. Figure 8 shows the miss rates versus the maximum slack S_max. From the
figure we see that when slack is tight (i.e., S_max is small), MD^{AC}_{OD} rises sharply as S_max
decreases.
Recall that OD suffers when a transaction runs into stale data, in which case the transaction
has to wait for some OD requests to finish (OD Wait). It is thus important that a transaction
be given enough slack for it to live through the wait. In other words, OD is very sensitive to
the amount of slack transactions have. In order to improve OD's performance, again, the key
is to keep the database as fresh as possible (e.g., by OD-H). From Figure 8 we see that OD-H
maintains a very small miss rate, and is relatively unrattled even under a small slack situation.
[Figure 8: Miss rate vs S_max (ACS), for URT/ACS, OD/ACS, and OD-H/ACS.]
7.2 Relative Consistent System
Our previous discussion illustrates that in an ACS, URT suffers from three adverse factors,
namely Low Priority, Batching Wait, and Transaction Restart. These three factors lead to a
high MD^{AC}_{URT}. By switching from URT to OD, we eliminate Low Priority and Batching Wait,
but introduce OD Wait. We then show that the hybrid approach, OD-H, can greatly reduce
the effect of OD Wait (see Figures 6 and 7). Hence, the only culprit left to tackle is Transaction
Restart.
As mentioned in Section 5.2, an RCS uses a multi-version database. Each update or re-computation
creates a new data item version, and thus does not cause any write-read conflicts
with transactions. A transaction therefore never gets restarted because of data conflict with
updates/recomputations. The only cases of transaction abort due to data accesses occur under
URT, when the version manager could not find a materialized data version that is consistent
with the VI of a transaction that is requesting an item. From our experiment, we observe that
the chances of such aborts are very small, e.g., only about 0.1% of transactions are aborted
in our baseline experiment under URT. The on-demand strategies would not perform such
aborts, since any data version can be materialized on-demand. As a result, an RCS effectively
eliminates the problem of Transaction Restart.
Figure 9 shows the miss rates of the three scheduling policies in an RCS (dotted lines). For
comparison, the miss rates in an ACS (solid lines) are also shown. Figure 10 magnifies the part
containing the curves for MD^{AC}_{OD-H} and MD^{RC}_{OD-H} for clarity.
From the figures, we see that fewer deadlines are missed in an RCS than in an ACS across
the board. This is because the problem of Transaction Restart is eliminated in an RCS. Among
the three policies, URT registers the biggest improvement. This is because a transaction that
reads a derived item can choose an old, but materialized version. It thus never has to wait for
any sleeping recomputation to wake up and to calculate a new version of the item. Batching
Wait therefore does not exist in an RCS. Hence, two of the three detrimental factors that plague
URT are gone, leading to a much smaller miss rate.
[Figure 9: Miss rate vs λ_t (ACS & RCS), for URT, OD, and OD-H in both systems.]
[Figure 10: Miss rate vs λ_t, showing MD^{AC}_{OD-H} and MD^{RC}_{OD-H}.]
[Figure 11: Number of OD-updates per transaction against the arrival rate of application transactions
(OD/RCS, OD-H/RCS).]
[Figure 12: Number of OD-recoms per transaction against the arrival rate of application transactions
(OD/RCS, OD-H/RCS).]
For OD, we see that the improvement achieved by an RCS is not as big as in the case of
URT. This is because, although Transaction Restart is eliminated, the problem of OD Wait is
not fixed. Figures 11 and 12 show the numbers of OD-updates and OD-recoms per transaction
respectively in an RCS. If we compare the curves in Figures 11 and 12 with those in Figures 6
and 7, we see that, under OD, an average transaction triggers more or less the same number
of OD requests in the two systems. Recall that a transaction would issue an OD request if it
attempts to read a not-yet-materialized data item. In an ACS, each item has only one (the
latest) version. A transaction is forced to issue an OD request if the latest version is not yet
updated. On the other hand, in an RCS, each item has multiple versions. A transaction can
avoid issuing an OD request if it can find a materialized version within the transaction's validity
interval. So in theory, fewer OD requests are issued in an RCS than in an ACS. Unfortunately,
the pure OD policy does not actively perform updates and recomputations. Hence, few of the
data versions are materialized before transactions read them. The effect of OD Wait, therefore,
is not reduced. As we have discussed, the effect of OD Wait is the strongest when
transactions are scarce. From Figure 9, we see that MD^{RC}_{OD} is much higher than MD^{RC}_{URT} when
λ_t is small.
In the last subsection, we explained how OD-H reduces the transaction miss rate by avoiding three
of the four adverse factors faced by URT and OD. Figure 10 shows that the performance of
OD-H can be further improved in an RCS by eliminating Transaction Restart. Essentially,
by applying OD-H to an RCS, the system is rid of all of the adverse factors we discussed.
MD^{RC}_{OD-H} is close to 0 except when λ_t is big. When the transaction arrival rate is high, missed
deadlines are caused mainly by CPU and disk queueing delays. From Figure 10 we see that
the improvement of MD^{RC}_{OD-H} over MD^{AC}_{OD-H} is very significant. For example, for mid-range
values of λ_t, about half of the deadlines missed in an ACS are salvaged in an RCS. The percentage
of saved deadlines by an RCS is even more marked when λ_t is small.
8 Conclusions
In this paper we defined temporal consistency from the perspective of transactions. In an
absolute consistent system, a transaction cannot commit if some data it reads become stale at
the transaction's commit time. We showed that this consistency constraint is very strict. It
often results in a high transaction miss rate. If transactions are allowed to read slightly stale
data, however, the system's performance can be greatly improved through the use of a multi-version
database. We defined a relative consistent system as one in which a transaction reads
relatively consistent data items that are not more than a certain threshold (Δ)
older than the transaction's start time. We argued that a relative consistent system has a higher
potential of meeting transaction deadlines.
We studied three scheduling policies (URT, OD, and OD-H) in a system where three types
of activities (updates, recomputations, and application transactions) are present. We discussed
how the policies are implemented in a real-time database system to ensure absolute consistency
or relative consistency. We showed that an ACS using URT is the easiest to implement. An HP-2PL
lock manager and a simple static priority-driven scheduler suffice. This system, however,
could have a very high transaction miss rate. To improve performance, two techniques were
considered. One is to perform updates and recomputations on-demand, and the other is to relax
the temporal consistency constraint from absolute to relative. Implementing these techniques
adds complexity, though. For example, an on-demand manager is needed
for OD; a version manager is needed for an RCS. We showed that the pure On-Demand strategy
does not perform well in a system where transactions arrive at a low rate and have very tight
deadlines. To improve the pure OD policy, a third technique of combining the benefit of URT
and OD was studied. The resulting scheduling policy, OD-H, is shown to perform much better
than the others.
We carried out an extensive simulation study on the performance of the three scheduling
policies, under both an ACS and an RCS. We identified four major factors that adversely affect
the performance of the policies. These factors are Low Priority, Batching Wait, Transaction
Restart, and OD Wait. Different policies coupled with different consistency systems suffer from
different combinations of the factors. Table 2 summarizes our result. From the performance
                        ACS                  RCS
                    URT   OD   OD-H      URT   OD   OD-H
Low Priority         X                    X
Batching Wait        X
Transaction Restart  X     X    X
OD Wait                    X                   X

Table 2: Factors that cause missed deadlines
study, we showed that OD-H when applied to an RCS results in the smallest miss rate.
--R
Scheduling real-time transactions: a performance evaluation.
Applying update streams in a soft real-time database system
Database support for efficiently maintaining derived data.
On transaction boundaries in active databases: A performance perspective.
A multidatabase system for tracking and retrieval of financial data.
Predictability and consistency in real-time database systems
SSP: A semantics-based protocol for real-time data access
A survey.
Logical modeling of temporal data.
Maintaining temporal consistency: Pessimistic vs. optimistic concurrency control.
Scheduling transactions with temporal constraints: Exploiting data semantics.
On
Kam-Yiu Lam , Tei-Wei Kuo , Ben Kao , Tony S. H. Lee , Reynold Cheng, Evaluation of concurrency control strategies for mixed soft real-time database systems, Information Systems, v.27 n.2, p.123-149, April 2002 | real-time database;temporal consistency;transaction scheduling;updates;view maintenance |
320215 | Randomized fully dynamic graph algorithms with polylogarithmic time per operation. | This paper solves a longstanding open problem in fully dynamic algorithms: We present the first fully dynamic algorithms that maintain connectivity, bipartiteness, and approximate minimum spanning trees in polylogarithmic time per edge insertion or deletion. The algorithms are designed using a new dynamic technique that combines a novel graph decomposition with randomization. They are Las-Vegas type randomized algorithms which use simple data structures and have a small constant factor. Let n denote the number of nodes in the graph. For a sequence of Ω(m_0) operations, where m_0 is the number of edges in the initial graph, the expected time for p updates is O(p log^3 n) (throughout the paper the logarithms are base 2) for connectivity and bipartiteness. The worst-case time for one query is O(log n/log log n). For the k-edge witness problem ("Does the removal of k given edges disconnect the graph?") the expected time for p updates is O(p log^3 n) and the expected time for q queries is O(qk log^3 n). Given a graph with k different weights, the minimum spanning tree can be maintained during a sequence of p updates in expected time O(pk log^3 n). This implies an algorithm to maintain a (1 + ε)-approximation of the minimum spanning tree in expected time O((p log^3 n log U)/ε) for p updates, where the weights of the edges are between 1 and U. | Introduction
In many areas of computer science, graph algorithms play an important role: Problems modeled
by graphs are solved by computing a property of the graph. If the underlying problem instance
changes incrementally, algorithms are needed that quickly compute the property in the modified
graph. Algorithms that make use of previous solutions and, thus, solve the problem faster than
recomputation from scratch are called fully dynamic graph algorithms. To be precise, a fully
dynamic graph algorithm is a data structure that supports the following three operations: (1)
insert an edge e, (2) delete an edge e, and (3) test if the graph fulfills a certain property, e. g. are
two given vertices connected.
* Department of Computer Science, Cornell University, Ithaca, NY. Email: mhr@cs.cornell.edu. Author's Maiden
Name: Monika H. Rauch. This research was supported by an NSF CAREER Award.
† Department of Computer Science, University of Victoria, Victoria, BC. Email: val@csr.uvic.ca. This research
was supported by an NSERC Grant.
1 Throughout the paper the logarithms are base 2.
Previous Work. In recent years a lot of work has been done in fully dynamic algorithms (see [1, 3,
4, 6, 7, 8, 10, 11, 13, 16, 17, 19] for connectivity-related work in undirected graphs). There is also a
large body of work for restricted classes of graphs and for insertions-only algorithms. Currently the
best time bounds for fully dynamic algorithms in undirected n-node graphs are: O(√n) per update
for a minimum spanning forest [3]; O(√n) per update and O(1) per query for connectivity [3];
O(√n log n) per update and O(log^2 n) per query for cycle-equivalence ("Does the removal of the
given 2 edges disconnect the graph?") [11]; and O(√n) per update and O(1) per query for bipartiteness
("Is the graph bipartite?") [3].
There is a lower bound in the cell probe model of Ω(log n/ log log n) on the amortized time per
operation for all these problems, which applies to randomized algorithms [9, 11]. In [1] it is shown
that the average update time of (a variant of) the above connectivity and bipartiteness algorithms
is O(n/√m + log n) if the edges used in updates are chosen uniformly from a given edge set. Thus,
for dense graphs their average performance nearly matches the lower bound.
In planar graphs fully dynamic algorithms for minimum spanning forest and connectivity are
given in [5] that are close to the lower bound: they take time O(log^2 n) per deletion and O(log n)
per insertion and query. However, the constant factor of these algorithms is quite large [5]. Thus,
the following questions were posed as open questions in [4, 5]:
(1) Can the above properties be maintained dynamically in polylogarithmic time in (general)
graphs?
(2) Is the constant factor in the fully dynamic algorithms small such that an efficient implementation
is possible?
New Results. This paper gives a positive answer to both questions. It presents a new technique
for designing fully dynamic algorithms with polylogarithmic time per operation and applies this
technique to the fully dynamic connectivity, bipartiteness, (1+ε)-approximate minimum spanning
trees, and cycle-equivalence problems. The resulting algorithms are Las-Vegas type randomized
algorithms which use simple data structures and have a small constant factor.
For a sequence of Ω(m_0) operations, where m_0 is the number of edges in the initial
graph, the following amortized expected update times and worst-case query times are achieved:
1. connectivity in update time O(log^3 n) and query time O(log n/ log log n);
2. bipartiteness in update time O(log^3 n) and query time O(1);
3. minimum spanning tree of a graph with k different weights in update time O(k log^3 n);
As an immediate consequence of these results we achieve faster fully dynamic algorithms for
the following problems:
1. An algorithm to maintain a (1+ε)-approximation of the minimum spanning tree in expected
time O((p log^3 n log U)/ε) for p updates, where the weights of the edges are between 1 and U.
2. An algorithm for the k-edge witness problem ("does the removal of the given k edges disconnect
the graph?") in update time O(log^3 n) and amortized expected query time O(k log^3 n).
Note that cycle-equivalence is equivalent to the 2-edge witness problem.
3. A fully dynamic algorithm for maintaining a maximal spanning forest decomposition of order
k of a graph in time O(k log^3 n) per update, by keeping k fully dynamic connectivity data
structures.
A maximal spanning forest decomposition of order k is a decomposition of a graph G into k
edge-disjoint spanning forests F_1, ..., F_k such that F_i is a maximal spanning forest of
G \ (F_1 ∪ ... ∪ F_{i−1}). The maximal spanning forest decomposition is interesting since ∪_i F_i is a graph
with O(kn) edges that has the same k-edge connected components as G [15].
Additionally we use the data structures to present simple deterministic algorithms that maintain
minimum spanning trees and connectivity fully dynamically. The amortized time per update for
the minimum spanning tree algorithm is O(m^{1/3} log n), which can be improved to O(n^{1/3} log n) using
the sparsification technique [4]. The amortized time per update for the connectivity algorithm is
O(m^{1/3} log n). Even though these algorithms do not improve on the running time of the best known
algorithms, they are interesting since they present a completely different approach than previous
algorithms and use only simple data structures. Additionally, the connectivity algorithm is the first
fully dynamic algorithm that does not use the sparsification technique and achieves a running time
of less than O(√m).
Main Idea. The new technique is a combination of a novel decomposition of the graph and
randomization. The edges of the graph are partitioned into O(log n) levels such that edges in
highly-connected parts of the graph (where cuts are dense) are on lower levels than those in loosely-
connected parts (where cuts are sparse). For each level i, a spanning forest is maintained for the
graph whose edges are in levels i and below. If a tree edge is deleted at level i, we sample edges on
level i such that with high probability either (1) we find an edge reconnecting the two subtrees or
(2) the cut defined by the deleted edge is too sparse for level i. In Case (1) we found a replacement
edge fast; in Case (2) we copy all edges on the cut to level i + 1 and recurse on level i + 1.
To our knowledge, the only previous use of randomization in fully dynamic algorithms is in
(Monte-Carlo type) approximation algorithms for minimum cuts [12, 14].
This paper is structured as follows: Section 2 gives the fully dynamic connectivity algorithm,
Section 3 presents the results for k-weight minimum spanning trees, 1+ffl-approximate minimum
spanning trees, and bipartiteness. Section 4 and 5 contain the deterministic algorithms.
2 Randomized Connectivity Algorithm
2.1 A Deletions-Only Connectivity Algorithm
Definitions and notation: Let G = (V, E) with |V| = n and |E| = m. We use the convention
that elements in V are referred to as vertices. Let l = ⌈log m⌉. The edges of G are
partitioned into l levels E_1, ..., E_l such that ∪_{1≤i≤l} E_i = E and the E_i are pairwise disjoint. For each
i, we keep a forest F_i of tree edges such that F_i is a spanning forest of (V, ∪_{j≤i} E_j) and
F_i ⊆ F_{i+1}. Thus F = F_l is a spanning tree of G, and edges in E \ F are referred to
as nontree edges. A spanning tree T on level i is a tree of F_i.
All nontree edges incident to vertices in T are stored in a data structure that is described in
more detail below. The weight of T, denoted w(T), is the number of nontree edges incident to the
spanning tree, where edges whose both endpoints lie in the spanning tree are counted twice. The
size of T, denoted s(T), is the number of vertices in T. A tree is smaller than another tree if its
size is no greater than the other's. We say level i is below level i + 1.
2.1.1 The Algorithm
Initially all edges are in E_1, and we compute F_1, which is a spanning tree of G.
When an edge e is deleted, remove e from the E_i containing it. If e is a tree edge, let i be
the level such that e ∈ E_i. Call Replace(e, i).
Replace(e, i):
Let T be the level i tree containing edge e and let T_1 and T_2 be the two subtrees of T that resulted
from the deletion of e, such that s(T_1) ≤ s(T_2).
• Sample: We sample c log^2 m nontree edges of E_i incident to vertices of T_1, for some appropriate
constant c. An edge with both endpoints in T_1 is picked with probability 2/w(T_1) and an
edge with one endpoint in T_1 is picked with probability 1/w(T_1).
• Case 1: Replacement edge found: If one of the sampled edges connects T_1 to T_2, then add it
to all F_j, j ≥ i.
• Case 2: Sampling unsuccessful: If none of the sampled edges connects T_1 and T_2, search all
edges incident to T_1 and determine S = {edges of E_i connecting T_1 and T_2}.
- If |S| > w(T_1)/(2c' log n), choose one element of S and add it to F_j, j ≥ i.
- Otherwise, remove the elements of S from E_i and insert them into E_{i+1}; if S ≠ ∅,
add one of the newly inserted edges to F_j, j > i.
- If S = ∅ and i < l, then call Replace(e, i + 1).
(A toy sketch of one level of this procedure follows.)
We first show that all edges are contained in ∪_{i≤l} E_i, i.e., when Replace(e, l) is called and Case 2
occurs, no edges will be inserted into E_{l+1}. We use this fact to argue that if a replacement edge
exists, it will be found.
Let m_i be the number of edges ever in E_i.
Lemma 2.1 For all smaller trees T_1 on level i, Σ_{T_1} w(T_1) ≤ 2 m_i log n.
Proof: The proof follows [6]. When a tree is split into two trees and an endpoint of an
edge is contained in the smaller tree, the size of the tree which contains that endpoint has
been halved. Thus, over the course of the algorithm, each endpoint of an edge is incident
to a smaller tree at most log n times in a given level, and, for all such trees T_1 on level i,
Σ_{T_1} w(T_1) ≤ 2 m_i log n.
Lemma 2.2 For any i, m_i ≤ m/c'^{i−1}.
Proof: We show the lemma by induction. It clearly holds for i = 1. Assume it holds for
i − 1. By the rule of Case 2, summed over all smaller trees T_1 on level i − 1, at most
Σ_{T_1} w(T_1)/(2c' log n) edges are added to E_i.
By Lemma 2.1, Σ_{T_1} w(T_1) ≤ 2 m_{i−1} log n, where m_{i−1} is the total number of edges ever in level
i − 1. This implies that the total number of edges in E_i is no greater than m_{i−1}/c' ≤ m/c'^{i−1}.
Choosing c' = 2 and observing that edges are never moved to a higher level from a level with
less than 2 log n edges gives the following corollary.
Corollary 2.3 For l = ⌈log m⌉, all edges of E are contained in some E_i, i ≤ l.
The following relationship, which will be useful in the running time analysis, is also evident.
Corollary 2.4 Σ_{i≤l} m_i = O(m).
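This follows from Lemma 2.2 by summing a geometric series (assuming, as above, c' ≥ 2):

  Σ_{i≥1} m_i ≤ Σ_{i≥1} m / c'^{i−1} = m · c'/(c' − 1) ≤ 2m = O(m).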
Theorem 2.5 F_i is a spanning forest of (V, E_1 ∪ ... ∪ E_i), for all i ≤ l.
Proof: Initially, this is true, since we compute F_1 to be a spanning forest of (V, E_1).
Consider the first time it fails: a tree edge e is deleted, and a replacement edge
exists but is not found. By Corollary 2.3, the replacement edge lies in some E_j, j ≤ l.
Let i be the minimum level at which a replacement edge exists. Let e be in E_k. Then
we claim i ≥ k. Assume not. Then let {r, s} be a replacement edge at level i. Since r and s
are connected, there is a path from r to s in F_i. Since e ∈ E_k and i < k, e is not in this path. Hence,
{r, s} is not a replacement edge for e. Thus k ≤ i and Replace(e, i) will eventually be called.
Either a replacement edge will be found by sampling or every edge incident to T_1 will be
examined. We claim that every replacement edge {r, s} is incident to T_1. Suppose it is not.
By assumption, there is a path from r to s in ∪_{j≤i} F_j. If this path includes e then either r
or s is in T_1. If it doesn't include e, then {r, s} forms a cycle with F − e, contradicting the
assumption that {r, s} is a replacement edge.
2.1.3 The Euler Tour Data Structure
In this subsection we present the data structure that we use to implement the algorithm of the
previous section efficiently. We encode an arbitrary tree T with n vertices using a sequence of
symbols, which is generated as follows: Root the tree at an arbitrary vertex. Then call ET (root),
where ET is defined as follows:
ET(x):
  visit x;
  for each child c of x do
    ET(c); visit x.

Each edge of T is visited twice and every degree-d vertex d times, except for the root which is
visited d + 1 times. Each time any vertex u is encountered, we call this an occurrence of the vertex
and denote it by o_u.
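As an illustration, here is a minimal sketch of the ET encoding in Python, assuming the tree is
given as an adjacency list; the function name and the representation are ours, not the paper's:

    def euler_tour(adj, root):
        # Return the ET sequence of vertex occurrences for the tree rooted at root.
        seq = []
        def ET(x, parent):
            seq.append(x)              # visit x
            for c in adj[x]:
                if c != parent:
                    ET(c, x)           # encode the subtree of child c
                    seq.append(x)      # visit x again on returning
        ET(root, None)
        return seq

    # Example: the path 1-2-3 rooted at 1 yields [1, 2, 3, 2, 1].
    print(euler_tour({1: [2], 2: [1, 3], 3: [2]}, 1))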
New encodings for trees resulting from splits and joins of previously encoded trees can easily
be generated. Let ET (T ) be the sequence representing an arbitrary tree T .
Procedures for modifying encodings
1. To delete edge {a, b} from T: Let T_1 and T_2 be the two trees which result, where a ∈ T_1
and b ∈ T_2. Let o_{a1}, o_{b1}, o_{b2}, and o_{a2} be the occurrences encountered in the two traversals
of {a, b}, where o_{a1} < o_{b1} ≤ o_{b2} < o_{a2}. Then ET(T_2) is given by
the interval o_{b1}, ..., o_{b2} of ET(T), and ET(T_1) is given by splicing out of ET(T) the sequence
o_{b1}, ..., o_{a2}.

2. To change the root of T from r to s: Let o_s denote any occurrence of s. Splice out the first
part of the sequence, ending with the occurrence before o_s; remove its first occurrence (o_r),
and tack it on to the end of the sequence, which now begins with o_s. Add a new occurrence
of s to the end.

3. To join two rooted trees T and T' by edge e: Let e = {a, b} with a ∈ T and b ∈ T'. Given
any occurrences o_a and o_b, reroot T' at b, create a new occurrence o_{a_n}, and splice the sequence
ET(T') o_{a_n} into ET(T) immediately after o_a.
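The following Python fragment illustrates Procedures 1-3 on a plain list; in the actual data
structure the sequence lives in a balanced search tree, so each splice costs O(b log n / log b)
rather than O(n). Occurrences are identified here by list positions, and all names are illustrative:

    def cut(seq, b1, a2):
        # Procedure 1: b1 and a2 are the positions of o_b1 and o_a2; since o_a2
        # immediately follows o_b2, seq[b1:a2] is ET(T2), and splicing out
        # seq[b1:a2+1] also removes the now-duplicate occurrence of a.
        return seq[:b1] + seq[a2 + 1:], seq[b1:a2]

    def reroot(seq, s_pos):
        # Procedure 2: rotate so the sequence begins at o_s; drop the old o_r
        # at the front and append a closing occurrence of s.
        if s_pos == 0:
            return list(seq)
        return seq[s_pos:] + seq[1:s_pos] + [seq[s_pos]]

    def link(seq_t, a_pos, seq_t2):
        # Procedure 3: seq_t2 must already be rooted at b; splice it, followed
        # by a new occurrence of a, immediately after the occurrence at a_pos.
        return seq_t[:a_pos + 1] + seq_t2 + [seq_t[a_pos]] + seq_t[a_pos + 1:]

For example, cut([1, 2, 3, 2, 1], 1, 4) returns ([1], [2, 3, 2]), undone by link([1], 0, [2, 3, 2]).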
If the sequence ET(T) is stored in a balanced search tree of degree b and height O(log n / log b),
then one may insert an interval or splice out an interval in time O(b log n / log b), while maintaining
the balance of the tree, and determine if two elements are in the same tree, or if one element
precedes the other in the ordering, in time O(log n / log b).
Aside from lists and arrays, the only data structures used in the connectivity algorithm are
trees represented as sequences which are stored in balanced b-ary search trees. We next describe
these data structures.
Data structures: We have two options for storing the nontree edges: the first is simpler and is
explained here. The second shaves off a factor of log log n from the update time by reducing the
cost of sampling. It is described in the last subsection of this section.
For each spanning tree T on each level i ! l, each occurrence of ET (T ) is stored in a node of
a balanced binary search tree we call the ET(T)-tree. For each tree T on the last level l, ET (T )
is stored in a balanced (log n)-ary search tree. Note that there are no nontree edges on this level.
For each vertex u, arbitrarily choose one occurrence to be the active occurrence of u.
With the active occurrence of each vertex v, we keep an (unordered) list of nontree edges in
level i which are incident to v, stored as an array. Each node in the ET-tree contains the number of
nontree edges and the number of active occurrences stored in its subtree. Thus, the root of ET (T )
contains the weight and size of T .
In addition to storing G and F using adjacency lists, we keep some arrays and lists:
• For each vertex and each level, a pointer to the vertex's active occurrence on that level.
• For each tree edge e, for each level k such that e ∈ F_k, pointers to each of the four (or three, if
an endpoint is a leaf) occurrences associated with its traversal in F_k.
• For each nontree edge, pointers to the locations in the two lists of nontree edges containing
the edge, and the reverse pointers.
• For each level i, a list containing a pointer to each root of an ET(T)-tree, for all spanning trees
T at level i, and for each root a pointer back to the list.
• For each level i, a list of tree edges in E_i, and for each edge a pointer back to its position in
the list.
2.1.4 Implementation
Using the data structures described above, the following operations can be executed on each spanning
tree on each level. Let T be a spanning tree on level i.
• tree(x, i): Return a pointer to ET(T), where T is the spanning tree of level i that contains
vertex x.
• nontree edges(T): Return a list of nontree edges stored in ET(T); each edge is returned once
or twice.
• sample&test(T, i): Randomly select a nontree edge of E_i that has at least one endpoint in T,
where an edge with both endpoints in T is picked with probability 2/w(T) and an edge with
exactly one endpoint in T is picked with probability 1/w(T). Test if exactly one endpoint is
in T, and if so, return the edge.
• insert tree(e, i): Join by e the two trees on level i, each of which contains an endpoint of e.
• delete tree(e, i): Remove e from the tree on level i which contains it.
• insert nontree(e, i): Insert the nontree edge e into E_i.
• delete nontree(e): Delete the nontree edge e.
The following running times are achieved using a binary search tree: tree, sample&test,
insert nontree, delete nontree, delete tree, and insert tree in O(log n), and nontree edges(T) in
O(m' log n), where m' is the number of moved edges. On the last level l, where a (log n)-ary tree
is used, the running time of delete tree and insert tree is increased to O(log² n / log log n) and the
running time of tree is reduced to O(log n / log log n). We describe the implementation details of
these operations.
these operations.
tree(x,i): Follow the pointer to the active occurrence of x at level i. Traverse the path in the
ET (T )-tree from the active occurrence to the root and return a pointer to this root.
nontree edges(T): Traverse all nodes of ET (T ) and output every non-empty list of nontree edges
encountered at a node.
sample&test(T, i): Let T be a level i tree. Pick a random number j between 1 and w(T) and find
the j-th nontree edge {u, v} stored in the ET(T)-tree. If tree(u, l) ≠ tree(v, l) then return the edge.
insert tree(e, i): Determine the active occurrences of the endpoints of e on level i and follow
Procedure 3 for joining two rooted trees, above. Update pointers to the root of the new tree and
the list of tree edges on level i.
delete tree(e, i): Let e = {u, v}. Determine the four occurrences associated with the traversal of
e in the tree on level i which contains e and delete e from it, following Procedure 1, above. Update
pointers to the roots of the new trees, the list of tree edges, and (if necessary) the active occurrences
of u and v.
insert nontree(e,i): Determine the active occurrences of the endpoints of e on level i and add e
to the list of edges stored at them.
delete nontree(e): Follow the pointers to the two locations of e in lists of non-tree edges and
remove e.
Using these functions, the deletions only algorithm can be implemented as follows.
To initialize the data structures: Given a graph G compute a spanning forest of G. Compute
the ET (T ) for each T in the forest, select active occurrences, and set up pointers as described
above. Initially, the set of trees is the same for all levels. Then insert the nontree edges with the
appropriate active occurrences into level 1 and compute the number of nontree edges in the subtree
of each node.
To answer the query "Are x and y connected?": Test if tree(x, l) = tree(y, l).
To update the data structure after a deletion of edge e = {u, v}: If e is a tree edge in E_i, then do
delete tree(e, j) for all j ≥ i and call Replace(e, i).
If e is not a tree edge, execute delete nontree(e).

Replace(e, i):
1. Let T_1 be the smaller of the two level i trees created by the deletion of e.
2. Repeat c log² m times: sample&test(T_1, i); stop as soon as an edge is returned.
Case 1: Replacement edge e' is found:
  delete nontree(e'); insert tree(e', j) for all j ≥ i.
Case 2: Sampling unsuccessful:
  S := ∅;
  for each edge {u, v} ∈ nontree edges(T_1):
    if tree(u, l) ≠ tree(v, l) then add {u, v} to S.
  (Thus S = {edges with exactly one endpoint in T_1 that connect T_1 and T_2}.)
  Case 2.1: |S| ≥ w(T_1)/(2c' log n):
    Select one e' ∈ S; delete nontree(e'); insert tree(e', j) for all j ≥ i.
  Case 2.2: 0 < |S| < w(T_1)/(2c' log n):
    Choose one edge e' ∈ S and remove it from S;
    for every edge e'' ∈ S do delete nontree(e''); insert nontree(e'', i + 1);
    then delete nontree(e'); insert tree(e', j) for all j > i.
  Case 2.3: S = ∅:
    If i < l then Replace(e, i + 1).
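For concreteness, the control flow of Replace can be sketched as follows in Python; ds stands for
a data-structure object providing the operations above, and all helper names (including the
bookkeeping of T_1 across levels) are our assumptions, not part of the paper's interface:

    import math, random

    def replace(T1, i, ds, c, c_prime, l, m, n):
        for _ in range(int(c * math.log2(max(m, 2)) ** 2)):      # Sample step
            e = ds.sample_and_test(T1, i)
            if e is not None:                                    # Case 1
                ds.delete_nontree(e)
                for j in range(i, l + 1):
                    ds.insert_tree(e, j)
                return
        # Case 2: gather and test every nontree edge incident to T1.
        S = [e for e in ds.nontree_edges(T1)
             if ds.tree(e[0], l) is not ds.tree(e[1], l)]
        if len(S) >= ds.weight(T1) / (2 * c_prime * math.log2(n)):    # Case 2.1
            e = random.choice(S)
            ds.delete_nontree(e)
            for j in range(i, l + 1):
                ds.insert_tree(e, j)
        elif S:                                                       # Case 2.2
            e = S.pop()
            for e2 in S:                      # move the rest of the sparse cut up
                ds.delete_nontree(e2)
                ds.insert_nontree(e2, i + 1)
            ds.delete_nontree(e)
            for j in range(i + 1, l + 1):
                ds.insert_tree(e, j)
        elif i < l:                                                   # Case 2.3
            # smaller_tree is an assumed helper returning T1's tree at level i+1
            replace(ds.smaller_tree(T1, i + 1), i + 1, ds, c, c_prime, l, m, n)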
2.1.5 Analysis of Running Time
We show that the amortized cost per deletion is O(log³ n) if there are m deletions.
In all cases where a replacement edge is found, O(log n) insert tree operations are executed,
costing O(log² n). In addition:
Case 1: Sampling is successful. The cost of sample&test is O(log n) and this is repeated
O(log² n) times, for a total of O(log³ n).
Case 2: Sampling is not successful, or w(T_1) is small. We refer to the execution of nontree edges(T_1)
and the testing of each edge as the cost of gathering and testing the nontree edges. The first operation
costs O(log n) per nontree edge and the second is O(log n / log log n) per nontree edge, for a
total cost of O(w(T_1) log n). Now there are three possible cases.
Case 2.1: |S| ≥ w(T_1)/(2c' log n). If w(T_1) = O(log² n), we charge the cost of gathering and
testing to the delete operation. Otherwise, the probability of this subcase occurring is
O(1/w(T_1)) for a suitable choice of c, and the total cost of this case is O(w(T_1) log n). Thus
this contributes an expected cost of O(log n) per operation.
Cases 2.2 and 2.3: |S| < w(T_1)/(2c' log n). Each delete nontree, insert nontree, and tree costs
O(log n), for a total cost of O(w(T_1) log n). In this case tree(u, i) and tree(v, i) are not reconnected.
Note that only edges incident to the smaller tree T_1 are gathered and tested. Thus, over the course
of the algorithm, the cost incurred in these cases on any level i is O(Σ w(T_1) log n), where the sum
is taken over all smaller trees T_1 on level i. From Lemma 2.1, we have Σ w(T_1) ≤ 2 m_i log n.
From Corollary 2.4, Σ_{i≤l} m_i = O(m), giving a total cost of O(m log² n).
2.2 A Fully Dynamic Connectivity Algorithm
2.2.1 The Algorithm
Next we also consider insertions. When an edge {u, v} is inserted into G, add {u, v} to E_l. If u
and v were not previously connected in G, i.e., tree(u, l) ≠ tree(v, l), add {u, v} to F_l.
We let the number of levels l = ⌈2 log n⌉. A rebuild of the data structure is executed periodically.
A rebuild of level i, for i ≥ 1, is done by a move edges(i) operation, which moves all tree and nontree
edges in E_j for j > i to E_i. Also, for each j > i, all tree edges in E_j are inserted into all F_k,
i ≤ k < j. Note that after a rebuild at level i, E_j for j > i contains no edges, and F_i = F_{i+1} = ... = F_l,
i.e., the spanning trees on levels j ≥ i span the connected components of G.
After each insertion, we increment I, the number of insertions modulo 2^⌈2 log n⌉ since the start
of the algorithm. Let j be the greatest integer k such that 2^k divides I. After an edge is inserted, a rebuild
of level l − j is executed. If we represent I as a binary counter whose bits are b_0, ..., b_l, where
b_0 is the most significant bit, then a rebuild of level i occurs each time the i-th bit flips to 1.
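The level to rebuild can be read off the binary counter directly; a small Python sketch with
illustrative names:

    def rebuild_level(I, l):
        # j = largest k such that 2^k divides I, i.e., the lowest set bit of I
        j = (I & -I).bit_length() - 1
        return l - j

    # With l = 4, insertions I = 1, 2, 3, 4, 5, 6, 7, 8, ... trigger rebuilds of
    # levels 4, 3, 4, 2, 4, 3, 4, 1, ...: level i is rebuilt every 2^(l-i)
    # insertions, exactly when bit b_i flips to 1.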
2.2.2 Proof of Correctness
The proof of correctness is the same as the one for the deletions-only case, except that we must
set the value of l to ⌈2 log n⌉ and alter the argument which shows that all edges are contained in ∪_{i≤l} E_i.
We define an i-period to be any period beginning right after a rebuild of a level j ≤ i, or at the
start of the algorithm, and ending right after the next rebuild of a level j' ≤ i. I.e., an i-period starts
right after the flip to 1 of some bit b_j, j ≤ i, or at the start of the algorithm, and ends with the next such
flip. Note that there are two types of i-periods: (A) one that begins immediately after a rebuild of a level
j < i, i.e., all edges from E_i are moved to some E_j with j < i (b_j flips to 1); (B) one that begins
immediately after a rebuild of level i, i.e., all edges from ∪_{j>i} E_j are moved into E_i (b_i flips to 1).
It is easy to see that an (i−1)-period consists of two parts, one type A i-period followed by one
type B i-period, since any flip to 1 by some bit b_j, j < i, must be followed by a flip of b_i to 1
before a second bit b_j', j' < i, flips to 1.
Theorem 2.6 Let a_i be the number of edges in E_i during an i-period. Then a_i < n²/2^{i−1}.
Proof: By a proof analogous to that of Lemma 2.1 we have:
Lemma 2.7 For all smaller trees T_1 on level i which were searched between two consecutive
rebuilds of levels j ≤ i, Σ w(T_1) ≤ 2 a_i log n.
We now bound a_i. Note that we may restrict our attention to the edges which are moved
during any one (i−1)-period, since E_i is empty at the start of each (i−1)-period.
Thus, an edge is in E_i either because it was passed up from E_{i−1} during one (i−1)-period
or moved there in a single rebuild of level i.
Since every E_j with j ≥ i was empty at the start of the (i−1)-period, any edge moved to E_i during
the rebuild of level i was passed up from E_{i−1} to E_i and then above E_i during the type
A i-period (i.e., the first part of the (i−1)-period) or was inserted into G during the type A
i-period. We have

  a_i ≤ h_{i−1} + b_i,

where h_{i−1} is the maximum number of edges passed up from E_{i−1} to E_i during a single
(i−1)-period (i.e., an A i-period followed by a B i-period) and b_i is the number of edges
inserted into G during a single i-period.
The number of edges inserted into G during an i-period is at most 2^{l−i−1}; hence b_i ≤ 2^{l−i−1}.
To bound h_{i−1}, we use Lemma 2.7 to bound Σ w(T_1), summed over all smaller trees T_1
which are searched on level i − 1 during an (i−1)-period. As in the proof of Lemma 2.2, we
can now bound h_{i−1} ≤ a_{i−1}/c' for a suitable choice of c'.
Substituting for h and b yields

  a_i ≤ a_{i−1}/c' + 2^{l−i−1}.

Choosing l = ⌈2 log n⌉ and noting that a_1 < n², an induction proof shows that a_i < n²/2^{i−1}.
This implies a_l < 2, and edges are never passed up to E_{l+1}. We have:
Corollary 2.8 For l = ⌈2 log n⌉, all edges of E are contained in some E_i, i ≤ l.
2.2.3 Analysis of the Running Time.
To analyze the running time, note that the analysis of Case 1 and Case 2.1, above, are not affected
by the rebuilds. However, (1) we have to bound the cost incurred during an insertion, i.e. the cost
of the operation move edges and (2) in Case 2.2 and 2.3, the argument that O(m i log n) edges are
gathered and tested (using nontree edges and tree) on level i during the course of the algorithm
must be modified.
The cost of (1), i.e., the cost of executing move edges(i), is the cost of moving each tree edge
and each nontree edge in ∪_{j>i} E_j to E_i, and the cost of updating all the F_k, i ≤ k < j.
To analyze the first part, we note that each move of an edge into E_i costs O(log n) per edge.
The number of edges moved is no greater than Σ_{j>i} a_j = O(n²/2^i).
Thus the cost incurred is O(n² log n / 2^i).
The cost of inserting one tree edge into any given level is O(log n) per edge. A tree edge is
added only once into a level, since a tree edge is never passed up. Thus this cost may be charged
to the edge for a total cost of O(log² n) per edge.
We analyze the cost of (2).
For a level i, if fewer than 2^{l−i} edges have been inserted since the start of the algorithm, then no
rebuilds have occurred at level i or lower, and the analysis for level i of the deletions-only argument
holds. That is, the cost incurred on level i is bounded above by O(m_0 log² n / c'^{i−1}), where m_0 is the
number of edges in the initial graph.
Applying Lemma 2.7, we conclude that the cost for the gathering and testing of edges from all
smaller trees T_1 on level i during an i-period is O(a_i log² n).
For level l, we note that since there are O(1) edges in E_l at any given time, and since the cost is
O(log n) per edge for gathering and testing, the total cost for each instance of gathering and testing
on this level is O(log n).
We use a potential function argument to charge the cost of (1) and (2) to the insertions. Each
new insertion contributes c'' log² n tokens toward the bank account of each level, for a total of
Θ(log³ n) tokens. Since an i-period occurs every n²/2^i insertions, the tokens contributed by these
insertions can pay for the O(n² log² n / 2^i) cost of the gathering and testing on level i during the
i-period and the O(n² log n / 2^i) cost of move edges(i) incurred at most once during the i-period.
2.3 Improvements
In this section, we present a simple "trick" which reduces the cost of testing a nontree edge
incident to a smaller tree T_1 to O(1), so that the total cost of gathering and testing the edges
incident to T_1 is O(1) per edge.
2.3.1 Constant Time for Gathering and Testing
As noted above, since the nontree edges incident to an ET-tree are available as a list, the time
needed to retrieve these edges is O(1) per edge. One can also test each nontree edge in O(1) time,
i.e., determine the set S of all nontree edges which have only one endpoint in T_1, by running through
the list three times. For each edge in the list, initialize the corresponding entry of an n × n array. Then use
these entries to count the number of times each edge appears in the list. Traverse the list again
and add to S any edge whose count is one.
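In place of the n × n array one can equally use a hash table; the following Python sketch shows
the counting idea (the gathered list is assumed to contain each both-endpoint edge twice, once per
endpoint, as described above):

    from collections import defaultdict

    def one_endpoint_edges(gathered):
        count = defaultdict(int)
        for e in gathered:                       # count appearances per edge
            count[frozenset(e)] += 1
        return [e for e in gathered if count[frozenset(e)] == 1]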
2.3.2 Constant Query and Test Time for Deletions-Only Algorithms
We note that determining whether two vertices i and j are in the same component, i.e., whether
tree(i, l) = tree(j, l), can be sped up to O(1) for the deletions-only algorithm. A component is split when
Replace(e, l) is called and no replacement edge is found. In that case, label the nodes of the smaller
component T_1 with a new label. The cost of doing so is proportional to the size of T_1. Over the
course of the algorithm, the cost is O(n log n) since each node appears in a smaller component no
more than log n times. Then i and j are connected iff they have the same label.
This improvement does not affect the asymptotic running time of the randomized connectivity
algorithm, as that is dominated by the cost of the random sampling. However it will be used in
the deterministic algorithms presented later in the paper.
3 Randomized Algorithms for Other Dynamic Graph Problems
In this section, we show that some dynamic graph problems have polylogarithmic expected update
time by reducing these problems to connectivity, and we give an alternative algorithm for
maintaining the minimum spanning tree.
3.1 A k-Weight Minimum Spanning Tree Algorithm
The k-weight minimum spanning tree problem is to maintain a minimum spanning forest in a
dynamic graph with no more than k different edge weights at any given time.
Let G = (V, E) be the initial graph. Compute the minimum spanning forest F of G. We define
a sequence of subgraphs G_1, ..., G_k, where G_i = (V, E_i) and E_i =
{edges with weight of rank ≤ i} ∪ F. If initially there are l < k distinct edge weights, then
the remaining k − l subgraphs are called "extras". The spanning forests of each G_i are maintained as
in the connectivity algorithm. These forests and F are also stored in dynamic trees. The subgraphs
are ordered by the weight of their edge sets and stored in a balanced binary tree.
To insert edge {u, v} into G: determine if u and v are connected in F. If so, find the maximum
cost edge e on the path from u to v in F. If the weight of e is greater than the weight of {u, v},
replace e in F by {u, v}. If u and v were not previously connected, add {u, v} to F. Otherwise, just
add {u, v} to E_j where j is the rank of the weight of {u, v}. If {u, v} is the only edge of its weight
in G, then create a new subgraph by adding {u, v} to an extra and inserting it into the ordering of
the other G_i. Update the E_i to reflect the changes to F.
To delete edge {u, v} from G: Delete {u, v} from all graphs containing it. To update F: If
{u, v} had been in F, then a tree T in F is divided into two components. Find the minimum i
such that u and v are connected in G_i, using binary search on the list of subgraphs. Now, search
the path from u to v in the spanning forest of G_i to find an edge crossing the cut in T. Use binary
search: Let x be a midpoint of the path. Recurse on the portion of the path between u and x if u
and x are not connected in F; else recurse on the path between x and v.
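Schematically, the binary search looks as follows; the helpers adjacent, path_midpoint, and
connected are stand-ins for operations on the dynamic trees of G_i and F:

    def find_crossing_edge(u, v, forest_i, F):
        # Invariant: u and v lie in different components of F after the deletion.
        while not forest_i.adjacent(u, v):
            x = forest_i.path_midpoint(u, v)
            if F.connected(u, x):
                u = x        # recurse on the portion between x and v
            else:
                v = x        # recurse on the portion between u and x
        return (u, v)        # a single edge whose endpoints straddle the cut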
Correctness: When an edge {u, v} is inserted and its cost is not less than the cost of the
maximum cost edge on the tree path between u and v, then the minimum spanning forest F is
unchanged. If the cost of {u, v} is less than the cost of the maximum cost edge e' on the tree path
between u and v, then replacing e' by {u, v} decreases the cost of the minimum spanning tree by
the maximum possible amount and thus gives the minimum spanning tree of G ∪ {u, v}.
Analysis of Running Time: The algorithm (1) determines how F has to be changed, and (2)
updates the data structures. (1) After an insertion, the maximum cost edge on the tree path between
u and v can be determined in time O(log n) using the dynamic tree data structure of F. After a
deletion, it takes time O(log² n / log log n) to find the minimum i such that u and v are connected
in G_i, since a connectivity query on a level takes time O(log n / log log n). The midpoint of the
tree path between u and v in G_i can be determined in time O(log n) using the dynamic tree data
structure of the spanning tree of G_i. The algorithm recurses at most log n times to determine the
replacement edge for {u, v}, for a total of O(log² n).
(2) The insertion or deletion of {u, v} into E_i, where i is the rank of the weight of {u, v}, takes
amortized expected time O(log³ n). If F changes, one additional insertion and deletion is executed
in every E_j. For each update there are a constant number of operations in the dynamic tree of F
and of the spanning tree of every E_j, each costing O(log n). Thus, the amortized expected update
time is O(k log³ n).
3.2 A (1+ε)-Approximate Minimum Spanning Tree Algorithm
Given a graph with weights between 1 and U, a (1+ε)-approximation of the minimum spanning tree is
a spanning tree whose weight is within a factor of 1+ε of the weight of the optimal. The problem of
maintaining a (1+ε)-approximation is easily seen to be reducible to the k-weight MST problem, where
a weight has rank i if it falls in the interval [(1+ε)^i, (1+ε)^{i+1}), so that k = O(log U / ε).
This yields an algorithm with amortized cost O((log³ n log U)/(ε log log n)).
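A weight can be mapped to its rank in O(1) time; for instance, in Python (assuming weights in
[1, U], so ranks range over 0, ..., O(log U / ε)):

    import math

    def weight_rank(w, eps):
        # w has rank i iff (1+eps)^i <= w < (1+eps)^(i+1)
        return int(math.log(w) / math.log(1.0 + eps))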
3.3 A Bipartiteness Algorithm
The bipartite graph problem is to answer the query "Is G bipartite?" in O(1) time, where G is a
dynamic graph.
We reduce this problem to the 2-weight minimum spanning tree problem. We use the fact that
a graph G is bipartite iff given any spanning forest F of G, each nontree edge forms an even cycle
with F . Call these edges "even edges" and the remaining edges "odd". We also use the fact that
if an edge e in F is replaced with an even edge then the set of even edges is preserved. Let C be
the cut in F induced by removing e. If e is replaced with an odd edge then for each nontree edge
e 0 which crosses C the parity of e 0 changes. We replace an edge by an odd replacement edge only
if there does not exists an even replacement edge. Thus, the parity of an even edge never changed.
F is stored as a dynamic tree.
Our algorithm is: generate a spanning forest F of the initial graph G. All tree and even nontree
edges have weight 0. Odd edges have weight 1. If no edges have weight 1, then the graph is
bipartite.
When an edge is inserted, determine if it is odd or even by using the dynamic tree data structure
of F , and give it weight 1 or 0 accordingly.
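The parity test amounts to one path query; a sketch, where path_length and connected are
assumed dynamic-tree operations:

    def is_even(u, v, F):
        if not F.connected(u, v):
            return True                       # becomes a tree edge, weight 0
        # the cycle closed by {u, v} has length path_length + 1,
        # so an odd tree path means an even cycle
        return F.path_length(u, v) % 2 == 1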
When an edge is deleted, if it is a tree edge and it is replaced with an odd edge (because
there are no weight-0 replacements), remove the odd edge and find its next replacement, remove
that, etc., until there are no more replacements. Then relabel the replacement edges as even and
add them back to G.
Correctness: When an edge is inserted, the algorithm determines if it is even or odd. If an edge
is deleted, we replace it by an even edge if possible. This does not affect the parity of the remaining
edges. If no even replacement edge exists, but an odd replacement edge does, the parity of every edge
on the cut changes. However, since no even edge exists on the cut, it suffices to turn all odd edges
on the cut into even edges.
Analysis of Running Time: An even edge never becomes odd. Thus, the weight of an edge
changes at most once, which shows that an insertion of an edge causes the edge to be added to the
data structure at most once with weight 1 and at most once with weight 0. The deletion of an edge
leads to the removal of the edge from the data structure. Thus, amortized expected update time is
O(log 3 n).
4 A Deterministic Minimum Spanning Tree Algorithm
In this section we present a fully dynamic deterministic minimum spanning tree algorithm with
amortized time O(√m log n) per operation, where m is the current number of edges in G. We
note that the technique of sparsification [4], when applied on top of this algorithm, will yield an
algorithm with O(√n log n) amortized time per operation.
4.1 A Deletions-Only Minimum Spanning Tree Algorithm
The data structures which we use are very similar to those of the randomized connectivity algo-
rithm. Instead of random sampling, we always exhaustively search the edges incident to the smaller
component. Edges are stored in √m levels according to their weight and do not move between
levels. A deletions-only data structure is kept for "old" edges. Newly inserted edges are kept in
a separate level. Periodically, a rebuild is executed in which the deletions-only data structure is
revised to include the newly inserted edges.
Let G = (V, E) with |V| = n and |E| = m. Rank the edges by weight. The edges of G are
partitioned into l = √m levels E_1, ..., E_l such that E_i for i ≤ l contains all edges with rank j, where
(i − 1)√m < j ≤ i√m. Compute a minimum spanning forest F of G. For each i, we keep a forest
F_i of tree edges such that F_i is a minimum spanning forest of (V, E_1 ∪ ... ∪ E_i).
Note that F_i \ F_{i−1} ⊆ E_i and F_l = F.
4.1.1 The Algorithm
To update the data structure after a deletion of edge e = {u, v}: If e is not a tree edge,
then do delete nontree(e). If e is a tree edge in E_i, then do delete tree(e, j) for all j ≥ i and call
Replace(u, v, i).

Replace(u, v, i):
If i > l, stop: there is no replacement edge;
else
  Gather and test the nontree edges of E_i incident to T_1.
  Case 1: Replacement edge is found:
    Let e' be a minimum weight replacement edge; delete nontree(e'); insert tree(e', j) for all j ≥ i.
  Case 2: No replacement edge is found:
    Replace(u, v, i + 1).
4.1.2 Implementation and Analysis of Running Time
We use the same data structure as in the randomized algorithm, storing all nontree edges in E_i in
the appropriate ET-tree on level i. Each ET-tree is stored as a binary tree; all nontree edges are
stored in NT-lists, so that gathering and testing of all edges incident to T_1 may be done in O(w(T_1))
time. (See Improvements, Section 2.3.1, above.)
When a tree edge is deleted and a replacement edge is sought, the total cost of gathering and
testing of nontree edges in a given level i is charged to the deletion if a replacement edge is found
in E_i, or to the level if not. We consider the latter case first. Then T_1 becomes a new component
of F_i. The cost of gathering and testing edges incident to T_1 is O(w(T_1)). On a given level,
Σ w(T_1) = O(√m log n) (see Lemma 2.1). Over all levels, the total cost is O(m log n).
The cost of gathering and testing in the level in which the replacement edge is found is no
greater than the cost of gathering and testing every edge in that level, or O(√m).
In addition, for each tree edge, no more than √m insert tree operations are executed, once per
level, for a total cost of O(√m log n). The cost of delete nontree is O(log n).
The cost is thus O(√m log n) per edge deletion plus a total cost of O(m log n) during the course
of the algorithm.
4.2 A Fully Dynamic Minimum Spanning Tree Algorithm
4.2.1 The Algorithm
A partial rebuild of the data structure is executed after every √m_0 update operations, where m_0 is the number
of edges which were in the graph at the last rebuild, or initially, if there has been no rebuild. These
edges are referred to as old edges and, if they are still in G, are stored in the deletions-only data
structure described above. All newly inserted edges are stored in a separate set E_{l+1}. Then F_l
is a minimum spanning tree consisting only of the old edges, i.e., of G(V, ∪_{i≤l} E_i). Let F be a
minimum spanning tree of G(V, E), i.e., of old and new edges. We store F in a dynamic tree data
structure. For each E_i with i ≤ l we keep a list of its edges ordered by rank.
To rebuild: When the number of updates (insertions or deletions) since the last rebuild or the
start of the algorithm reaches √m_0, m_0 is reset to the current number of edges in the graph.
Insert each edge in E_{l+1} into an appropriate E_i, for i ≤ l, and redistribute the old edges among the
E_i so that for i ≤ l, E_i contains all edges with rank j, where (i − 1)√m_0 < j ≤ i√m_0. All edges are
now "old edges". If a tree edge of level i moves to level i' < i, then add the edge to all levels
i' ≤ j < i; if it moves to level i' > i, remove the edge from all levels i ≤ j < i'.
To delete an edge e from G: If e is an old tree edge, then update the deletions-only data
structure. If e does not belong to F, stop. Otherwise, delete e = {u, v} from F and find the
minimum cost edge e' with exactly one endpoint in T_1 in E_{l+1}. Add to F the smaller-cost edge of
e' and of the replacement edge in the deletions-only data structure (if existent).
To insert an edge e to G: Add the edge to E l+1 and test if it replaces an edge of F . That is,
use the dynamic tree to find the heaviest edge in the path between e's endpoints, if there is such a
path, and determine if e is lighter. If yes, modify F accordingly.
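The insertion test uses only standard dynamic-tree operations; a sketch with assumed helper names:

    def insert_edge(u, v, w, F, E_new):
        E_new.add((u, v), w)
        if not F.connected(u, v):
            F.link(u, v, w)                   # e joins two components of F
        else:
            f, wf = F.max_edge_on_path(u, v)  # heaviest edge on the tree path
            if w < wf:
                F.cut(f)                      # e replaces the heaviest edge
                F.link(u, v, w)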
4.2.2 Analysis of Running Time
The cost of an edge insertion is O(log n), since it consists of constant number of operations on the
dynamic tree data structure.
The cost of an edge deletion may include the cost of updating the dynamic tree data structure,
which is O(log n); the cost of testing edges in E_{l+1}, which is O(√m_0 log n); and the cost of updating
the deletions-only data structure. We show how to bound the latter.
The cost involved in the deletions-only data structure between rebuilds is given by the analysis
in Section 4.1.2: O(√m_0 log n) per deletion of a tree edge plus an overall cost of O(m_0 log n)
between rebuilds, which is O(√m_0 log n) per update operation.
We bound next the cost of a rebuild. First we discuss adding the edges to the appropriate
level, then we discuss "rebalancing" the size of a level. Determining the appropriate level for an
edge takes time O(log n) using binary search. Moving a nontree edge from E_{l+1} to E_i with i ≤ l
costs O(log n) per edge, for a total of O(√m_0 log n). Moving a tree edge from E_{l+1} to E_i with
i ≤ l requires the insertion of the edge into all levels j ≥ i and costs O(√m_0 log n) per edge, for a
total of O(m_0 log n). Any edge in E_i, for i ≤ l, is never moved by more than one level during the
"rebalancing", i.e., either into E_{i−1} or E_{i+1}. This costs O(log n) per tree or non-tree edge. Thus
the time of a rebuild is O(m_0 log n).
The cost of a rebuild, when amortized over the √m_0 updates which occur between rebuilds, is
O(√m_0 log n) per update. Let m be the number of edges currently in the graph when an operation
occurs. Since m_0 = Θ(m), the cost of this algorithm per edge deletion or insertion is O(√m log n).
5 A Deterministic Connectivity Algorithm
The above minimum spanning tree algorithm can be easily converted into an algorithm for connectivity
with the same running time. In this section, we give an alternative connectivity algorithm
with O(√n log n) amortized update time without the use of sparsification. The data structure used
here is similar to those used in the previously presented algorithms. There are two important dif-
ferences. As in the previous algorithm, edges are partitioned into levels E_1, E_2, .... Here, each level
contains at most √n edges, and levels are added as needed. That is, initially, ⌈m/√n⌉
levels with √n edges each are created. New edges are inserted into the highest level if it contains
fewer than √n edges, and into a newly created (empty) highest level otherwise. The data structure
as a whole is never rebuilt; new levels are added one at a time. If all edges in a level have been
deleted, the level is discarded. We use the term level i to denote the i-th non-empty level. As in the
previous algorithm, edges do not move between levels.
The F_i are defined as before: For each i, we keep a forest F_i of tree edges such that F_i is
a spanning forest of (V, E_1 ∪ ... ∪ E_i). Here, the F_i
are stored in "relaxed" versions of ET-trees, called RET-trees. Unlike the ET-trees used in the
previous algorithms, RET-trees have the following property: If an edge on level i is deleted and a
replacement edge is found on level j (i < j), then only the RET-trees on levels i, ..., j need
to be updated. This allows us to keep a larger number of levels.
5.1 The RET-Tree Data Structure
Let T be a spanning tree on level i and let C(T) be the tree that is created from T by contracting
every spanning tree on level i − 1 to one node. Each node of C(T) is called a big node. Thus, all
edges of C(T) belong to F_i.
An RET(T)-tree consists of an ET-tree for the sequence ET(C(T)), where the list of edges at
the active occurrence of a big node a is replaced by:
• A list Nodes(a) of all the vertices of G that have been contracted to form a, in the following
order: first all vertices that are incident to edges in E_i, then all other vertices. At each vertex
v of G in Nodes(a) we keep (1) a list of all the edges of F_i incident to v, (2) a list of all the
nontree edges incident to v, (3) the number of nontree edges incident to v, and (4) a pointer
to a.
• The number of edges incident to the vertices in Nodes(a).
In addition, we keep an ordered list of the nonempty levels, stored in a doubly linked list L, and, for
each nonempty level i, an array whose j-th entry is a pointer to the location of the vertex j in the
Nodes-list on level i which contains j.
The operations for RET-trees are the same as those defined for ET-trees, with the omission of
sample&test, and the addition of two more operations defined below:
• big node(u, i): Return the big node on level i that contains vertex u.
• split big node(e, i): The operation assumes that e is a deleted edge of F_i whose
endpoints are (still) contained in the same big node a on level i + 1. Let T_1 and T_2 be the
spanning trees of level i containing the endpoints of e, where T_1 is the smaller of these two
trees. Split a into two big nodes that represent T_1 and T_2, respectively.
To implement big node(u, i): follow the pointer from level i, vertex u, to the Nodes-list on level
i which contains u, and the pointer from there to the big node containing u. Each call to big node
is dominated by the cost of finding level i in L, which takes time O(log n).
To implement split big node(e, i): Let T_1 be the smaller of the two subtrees and let
e = {u, v}. By traversing ET(C(T_1)) on level i, determine the set U of all vertices of G in T_1.
Using big node for one node of U, determine the big node a on level i + 1 that represents all nodes of
U. Remove each node in U from Nodes(a), create a new big node b, and add all nodes of U into a new
(ordered) Nodes(b) list. Then update the number of edges incident to the nodes in Nodes(a) and
in Nodes(b) accordingly. A call to split big node takes time proportional to the number of vertices
in T_1 plus O(log n) for the call to big node.
The implementation of the ET-tree operations need only be modified slightly, and each has a
running time of O(log n). That is, in insert nontree(T ; e) and delete nontree(e) an endpoint of e
may be moved to the beginning or the end of the Nodes-list containing it. The implementation
of tree(x; i), nontree edges(T ), insert tree(e; i), and delete tree(e; i) are unchanged, except for a
constant number of calls to big node.
5.2 The Algorithm
To update the data structure after a deletion of edge e=fu,vg: If e is not a tree edge, execute
delete nontree(e). If e is a tree edge on level i, then delete tree(e; i) and call Replace(u; v; i).
Replace(u, v, i):
If level i exists then:
  Gather and test the nontree edges incident to T_1.
  Case 1: Replacement edge e' is found:
    delete nontree(e');
    insert tree(e', i).
  Case 2: No replacement edge is found:
    If there is a nonempty level above i:
      Let j be the next lowest nonempty level above i;
      split big node(e, j);
      Replace(u, v, j).
To update the data structure after an insertion of edge e = {u, v}: If the number of edges
in the highest level l is √n, then build a data structure for a new level and reset l to this new level. If u
and v are connected, then call insert nontree(e, tree(u, l)); else insert tree(e, l).
5.3 Implementation and Analysis of Running Time
To gather and test nontree edges we store the nontree edges in NT-lists (see Section 2.3.1), so that
gathering and testing of all edges incident to T_1 may be done in time O(w(T_1)), where w(T_1) is the
number of nontree edges incident to T_1.
We show that the amortized time per update operation is O(√n log n). Since building a data
structure for a new level takes time O(n) and occurs every √n insertions, the total time for all
insertions is O(k√n), i.e., we charge O(n/√n) = O(√n) to each insertion. Next we show that the total cost of all
deletions is O((m_0 + k)√n log n), where m_0 is the number of edges in the initial graph and k is the
total number of update operations.
The search for a replacement edge takes time O(w(T_1)). We separately consider the cost of
the search on the last level on which the search occurred (where it either terminated successfully or was
discontinued because there were no more levels) and the cost on the other levels where the search
occurred. The total cost of these last-level searches, over all deletions, is no more than √n per
search, or O((m_0 + k)√n) in total. The total cost of searching on all levels where no replacement edge
is found is O(Σ w(T_1)), summed over all T_1 which are created, on all levels. Applying Lemma 2.1
level by level, we have a total cost of O((m_0 + k) log n).
The cost of split big node can be charged to the nodes in T_1, for a total cost of O(n log n) per
level. Since at most O((m_0 + k)/√n) levels are created during the whole course of the algorithm,
the total cost of split big node is O((m_0 + k)√n log n).
Thus the cost of update operations, when amortized over all updates, is O(√n log n), for
k = Ω(m_0).

6 Acknowledgements
We are thankful to David Alberts for comments on the presentation.
--R
"Average Case Analysis of Dynamic Graph Algo- rithms"
"Main- tenance of a Minimum Spanning Forest in a Dynamic Planar Graph"
"Improved Sparsification"
"Sparsification - A Technique for Speeding up Dynamic Graph Algorithms"
"Separator Based Sparsification for Dynamic Planar Graph Algorithms"
"An On-Line Edge-Deletion Problem"
"Data Structures for On-line Updating of Minimum Spanning Trees"
"Ambivalent Data Structures for Dynamic 2-Edge-connectivity and k Smallest Spanning Trees"
"Lower Bounds for Fully Dynamic Connectivity Problems in Graphs"
"Fully Dynamic Algorithms for 2-Edge Connectivity"
"Fully Dynamic Cycle-Equivalence in Graphs"
"Approximating Minimum Cuts under Insertions"
"Sparse Certificates for Dynamic Biconnectivity in Graphs"
"Using Randomized Sparsification to Approximate Minimum Cuts"
"Linear time algorithms for finding a sparse k-connected spanning subgraph of a k-connected graph"
"Fully Dynamic Biconnectivity in Graphs"
"Improved Data Structures for Fully Dynamic Biconnectivity in Graphs"
"A data structure for dynamic trees"
"On Finding and Updating Spanning Trees and Shortest Paths"
--TR
A data structure for dynamic trees
Amortized analysis of algorithms for set union with backtracking
Maintenance of a minimum spanning forest in a dynamic plane graph
Fully dynamic algorithms for 2-edge connectivity
Complexity models for incremental computation
Separator based sparsification I.
Ambivalent Data Structures for Dynamic 2-Edge-Connectivity and <i>k</i> Smallest Spanning Trees
Sparsification - a technique for speeding up dynamic graph algorithms
Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge, and biconnectivity
Sampling to provide or to bound
An On-Line Edge-Deletion Problem
Improved Data Structures for Fully Dynamic Biconnectivity
Certificates and Fast Algorithms for Biconnectivity in Fully-Dynamic Graphs
--CTR
Mihai Patrascu , Erik D. Demaine, Lower bounds for dynamic connectivity, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
David Eppstein, Dynamic generators of topologically embedded graphs, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Glencora Borradaile , Philip Klein, An O (n log n) algorithm for maximum st-flow in a directed planar graph, Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, p.524-533, January 22-26, 2006, Miami, Florida
Robert E. Tarjan , Renato F. Werneck, Self-adjusting top trees, Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, January 23-25, 2005, Vancouver, British Columbia
Umut A. Acar , Guy E. Blelloch , Robert Harper , Jorge L. Vittes , Shan Leung Maverick Woo, Dynamizing static algorithms, with applications to dynamic trees and history independence, Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms, January 11-14, 2004, New Orleans, Louisiana
Camil Demetrescu , Giuseppe F. Italiano, Algorithmic Techniques for Maintaining Shortest Routes in Dynamic Networks, Electronic Notes in Theoretical Computer Science (ENTCS), v.171 n.1, p.3-15, April, 2007
David R. Karger, Minimum cuts in near-linear time, Journal of the ACM (JACM), v.47 n.1, p.46-76, Jan. 2000
Timothy M. Chan, Dynamic subgraph connectivity with geometric applications, Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, May 19-21, 2002, Montreal, Quebec, Canada
Jacob Holm , Kristian de Lichtenberg , Mikkel Thorup, Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge, and biconnectivity, Journal of the ACM (JACM), v.48 n.4, p.723-760, July 2001
Mikkel Thorup, Worst-case update times for fully-dynamic all-pairs shortest paths, Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, May 22-24, 2005, Baltimore, MD, USA
Eran Eyal , Dan Halperin, Dynamic maintenance of molecular surfaces under conformational changes, Proceedings of the twenty-first annual symposium on Computational geometry, June 06-08, 2005, Pisa, Italy
Camil Demetrescu , Giuseppe F. Italiano, Fully dynamic all pairs shortest paths with real edge weights, Journal of Computer and System Sciences, v.72 n.5, p.813-837, August 2006 | dynamic graph algorithms;connectivity |
320249 | Optimization of queries with user-defined predicates. | Relational databases provide the ability to store user-defined functions and predicates which can be invoked in SQL queries. When evaluation of a user-defined predicate is relatively expensive, the traditional method of evaluating predicates as early as possible is no longer a sound heuristic. There are two previous approaches for optimizing such queries. However, neither is able to guarantee the optimal plan over the desired execution space. We present efficient techniques that are able to guarantee the choice of an optimal plan over the desired execution space. The optimization algorithm with complete rank-ordering improves upon the naive optimization algorithm by exploiting the nature of the cost formulas for join methods and is polynomial in the number of user-defined predicates (for a given number of relations.) We also propose pruning rules that significantly reduce the cost of searching the execution space for both the naive algorithm as well as for the optimization algorithm with complete rank-ordering, without compromising optimality. We also propose a conservative local heuristic that is simpler and has low optimization overhead. Although it is not always guaranteed to find the optimal plans, it produces close to optimal plans in most cases. We discuss how, depending on application requirements, to determine the algorithm of choice. It should be emphasized that our optimization algorithms handle user-defined selections as well as user-defined join predicates uniformly. We present complexity analysis and experimental comparison of the algorithms. | Introduction
In order to efficiently execute complex database appli-
cations, many major relational database vendors provide
the ability to define and store user-defined func-
tions. Such functions can be invoked in SQL queries
and make it easier for developers to implement their
applications. However, such extensions make the task
of the execution engine and optimizer more challeng-
ing. In particular, when user-defined functions are used
in the Where clause of SQL, such predicates cannot be
treated as SQL built-in predicates. If the evaluation of
such a predicate involves a substantial CPU and I/O
cost, then the traditional heuristic of evaluating a predicate
as early as possible may result in a significantly
suboptimal plan. We will refer to such predicates as
user-defined (or, expensive) predicates.
Consider the problem of identifying potential customers
for a mail-order distribution. The mail-order
company wants to ensure that the customer has a high
credit rating, is in the age-group 30 to 40, resides in
the San Francisco bay area, and has purchased at least
$1,000 worth of goods in the last year. Such a query
involves a join between the Person and the Sales relation
and has two user-defined functions zone and
high credit rating.
Select name, street-address, zip
From Person, Sales
Where high-credit-rating(ss-no)
and age In [30,40]
and "bay area"
and Person.name = Sales.buyer-name
Group By name, street-address, zip
Having Sum(Sales.amount) > 1000
Let us assume that the predicate high credit rating
is expensive. In such a case, we may evaluate the
predicate after the join so that fewer tuples invoke the
above expensive predicate. However, if the predicate
is very selective, then it may still be better to execute
high credit rating so that the cost of the join is re-
duced. Such queries involving user-defined predicates
occur in many applications, e.g., GIS and multi-media.
This paper shows how commercial optimizers, many
of which are based on system R style dynamic programming
algorithm [SAC + 79], can be extended easily to be
able to optimize queries with user-defined predicates.
We propose an easy extension of the traditional optimizer
that is efficient and that guarantees the optimal.
We associate a per tuple cost of evaluation and a selectivity
with every user-defined predicate (as in [HS93]).
While the task of optimizing queries with user-defined
predicates is important, there are other interesting directions
of research in user-defined predicates, e.g., use
of semantic knowledge, e.g., [PHH92, CS93].
As pointed out earlier, the traditional heuristic of
evaluating predicates as early as possible is inappropriate
in the context of queries with user-defined predi-
cates. There are two known approaches to optimizing
queries that treat user-defined predicates in a special
way. The first technique, used in LDL [CGK89] is exponential
in the number of expensive predicates and
it fails to consider the class of traditional plans where
user-defined predicates are evaluated as early as possi-
ble. The second technique, known as Predicate Migration
[HS93] is polynomial in the number of expensive
predicates and takes into consideration the traditional
execution space as well. However, this algorithm cannot
guarantee finding the optimal plan. Moreover, in
the worst case, it may need to exhaustively enumerate
the space of joins (O(n!) in the number of joins n in
the query).
Our algorithm finds the optimal plan without ever
requiring to do an exhaustive enumeration of the space
of join orderings. The complexity of the algorithm is
polynomial in the number of user-defined functions 1 .
Our approach does not require any special assumptions
about the execution engine and the cost model. In
designing this optimization algorithm, we discovered a
powerful pruning technique (pushdown rule) that has
broader implication in other optimization problems as
well [CS96].
Although the optimization algorithm that guarantees
the optimal has satisfactory performance for a
large class of queries, its complexity grows with the
increasing query size. Therefore, we wanted to investigate
if simpler heuristics can be used as an alterna-
tive. The conservative local heuristic that we present
guarantees optimality in several cases and experimental
results show that it chooses an execution plan very
1 The complexity is exponential in the number of joins. This
is not unexpected since the traditional join optimization problem
itself is NP-hard.
close to the optimal while being computationally in-
expensive. Thus, this heuristic serves as an excellent
alternative where query size or complexity of the optimization
algorithm is a concern.
We have implemented the optimization algorithm as
well as the heuristic by extending a System-R style op-
timizer. We present experimental results that illustrate
the characteristics of the optimization algorithms proposed
in this paper.
The rest of the paper is organized as follows. In the
next section, we review the System R optimization algorithm
which is the basis of many commercial
optimizers. Next, we describe the desired execution
space and review the past work on optimizing queries
with user-defined predicates. Sections 4 and 5 describe
the optimization algorithm and the conservative local
heuristic respectively. The performance results and implementation
details are given in Section 6.
2 System R Dynamic Programming Algorithm
Many commercial database management systems have
adopted the framework of the System R optimizer
which uses a dynamic programming
algorithm. The execution of a query is represented
syntactically as an annotated join tree where each internal
node is a join operation and each leaf node is
a base relation. The annotations provide the details
such as selection predicates, the choice of access paths,
join algorithms and projection attributes of the result
relation. The set of all annotated join trees for a query
that is considered by the optimizer will be called the
execution space of the query. A cost function is used to
determine the cost of a plan in the execution space and
the task of the optimizer is to choose a plan of minimal
cost from the execution space. Most optimizers of
commercial database systems restrict search to only a
subset of the space of join orderings. In many opti-
mizers, the execution space is restricted to have only
linear join trees, whose internal nodes have at least
one of its two child nodes as a leaf (base relation). In
other words, a join with N relations is considered as
a linear sequence of 2-way joins. For each intermediate
relation, the cardinality of the result size and other
statistical parameters are estimated.
Figure 1 (adopted from [GHK92]) illustrates the System
R dynamic programming algorithm that finds an
optimal plan in the space of linear (left-deep) join
trees [SAC+79]. The input for this algorithm is a select-
project-join (SPJ) query on relations R_1, ..., R_n. The
procedure DP Algorithm:
for i := 2 to n do {
  for all S ⊆ {R_1, ..., R_n} s.t. |S| = i do {
    bestPlan := a dummy plan with infinite cost
    for all R_j ∈ S do {
      p := joinPlan(optPlan(S − {R_j}), R_j)
      if cost(p) ≤ cost(bestPlan) then
        bestPlan := p
    }
    optPlan(S) := bestPlan
  }
}
Figure 1: System R Algorithm for Linear Join Trees
function joinPlan(p,R) extends the plan p into another
plan that is the result of p being joined with the base relation
R in the best possible way. The function cost(p)
returns the cost of the plan p. Optimal plans for subsets
are stored in the optPlan() array and are reused
rather than recomputed.
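As a runnable illustration of Figure 1, here is the same dynamic program in Python; scan_plan,
join_plan, and cost are stand-ins for the optimizer's plan constructors and cost model:

    from itertools import combinations

    def dp_linear_join(relations, scan_plan, join_plan, cost):
        opt = {frozenset([r]): scan_plan(r) for r in relations}
        for i in range(2, len(relations) + 1):
            for S in map(frozenset, combinations(relations, i)):
                best = None
                for r in S:
                    p = join_plan(opt[S - {r}], r)   # extend optPlan(S - {r}) by r
                    if best is None or cost(p) < cost(best):
                        best = p
                opt[S] = best
        return opt[frozenset(relations)]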
The above algorithm does not expose two important
details of the System R optimization algorithm. First,
the algorithm uses heuristics to restrict the search
space. In particular, all selection conditions and secondary
join predicates are evaluated as early as pos-
sible. Therefore, all selections on relations are evaluated
before any join is evaluated. Next, the algorithm
also considers interesting orders. Consider a plan P for
R_1 ⋈ R_2 that uses sort-merge join and costs more than
another plan P' that uses hash-join. Nonetheless, P
may still be the optimal plan if the sort-order used in
P can be reused in a subsequent join. Thus, the System
R algorithm saves not a single plan, but multiple optimal
plans for every subset S in the figure, one for each
distinct such order, termed an interesting order [SAC+79].
Thus, a generous upper bound on the number of plans
that must be optimized for a query with joins among
n tables is O(2^n) (the number of subsets of n tables)
times the number of interesting orders.
3 Execution Space and Previous Approaches
As mentioned earlier, for traditional SPJ queries, many
optimizers find an optimal from the space of linear
join orderings only. When user-defined predicates are
present, the natural extension to this execution space
consists of considering linear sequence of joins, and allowing
an expensive predicate to be placed following
any number of (including zero) joins. Thus, an expensive
selection condition can be placed either immediately
following the scan of the relation on which it ap-
plies, or after any number of joins following the scan.
Likewise, an expensive secondary join predicate can be
placed either immediately after it becomes evaluable
(following the necessary joins), or after any number of
subsequent joins. In other words, this execution space
restricts the join ordering to be linear but allows expensive
predicates to be freely interleaved wherever they
are evaluable. We refer to this execution space as unconstrained
linear join trees. This is the same execution
space that is studied in [HS93, Hel94]. In this
section, we discuss two approaches that have been studied
in the past for optimizing queries with user-defined
predicates.
3.1 The LDL Approach
In this approach, an expensive predicate is treated
as a relation from the point of view of optimiza-
tion. This approach was first used in the LDL project
at MCC [CGK89] and subsequently at the Papyrus
project at HP Laboratories [CS93]. Viewing expensive
predicates as relations has the advantage that the
System-R style dynamic programming algorithm can
be used for enumerating joins as well as expensive pred-
icates. Thus, if e is an expensive predicate and R 1 and
R 2 are two relations, then the extended join enumeration
algorithm will treat the optimization problem as
that of ordering R 1 , R 2 and e using the dynamic programming
algorithm.
Shortcomings of the LDL Approach
This approach suffers from two drawbacks both of
which stem from the problem of over-generalizing and
viewing an expensive predicate as a relation. First, the
optimization algorithm is exponential not only in the
number of relations but also in the number of expensive
predicates. Let us consider the case where only
linear join trees are considered for execution. Thus, in
order to optimize a query that consists of a join of n
relations and k expensive predicates, the dynamic programming
algorithm will need to construct O(2^{n+k}) optimal
subplans. In other words, the cost of optimizing
a query with n relations and k expensive predicates
will be as high as that of optimizing (n+k) relations.
Another important drawback of this approach is that
if we restrict ourselves to search only linear join trees,
then the algorithm cannot be used to consider all plans
in the space of unconstrained linear trees. In particu-
lar, the algorithm fails to consider plans that evaluate
expensive predicates on both operands of a join prior
to taking the join [Hel94]. For example, assume that
R_1 and R_2 are two relations with expensive predicates
e_1 and e_2 defined on them. Since the LDL algorithm
treats expensive predicates and relations alike, it will
only consider linear join sequences of joins and selec-
tions. However, the plan which applies e 1 on R 1 and
e 2 on R 2 and then takes the join between the relations
R 1 and R 2 , is not a linear sequence of selections and
joins. Thus, this algorithm may produce plans that
are significantly worse than plans produced by even
the traditional optimization algorithm.
3.2 Predicate Migration
Predicate Migration algorithm improves on the LDL
approach in two important ways. First, it considers
the space of unconstrained linear trees for finding a
plan, i.e., considers pushing down selections on both
operands of a join. Next, the algorithm is polynomial
in the number of user defined predicates. However,
the algorithm takes a step backwards from the LDL
approach in other respects. This will be discussed later
in this section.
We will discuss two aspects of this approach. First,
we will discuss the predicate migration algorithm,
which given a join tree, chooses a way of interleaving
the join and the selection predicates. Next, we will describe
how predicate migration may be integrated with
a System R style optimizer [HS93, Hel94].
The predicate migration algorithm takes as input a
join tree, annotated with a given join method for each
join node and access method for every scan node, and
a set of expensive predicates. The algorithm places the
expensive predicates in their "optimal" (see discussion
about the shortcomings) position relative to the join
nodes. The algorithm assumes that join costs are linear
in the sizes of the operands. This allows it to
assign a rank to each join predicate in addition
to assigning ranks to expensive predicates. The notion
of rank has been studied previously in [MS79, KBZ86].
Having assigned ranks, the algorithm iterates over each
stream, where a stream is a path from a leaf to a root
in the execution tree. Every iteration potentially rearranges
the placement of the expensive selections. The
iteration continues over the streams until the modified
operator tree changes no more. It is shown in [HS93]
that convergence occurs in a polynomial number of
steps in the number of joins and expensive predicates.
The next part of this optimization technique concerns
integration with the System R style optimizer.
The steps of the dynamic programming algorithm are
followed and the optimal plan for each subexpression
is generated with the following change. At each
join step, the option of evaluating predicates (if ap-
plicable) is considered: Let P be the optimal plan of
oe e (R 1 S) and P 0 be the optimal plan for oe e (R) 1 S.
then the algorithm prunes the
plan P 0 without compromising the optimal. However,
if the plan for P 0 is cheaper, then dynamic programming
cannot be used to extend the plan P 0 . Rather, the
plan P 0 is marked as unprunable. Subsequently, when
constructing larger subplans, the algorithm ignores the
unprunable plans. After the dynamic programming algorithm
terminates, each such unprunable plans needs
to be extended through exhaustive enumeration, i.e.,
all possible ways of extending each unprunable plan
are considered.
Shortcomings of the Predicate Migration Approach
This approach to optimization has three serious drawbacks
that limit its applicability. First, the algorithm
requires that the cost formulas of joins be linear in the
sizes of the inputs. Next, the algorithm cannot guarantee
an optimal plan even if a linear cost model is
used. This is because the use of the predicate migration
algorithm may force estimations to be inaccurate. In
a nutshell, predicate migration requires a join predicate
to be assigned a rank, which depends on the cost
of the join, and the latter is a function of the input
sizes of the relations. Unfortunately, the input sizes
for the join depend on whether the expensive predicates
have been evaluated! This cyclic dependency
forces predicate migration to make an ad-hoc choice in
calculating the rank. During this step, the algorithm
potentially underestimates the join cost by assuming
that all expensive predicates have been pushed down.
This ad-hoc assumption sacrifices the guarantee of
optimality (see Section 5.2 of [Hel94] for a detailed dis-
cussion). Finally, the global nature of predicate migration
hinders integration with a System R style dynamic
programming algorithm. The algorithm may degenerate
into exhaustive enumeration. Let us consider a
query that has n relations and a single designated expensive
predicate e on the relation R 1 . Let us assume
that for the given database, the traditional plan where
the predicate e is evaluated prior to any join, is the
optimal plan. In such a case, plans for $\sigma_e(R_1 \bowtie R_i)$
($i \neq 1$) will be marked as unprunable. For each of
these plans, there are $(n-2)!$ distinct join orderings
and for each of these join orderings, there can be a
number of join methods. Thus, in the worst case, the
optimization process requires exhaustive enumeration
of the join space.
4 Dynamic Programming Based Optimization
Algorithms
Our discussion in the previous section shows that none
of the known approaches is guaranteed to find an optimal
plan over the space of unconstrained linear join
trees. In this section, we present our optimization algorithm
which is guaranteed to produce an optimal plan
over the above execution space. To the best of our
knowledge, this is the first algorithm that provides such
a guarantee of optimality. The techniques presented in
this section are readily adaptable for other join execution
spaces as well (e.g., bushy join trees) [CS96].
Our algorithm has the following important properties
as well: (1) It is remarkably robust: it is free from
special restrictions on the cost model or requirements for
caching. (2) The algorithm integrates well with the dynamic
programming based algorithms used in commercial op-
timizers, and never requires exhaustive enumeration.
(3) The algorithm is polynomial in the number of user-defined
predicates. We provide a succinct characterization
of what makes this optimization problem polynomial
and of the parameters that determine the complexity
of optimization.
Thus, our algorithm successfully addresses the short-comings
of the Predicate Migration algorithm without
sacrificing the benefit of considering the execution
space of unconstrained linear join trees, while ensuring
that the complexity of optimization grows only polynomially
with the increasing number of user-defined
functions.
For notational convenience, we will indicate ordering
of the operators in a plan by nested algebraic expres-
sions. For example, $(\sigma_e(R_1) \bowtie R_2) \bowtie \sigma_{e'}(R_3)$ denotes
a plan where we first apply selection e on relation
$R_1$, and then join that relation with $R_2$ before joining
it with the relation $R_3$, which has been reduced
by application of a selection condition $e'$. In describing
the rest of this section, we make the following two
assumptions: (a) all user-defined predicates are selec-
tions. This assumption is to simplify the presentation.
Our algorithms accommodate user-defined join predicates
as well and preserve the guarantee of optimality
as well as properties (1)-(3) above [CS96]. (b) No traditional
interesting orders are present. This assumption
is for ease of exposition only.
We begin by presenting the "naive" optimization algorithm
that guarantees optimality and has properties
(1) through (3) above. Next, we present two powerful
pruning techniques that significantly enhance the efficiency
of the optimization algorithm, as will be shown
later in the experimental section.
4.1 Naive Optimization Algorithm
The enumeration technique of our algorithm relies on
clever use of the following two key observations:
Equivalent Plan Pruning Rule: The strength of the
traditional join enumeration lies in being able to compare
the costs of different plans that represent the same
subexpression but evaluated in different orders. Since
selection and join operations may be commuted, we
can extend the same technique to compare and prune
plans for queries that have the same expensive predicates
and joins, i.e., if P and P 0 are two plans that
represent the same select-project-join queries with the
same physical properties, and if
then P may be pruned. For example, we can compare
the costs of the plans P and P 0 where P is the
plan (oe e (R is the plan
(R (R 1 ).
Selection Ordering: Let us consider a conjunction of a set
of expensive selection predicates applied on a relation.
The problem of ordering the evaluation of these predicates
is the selection ordering problem. The complexity
of selection ordering is very different from that of ordering
joins among a set of relations. It is well-known
that for traditional cost models, the latter problem is
NP-hard. On the other hand, the selection ordering
problem can be solved in polynomial time. Further-
more, the ordering of the selections does not depend
on the size of the relation on which they apply. The
problem of selection ordering was addressed in [HS93]
(cf. [KBZ86, MS79, WK90]). It utilizes the notion of
a rank. The rank of a predicate is the ratio $c/(1-s)$,
where c is its cost per tuple and s is its selectivity.
Theorem 4.1: Consider the query $\sigma_e(R)$ where
$e = e_1 \wedge e_2 \wedge \dots \wedge e_k$. The optimal ordering of the predicates in
e is in the order of ascending ranks and is independent
of the size of R.
For example, consider two predicates e and $e'$ with
selectivities 0.2 and 0.6 and costs 100 and 25. Although
the predicate e is more selective, its rank is 125 and
the rank of $e'$ is 62.5. Therefore evaluation of $e'$ should
precede that of e. The above technique of selection
ordering can be extended to broader classes of boolean
expressions [KMS92].
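To make the ordering concrete, the following minimal Python sketch (ours, not code from [HS93]) ranks predicates using the ratio $c/(1-s)$ recovered above; the predicate names and numbers are the ones from the example.

def rank(cost_per_tuple, selectivity):
    # a predicate that filters nothing should never be evaluated early
    if selectivity >= 1.0:
        return float("inf")
    return cost_per_tuple / (1.0 - selectivity)

# (name, cost per tuple, selectivity), as in the example above
preds = [("e", 100.0, 0.2), ("e_prime", 25.0, 0.6)]
for name, c, s in sorted(preds, key=lambda p: rank(p[1], p[2])):
    print(name, rank(c, s))   # e_prime (rank 62.5) before e (rank 125.0)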
Ensuring Complete Enumeration Efficiently
The equivalent plan pruning rule allows us to compare two
plans that represent the same expression. This observation
will help us integrate well with the System
R algorithm and avoid exhaustive enumeration (unlike
predicate migration). On the other hand, selection ordering
tells us that (in contrast to the LDL algorithm)
we can treat selections differently from relations to make enumeration
efficient. Indeed, this observation is what makes
our algorithm polynomial in the number of user-defined
predicates. Therefore, the challenge is to treat selections
differently from joins while enumerating but to
be still able to compare costs of two plans when they
represent the same expression. In order to achieve this
goal, we exploit the well-known idea of interesting orders
in a novel way.
We keep multiple plans that represent the join of the
same set of relations but differ in the sets of predicates
that have been evaluated. In other words, with every
join plan, an additional "tag" is placed, which records
the set of predicates that have been evaluated in the
plan. Thus, a tag acts very much like an interesting order
from the point of view of join enumeration. This is
a useful way of thinking about enumerating the space
of execution plans since the selection ordering rule ensures
that we need "a few" tags. Notice that whenever
two plans represent the join of same set of relations and
agree on the tags, they can be compared and pruned.
Figure 2 illustrates the execution plans (and sub-plans)
that need to be considered when there are three
relations and two expensive selection predicates $e_1$ and
$e_2$ on $R_1$. $P_1$, $P_2$ and $P_3$ are possible plans for $R_1$
(each with differing tags). The plans from $P_5$ to $P_{13}$
are for $R_1 \bowtie R_2$. We will distinguish between plans
that carry different tags, but will keep only a single plan
among those that share a tag (e.g., among $P_5$ and $P_7$). We now
formalize the above idea.
Tags
Let us consider a join step where we join two relations
$R_1$ and $R_2$. Let us assume that $(p_1, \dots, p_m)$ are the
predicates applicable on $R_1$, in the order of increasing
rank. From the selection ordering criterion, we conclude
that if the predicate $p_j$ is applied on $R_1$ prior to
the join, then so must all the predicates $p_1, \dots, p_{j-1}$. In
other words, there can be at most $(m+1)$ possibilities
for pushing down selections on $R_1$: (a) not applying
any predicate at all, or (b) applying the first j predicates
only, where j is between 1 and m. Likewise, if the predicates
applicable on $R_2$ are $(q_1, \dots, q_s)$, there are
at most $(s+1)$ possibilities. Thus, altogether there can
be $(m+1)(s+1)$ plans for the join between $R_1$ and
$R_2$ that differ in the applications of selections prior to
the join. We can denote these plans by $P_{r,t}$ ($0 \le r \le m$, $0 \le t \le s$),
where $P_{r,t}$ designates the plan that results from evaluating
$p_1, \dots, p_r$ on $R_1$ and $q_1, \dots, q_t$ on $R_2$ prior
to the join. Selection ordering plays a crucial role
in reducing the number of tags from exponential to
polynomial in the number of user-defined predicates.
Observe that if we cannot have a linear ordering among
selections, then we have to consider cases where any
subset of the selection predicates is chosen for evaluation
prior to the join. In that case, in the above join
between $R_1$ and $R_2$, the number of plans can be as
many as $2^{m+1} \cdot 2^{s+1}$.
We generalize the above idea in a straight-forward
fashion. For a subquery consisting of the join among
relations $\{R_{i_1}, \dots, R_{i_v}\}$, there will be at most
$(m_1+1)(m_2+1)\cdots(m_v+1)$ plans that need
to be kept, where $m_j$ represents the number of expensive
predicates that apply on $R_{i_j}$. We will associate a
distinct tag with each of these plans over the same sub-
query. We now sketch how tags may be represented.
We assign a number to each expensive predicate according
to the ordering by rank. If a user-defined
selection condition is present over w of the relations
(say, $R_{u_1}, \dots, R_{u_w}$) in the query, then with each plan
we associate a vector of width w. If $\langle a_1, \dots, a_w \rangle$ is
the tag vector associated with a plan P, then it designates that
all expensive predicates of rank lower than or equal to $a_j$ on
$R_{u_j}$ have been evaluated in P, for all $1 \le j \le w$. We
defer a full discussion of the scheme for tagging to the
extended version of our forthcoming report [CS96], but
illustrate the scheme with the following example.
Example 4.2: Consider a query that represents a
join among four relations $R_1, \dots, R_4$ with user-defined selections,
where the selections are numbered by their relative increasing
rank. The relation $R_1$ has three predicates,
numbered 2, 5, 6. Let $R_2$ have three predicates 1, 3, 4.
Let $R_3$ have predicates 7, 8, 9. The relation $R_4$ has no
predicates defined. The tag vector has three positions,
where the ith position represents predicates on the relation
$R_i$. There are altogether 16 plans for the join over
$R_1$ and $R_2$, each with a distinct tag. Consider the plan
for the tag vector $\langle 5, 4, 0 \rangle$. This plan can be joined
with the relation $R_3$. Depending on the selection predicates
evaluated prior to the join, there will be altogether
8 plans with different tag vectors that extend
the above plan. In particular, a plan will be generated
with the tag vector $\langle 5, 4, 8 \rangle$. This plan can be compared
with the plan obtained by extending a plan for
$R_2 \bowtie R_3$ with the tag vector $\langle 0, 4, 8 \rangle$ through a join
with $R_1$, evaluating predicates 2 and 5 on $R_1$ prior
to the join.
In the above example, we illustrated how we can prune
plans with the same tag vector and over the same set
of relations. This is unlike the approaches in [HS93,
Hel94] where once a user-defined predicate has been
"pushed-down", the plan is unprunable.
Algorithm: The extensions needed to the algorithm
in Figure 1 for the naive optimization algorithm are
straightforward. There is no longer a single optimal
plan for $S_j$ (in Figure 1), but there may be multiple
plans, one for each tag vector. Thus, we will need to
iterate over the set of possible tags. For each such optimal
plan $S_j^t$ with a tag t, we consider generating all
possible legal tags for S. For each such tag $t'$, joinPlan
needs to be invoked to extend the optimal plan $S_j^t$. We
need to also ensure that we compare costs of plans that
have the same tag.

[Figure 2: Search Space of the Naive Optimization Algorithm]
4.2 Complexity
In the optimization algorithm that we presented, we exploited
dynamic programming as well as selection or-
dering. The latter makes it possible for us to have
an optimization algorithm which is polynomial in k,
whereas the former makes it possible for us to retain
the advantage of avoiding exhaustive enumeration of
the join ordering. The efficiency of our algorithm is
enhanced by the applications of pruning rules that will
be described in the next section.
Let us consider a query that consists of a join among
n relations and that has k user-defined predicates. Let
us assume that only g of the n relations have one or
more user-defined selection conditions. Furthermore,
let w be the maximum number of expensive predicates
that may apply on one relation. In such cases, the
number of tags can be no more than $(1+w)^g$. Further-
more, we can show that the total number of subplans
that need to be stored has a generous upper-bound of
$2^{n-g}(2+w)^g$. Note that since n is the total number
of relations and k is the total number of user-defined
predicates, $g \le n$ and $w \le k$. Therefore, the above formula
can be used to derive an upper-bound of $(2+k)^n$.
Hence, for a given n, the upper-bound is a polynomial
in k. The above is a very generous upper bound and
a more detailed analysis will be presented in [CS96].
Observe that as in the case of traditional join enumer-
ation, the complexity is exponential in n.
Our complexity analysis shows that the complexity
is sensitive to the distribution of predicates
among relations as well as to the number of predicates
that may apply to a single relation. In particular, if
all user-defined predicates apply to the same relation,
then the complexity is $O(2^n(1+k/2))$, a linear function
of k. The complexity of this algorithm grows with the
number of relations over which user-defined predicates
occur since they increase the number of tags exponen-
tially. In the full paper, we study the effect of varying
distributions of user-defined predicates on efficiency of
the optimization algorithms [CS96].
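As a back-of-the-envelope illustration of this sensitivity, the following Python sketch evaluates the $2^{n-g}(2+w)^g$ bound for a few hypothetical parameter settings:

def subplan_bound(n, g, w):
    # generous upper bound on stored subplans, as derived above
    return 2 ** (n - g) * (2 + w) ** g

n, k = 7, 6   # a 7-relation query with 6 user-defined predicates in total
# all predicates on one relation: 2^n * (1 + k/2), linear in k
print(subplan_bound(n, g=1, w=k))   # 512 == 2**7 * (1 + 6/2)
# predicates spread over six relations, one each: exponential in g
print(subplan_bound(n, g=6, w=1))   # 1458
# traditional join enumeration stores 2^n subplans, for comparison
print(2 ** n)                       # 128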
It is important to recognize how we are able to avoid
the worst cases that predicate migration algorithm en-
counters. The predicate migration algorithm has its worst running
time when user-defined predicates turn out to be
relatively inexpensive (i.e., have low ranks). This is because
in such cases unprunable plans are generated (see Section
3). On the other hand, our optimization algorithm
prepares for all possible sequences of predicate push-down
through the use of tags. Furthermore, since in
many applications, we expect the number of expensive
user-defined functions in the query to be small and fewer
than the number of joins, it is important to ensure that
the cost of join enumeration does not increase sharply
due to the presence of a few user-defined predicates. How-
ever, as pointed out earlier, even with a single user-defined
predicate over n joins, the worst-case complexity
of predicate migration can be O(n!). Our approach
overcomes the above shortcoming of predicate migration
effectively.
4.3 Efficient Pruning Strategies
The naive algorithm can compare plans that have the
same tags only. In this section, we will augment our
"naive" optimization algorithm with two pruning tech-
niques. The pruning techniques that we propose here
allow us to compare and prune plans that have different
tags. These pruning techniques are sound, i.e., guaranteed
not to compromise the optimality of the chosen
plan.
Pushdown Rule
This rule says that if the cost of evaluating the selections
(prior to the join) together with the cost of the
join after the selections are applied, is less than the cost
of the join without having applied the selections, then
we should push down the selections.(2) For example,
in Figure 2, if the cost of $P_5$ is less than the cost of
$P_8$, we can prune $P_8$. In the naive optimization algorithm,
we had to keep both $P_5$ and $P_8$ since they had different
tags, i.e., different numbers of expensive predicates
were applied.
Lemma 4.3: Let $P'$ be a plan for the join $R \bowtie S$.
Let P be a plan that applies a user-defined predicate
e on the relation R before taking the join with S (i.e.,
$\sigma_e(R) \bowtie S$). If $Cost(P) \le Cost(P')$, then $P'$ may be pruned.
We refer to the above lemma as the pushdown rule.
The soundness of the above lemma follows from the observation
that for SPJ queries with the same interesting
order, the cost is a monotonic function of sizes of relations
[CS96]. A consequence of this rule is that if $P'$ is
a plan that has a set $S'$ of expensive predicates applied,
then it can be pruned by another plan P over the same
set of relations where (a) $Cost(P) \le Cost(P')$ and (b) P
has a set S of expensive predicates applied where S is
a superset of $S'$ (therefore, $S' \subseteq S$). Given two plans
over the same set of relations, we can easily check (b)
by examining the tag vectors of P and $P'$ [CS96]. If
indeed (b) holds, then we say T dominates $T'$, where
T and $T'$ are the tags of P and $P'$. We can rephrase the
above lemma to conclude the following:
Corollary 4.4: Let P and $P'$ be two plans over the
same set of relations with the tags T and $T'$ such that
T dominates $T'$. If $Cost(P) \le Cost(P')$, then $P'$
may be pruned.
For a given plan P, the set of plans (e.g., $P'$) that the
above corollary allows us to prune will be denoted by
pushdown_expensive(P).
(2) Strictly speaking, the lemma can be used to compare plans
that have the same interesting order.
Example 4.5: Let us consider the previous exam-
ple. For the plan that represents the join among
$R_1$, $R_2$ and $R_3$, there will be altogether 64 tags. How-
ever, if the cost of the plan with the tag $\langle 6, 4, 9 \rangle$ is
lower than that of $\langle 5, 4, 8 \rangle$, we can use the pushdown
rule to prune the latter plan.
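A minimal sketch of the domination test and the resulting pushdown pruning check, with made-up costs, might look as follows:

def dominates(T, T_prime):
    # T dominates T' iff every component of T is >= that of T'
    return all(a >= b for a, b in zip(T, T_prime))

def pushdown_prunable(plan, other):
    # other may be pruned if plan has applied a superset of the
    # predicates (its tag dominates) and costs no more (Corollary 4.4)
    (tag, cost), (tag_o, cost_o) = plan, other
    return dominates(tag, tag_o) and cost <= cost_o

p1 = ((6, 4, 9), 900.0)   # hypothetical cost for the plan tagged <6, 4, 9>
p2 = ((5, 4, 8), 950.0)   # hypothetical cost for the plan tagged <5, 4, 8>
print(pushdown_prunable(p1, p2))   # True: the plan tagged <5, 4, 8> goes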
Pullover Rule
This rule says that if locally deferring evaluation of a
predicate leads to a cheaper plan than the plan that
evaluates the user-defined predicate before the join,
then we can defer the evaluation of the predicate without
compromising the optimal. The soundness of this
rule uses the dynamic programming nature of the System
R algorithm and can be established by an inductive
argument. For example, if the cost of the plan extending
$P_6$ with evaluation of $e_2$ (i.e., $\sigma_{e_2}(\sigma_{e_1}(R_1) \bowtie R_2)$) is
less than the cost of $P_5$ in Figure 2, we can prune $P_5$.
In the naive optimization algorithm, we had to keep both
$P_5$ and $P_6$ since they had different tags, i.e., different
numbers of predicates were applied to each of the plans.
Lemma 4.6: Let e be a user-defined predicate on a
relation R. Let P and $P'$ represent the optimal plans for
$\sigma_e(R \bowtie S)$ and $\sigma_e(R) \bowtie S$ respectively. If $Cost(P) \le Cost(P')$,
then $P'$ may be pruned.
We refer to the above as the pullover rule since the
plan P in the lemma corresponds to the case where the
predicate is pulled up. This rule can also be used in the
context of predicate migration to reduce the number of
unprunable plans generated (cf. [HS93]). We can use
the pullover rule for pruning plans as follows. Let us
consider plans P and $P'$ over the same set of relations
but with different tags T and $T'$. If the tag T dominates
$T'$, then the predicates that are evaluated in $T'$ are also
evaluated in T. Let $Diff(T, T')$ represent the set of
predicates that are evaluated in T but not in $T'$. We
can then use the pullover rule to obtain the following
corollary. Intuitively, the corollary says that we can
compare cost(P) with that of cost($P'$) plus
the cost of evaluating the predicates in $Diff(T, T')$ after the
join in $P'$.
Corollary 4.7: Let P and $P'$ be two plans with tags T
and $T'$ over the same set of relations such that T dominates
$T'$. Let $P''$ be the plan obtained by applying the predicates
in $Diff(T, T')$ to $P'$. If $cost(P'') \le cost(P)$,
then P may be pruned.
For a given P, we can construct the set of all such plans,
each of which may be used to prune P. We refer
to this set as pullover_cheaper(P). The following
example illustrates the corollary. Consider
Example 4.5 with the following change: the cost of the
plan P with the tag $T = \langle 6, 4, 9 \rangle$ is higher than the
cost of the plan $P'$ with the tag $T' = \langle 5, 4, 8 \rangle$. Notice
that the tag $\langle 6, 4, 9 \rangle$ dominates the tag $\langle 5, 4, 8 \rangle$.
The set $Diff(T, T') = \{6, 9\}$. In such a case, the above
corollary allows us to prune the plan P if the cost of the
plan $P'$ with the added cost of evaluating the set of
predicates $\{6, 9\}$ after the join does not exceed the cost of P.
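The corresponding pullover check can be sketched in the same style; the diff evaluation cost and all numbers below are assumptions made for illustration:

def dominates(T, T_prime):
    return all(a >= b for a, b in zip(T, T_prime))

def pullover_prunable(P, P_prime, diff_eval_cost):
    # P may be pruned if completing P' with Diff(T, T') is still no
    # more expensive than P (Corollary 4.7)
    (T, cost), (T_p, cost_p) = P, P_prime
    return dominates(T, T_p) and cost_p + diff_eval_cost <= cost

# the modified Example 4.5: the plan tagged <6, 4, 9> is now expensive
P       = ((6, 4, 9), 1200.0)   # hypothetical costs
P_prime = ((5, 4, 8), 900.0)
diff_cost_after_join = 250.0    # evaluating Diff(T, T') = {6, 9}
print(pullover_prunable(P, P_prime, diff_cost_after_join))   # True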
4.4 Optimization Algorithm with Pruning
In this section, we augment the naive optimization algorithm
with the pruning strategies. The extended algorithm
is presented in Figure 3. The PlanTable data
structure stores all plans that need to be retained for
future steps of the optimizer. For every subset of re-
lations, the data structure stores potentially multiple
plans. The different plans correspond to different tags.
Storing such plans requires a simple extension of the
data structure used to represent plans with interesting
orders in the traditional optimizers.
In determining the access methods and choice of join
methods, the algorithm behaves exactly like the traditional
algorithm in Figure 1. However, when there
are s applicable user-defined predicates on the operand
$S_j$ and r applicable predicates on the operand $R_j$, the
algorithm iteratively considers all $(s+1)(r+1)$ possibilities,
which correspond to applying the first u predicates
and the first v predicates on S j and R j respectively
where the predicates are ordered by ranks. This
is the inner loop of the algorithm and is represented by
extjoinPlan. It should be noted that $S_j$ is an intermediate
relation and so the first u predicates on S j may
include predicates on multiple relations that have been
joined to form S j .
The choices of u and v uniquely determine the tag
for the plan p in Figure 3. The plan p will be compared
against plans over the same set of relations that have
already been stored. The plan p is pruned and the
iteration steps to the next (u, v) combination if one
of the following two conditions holds: (1) p is more
expensive than the plan in the PlanTable with the same
tag, if any. (2) The set of plans pullover_cheaper(p)
is not empty, i.e., the pullover rule can be used to prune
p.
Otherwise, the predicate addtotable(p) becomes true
and the plan p is added to the PlanTable. Next, this new
plan p is used to prune plans that are currently in
the PlanTable. In the algorithm, we have designated this
set of pruned plans by pruneset(p). They may be:
(1) the stored plan with the same tag, if it exists in
the PlanTable and is more expensive; (2) the set of
plans in pushdown_expensive(p), i.e., plans that may
be pruned with p using the pushdown rule.
procedure Extended-DP-Algorithm
for i := 2 to n do {
    for all S ⊆ {R1, ..., Rn} s.t. |S| = i do {
        bestPlan := a dummy plan with infinite cost
        for all Rj, Sj such that S = {Rj} ∪ Sj do {
            s := number of evaluable predicates on Sj
            r := number of evaluable predicates on Rj
            for all u := 0 to s do
                for all v := 0 to r do {
                    p := extjoinPlan(Sj, Rj, u, v)
                    if addtotable(p) then {
                        remove pruneset(p)
                        add p to PlanTable
                    }
                }
        }
    }
}
for all plans q of {R1, ..., Rn} do
    complete the plan q and estimate its cost
return (MinCost(Final))

Figure 3: The Optimization Algorithm with Pruning for Linear Join Trees
At the end of the final join, we consider all plans
over the relations $\{R_1, \dots, R_n\}$. Some of these plans may
need to be completed by adding the step to evaluate
the remainder of the predicates. Finally, the cheapest
among the set of completed plans is chosen.
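To make the PlanTable bookkeeping concrete, here is a skeletal Python sketch of our reading of Figure 3 (not the authors' code): a plan is keyed by its relation set and tag, and a new plan is admitted only if no stored plan makes it redundant under the same-tag, pushdown, or pullover comparisons. The helper diff_eval_cost, which estimates the cost of evaluating Diff(T, T') after the join, is an assumed input.

plan_table = {}   # (frozenset of relations, tag tuple) -> cost

def dominates(T, Tp):
    return all(a >= b for a, b in zip(T, Tp))

def prunes(tag_a, cost_a, tag_b, cost_b, diff_eval_cost):
    # True if plan a makes plan b redundant (same relation set assumed)
    if tag_a == tag_b or dominates(tag_a, tag_b):
        return cost_a <= cost_b          # same-tag or pushdown comparison
    if dominates(tag_b, tag_a):          # pullover: complete a, compare
        return cost_a + diff_eval_cost(tag_b, tag_a) <= cost_b
    return False

def add_to_table(rels, tag, cost, diff_eval_cost):
    same = [(k, c) for k, c in plan_table.items() if k[0] == rels]
    if any(prunes(k[1], c, tag, cost, diff_eval_cost) for k, c in same):
        return False                     # addtotable(p) is false
    for k, c in same:                    # remove pruneset(p)
        if prunes(tag, cost, k[1], c, diff_eval_cost):
            del plan_table[k]
    plan_table[(rels, tag)] = cost       # add p to PlanTable
    return True

dc = lambda T, Tp: 100.0                 # made-up diff evaluation cost
print(add_to_table(frozenset({"R1", "R2"}), (5, 4), 900.0, dc))   # True
print(add_to_table(frozenset({"R1", "R2"}), (6, 4), 1100.0, dc))  # False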
5 Conservative Local Heuristic
Although the optimization algorithm with novel pruning
techniques guarantees the optimal plan and is computationally
efficient, the conservative local heuristic
that we propose in this section has remarkable qualities
that make it an attractive alternative for implementa-
tion. First, incorporating the heuristic in an existing
System-R style optimizer is easier since tags do not
need to be maintained. Next, incorporating the heuristic
increases the number of subplans that need to be
optimized for a query by no more than a factor of 2
compared to the traditional optimization, independent
of the number of user-defined predicates in the query.
Finally, there are a number of important cases where
the algorithm guarantees generation of an optimal execution
plan.
The simplest heuristics correspond to pushing all expensive
predicates down or deferring evaluation of all
expensive predicates until the last join. These heuristics
do not take into account the costs and selectivities
of the predicates and therefore generate plans of low
quality. Recently, a new heuristic, Pullrank, was proposed
but it was found that the heuristic fails to generate
plans of acceptable quality [Hel94]. We begin by describing
PullRank, characterizing its shortcomings and
then presenting the conservative local heuristic.
Pullrank maintains at most one plan over the same
set of relations. At each join step, for every choice of
the set of predicates that are pushed down, the Pull-
rank algorithm estimates the sum of the costs (we will
call it the completion cost) of the following three components:
(i) the cost of evaluating expensive predicates that
are pushed down at this step; (ii) the cost of the join, taking
into account the selectivities of the expensive predicates
that are applied; (iii) the cost of evaluating the remainder
of the user-defined functions that are evaluable
before the join but were deferred past the join.
Pullrank chooses the plan that has the minimum completion
cost. Thus, the algorithm greedily pushes down
predicates if the cost of deferring the evaluation of predicates
past the join is more expensive, i.e., if Pullrank
decides that evaluating a predicate u before a join j is
cheaper than evaluating the predicate u immediately
following j, then evaluation of u will precede j in the
final plan, i.e., Pullrank will not consider any plans
where u is evaluated after j. Thus, Pullrank fails to
explore such plans where deferring evaluation of predicates
past more than one join is significantly better
than choosing to greedily push down predicates based
on local comparison of completion costs.
In order to address the above drawback of Pullrank,
the conservative local heuristic picks one additional
plan (in addition to the plan picked by Pullrank) at
each join step based on sum of the costs of (i) and
(ii) only. Let us refer to this cost metric as pushdown-
join cost. This is the same as assuming that deferred
predicates are evaluated for "free" (i.e., cost component
(iii) is zero). In other words, the plans chosen
using such a metric favor deferring predicates unless
the evaluation of predicates helps reduce the cost of
the current join. Thus, since conservative local heuristic
picks two plans, one for completion cost and the
other for pushdown-join cost, it is possible that the
plan where the predicate u is deferred past j as well as
the plan where u is pushed down prior to j (chosen by
Pullrank), are considered towards the final plan. Thus,
conservative local heuristic can find optimal plans that
Pullrank and other global heuristics fail to find due to
its greedy approach. This is illustrated by the following
example.
Example 5.1: Consider the query $R_1 \bowtie R_2 \bowtie R_3$
with an expensive predicate e on $R_1$. Let us assume that the plan
$\sigma_e(R_1 \bowtie R_2) \bowtie R_3$ is optimal. Note that none of the global heuristics
that either push down or pull up all the selections
can find the optimal. If the plan for $\sigma_e(R_1) \bowtie R_2$
is cheaper than $\sigma_e(R_1 \bowtie R_2)$, Pullrank greedily
pushes down e and fails to obtain the optimal. How-
ever, our algorithm uses the plan $R_1 \bowtie R_2$ in the next
join step to obtain the optimal. This is an example
where a pullup followed by a pushdown was optimal
and therefore only our algorithm was able to find it.
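The following small sketch illustrates the heuristic's selection at one join step with hypothetical cost components; note how the two metrics can keep different candidates alive:

# (label, pushdown cost (i), join cost (ii), deferred predicate cost (iii))
candidates = [
    ("push e down", 120.0, 300.0,  0.0),
    ("defer e",       0.0, 380.0, 90.0),
]

def completion_cost(c):
    return c[1] + c[2] + c[3]   # components (i) + (ii) + (iii)

def pushdown_join_cost(c):
    return c[1] + c[2]          # deferred predicates treated as free

kept = {min(candidates, key=completion_cost)[0],
        min(candidates, key=pushdown_join_cost)[0]}
print(kept)   # both 'push e down' and 'defer e' survive to the next join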
For the join of every subset of relations, at most two
plans are stored by the conservative local heuristic. There-
fore, we never need to consider optimizing more than
$2 \cdot 2^n$ plans. Thus, unlike the algorithm in Figure 3,
the number of subplans that need to be optimized does
not grow with the increasing number of user-defined
predicates. In general, the conservative local heuristic may
miss an optimal plan. Intuitively, this is because in
this algorithm, distinctions among the tags are not
made. Nevertheless, the experimental results indicate
that the quality of the plan is very close to the optimal
plan [CS96]. Furthermore, as the following lemma
states, the conservative local heuristic produces an optimal
plan in several important special cases.
Lemma 5.2: The conservative local heuristic produces
an optimal execution plan if any one or more of the following
conditions is true: (1) The query has a single
join. (2) The query has a single user-defined predicate.
(3) The optimal plan corresponds to the case where all
the predicates are pushed down. (4) The optimal plan corresponds
to the case where all the predicates are deferred
until all the joins are completed.
6 Performance Evaluation
We implemented the optimization algorithms proposed
in this paper by extending a System R style optimizer.
In this section, we present the results of performance
evaluations of our implementations. In particular, we show that:
(1) The pruning strategies that we proposed improve
the performance of the naive optimization algorithm significantly.
(2) The plans generated by the traditional optimization
algorithm suffer from poor quality.
(3) The plans generated by the Pullrank algorithm are
better (less expensive) than the plans generated by
a traditional optimizer, but are still significantly worse
than the optimal.
(4) The conservative local heuristic algorithm reduces
the optimization overhead and it generates plans that
are very close to the optimal.
[Figure 4: Performance on Varying Number of User-defined Predicates (6-join query). Two graphs plot the number of enumerated plans and the relative plan cost against the number of UDPs on one relation, for the Traditional, Pull-Rank, Conservative Local Heuristic, Optimization with Pruning, and (for enumerated plans) Naive Optimization algorithms.]
Experimental Testbed
Experiments were performed on an IBM RS/6000
workstation with 128 MB of main memory, and running
AIX 3.2.5. We have used an experimental framework
similar to that in [IK90, CS94]. The algorithms were
run on queries consisting of equality joins only. The
queries were tested with a randomly generated relation
catalog where relation cardinalities ranged from 1000 to
100000 tuples, and the numbers of unique values in join
columns varied from 10% to 100% of the corresponding
relation cardinality. The selectivity of expensive predicates
was randomly chosen from 0.0001 to 1.0, and the
cost per tuple of expensive predicates was represented
by the number of I/O (page) accesses and was selected
randomly from 1 to 1000. Each query was generated to
have two projection attributes. Each page of a relation
was assumed to contain 32 tuples. Each relation had
four attributes, and was clustered on one of them. If a
relation was not physically sorted on the clustered at-
tribute, there was a B + -tree or hashing primary index
on that attribute. These three alternatives were equally
likely. For each of the other attributes, the probability
that it had a secondary index was 1/2, and the choice
between a B+-tree and a hashing secondary index was
again uniformly random. We considered block nested-
loops, merge-scan, and simple and hybrid hash joins.
Interesting orders were considered when storing sub-
plans. In our experiments, only the number of
I/O (page) accesses was accounted for in the cost.
We performed two sets of experiments. In the first
set, we varied the number of user-defined predicates
that apply on one relation. In the second set, we varied
the distribution of the user-defined predicates on
multiple relations in the query. Due to lack of space,
we present only the experiments where the number of
user-defined selections that apply on a relation are var-
ied. The results of the other experiments will be discussed
in [CS96]. The second set of experiments shed
light on how the distribution of the user-defined predicates
among relations in the query influences the cost
of optimization. The results also show how our conservative
local heuristic sharply reduces the overhead
of optimization under varying distributions.
Effect of the Number of User-defined Predicates
Due to the lack of space, we will show the results for
6-join (i.e. join among 7 relations) queries only but similar
results were obtained for other queries (e.g. 4-join
and 10-join queries) as well. The detailed performance
study with various queries will be presented in [CS96].
In this experiment, one relation in the query was chosen
randomly and the number of expensive predicates
applicable was varied from 1 to 6. The results presented
here for each data point represent averages over
100 randomly generated queries.
We examined how the optimization algorithms
behave as we increase the number of expensive predicates
for the randomly selected relation in the queries.
Figure 4 shows the number of enumerated plans and
the quality of plans generated by each algorithm. A
comparison of the performances of the naive optimization
algorithm and optimization algorithm with pruning
shows that our proposed pruning techniques are extremely
effective. Note that both these algorithms are
guaranteed to be optimal. Over all queries, the naive
optimization algorithm enumerated about 3 times more
plans than the optimization algorithm with pruning.
The result on quality of plans shows the relative cost
of the plans generated by each algorithm. The cost of the plan
generated by the optimization algorithm with pruning was
scaled as 1.0. Since the naive optimization algorithm and
optimization algorithm with pruning always generate
optimal plans, 1.0 represents the cost of both optimal
plans. The figure illustrates that the quality of the plan
generated by the traditional optimizer suffers significantly,
while the quality of the plan generated by the Pullrank algorithm
gets worse as the number of expensive predicates
increases.
Conservative local heuristic chooses plans that are
identical to or very close to the optimal. This is illustrated
by the fact that the graphs for the heuristic
and the optimization algorithm are practically indistin-
guishable. Although in this experiment, conservative
local heuristic does not reduce the number of enumerated
plans significantly compared to the optimization
algorithm with pruning, this observation does not extend
in general, particularly when the user-defined selections
are distributed among multiple relations. In the
latter cases, the conservative local heuristic proves to
be the algorithm of choice, since it continues to choose
plans close to the optimal plan with much less optimization
overhead [CS96].
Acknowledgements: We are indebted to Joe Hellerstein
for giving us detailed feedback on our draft in a
short time. The anonymous referees provided us with
insightful comments that helped improve the draft.
Thanks are due to Umesh Dayal, Nita Goyal, Luis
Gravano and Ravi Krishnamurthy for their help and
comments. Without the support of Debjani Chaudhuri
and Yesook Shim, it would have been impossible
to complete this work.
References
Query optimization in the presence of foreign functions.
Including group-by in query optimization
Optimization with user-defined predicates
Query optimization for parallel execution.
Practical predicate placement.
Predicate migration: Optimizing queries with expensive predicates.
Randomized algorithms for optimizing large join queries.
Optimization of nonrecursive queries.
Optimizing boolean expressions in object-bases
Sequencing with series-parallel precedence constraints
Extensible/rule based query optimization in starburst.
Query optimization in a memory-resident domain relational calculus database system
Srinath Shankar , Ameet Kini , David J. DeWitt , Jeffrey Naughton, Integrating databases and workflow systems, ACM SIGMOD Record, v.34 n.3, September 2005 | query optimization;user-defined predicates;dynamic programming |
A Distributed Algorithm for Graphic Objects Replication in Real-Time Group Editors

(To appear in Proc. of ACM International Conference on Supporting Group Work, Nov 14-17, 1999, Phoenix, Arizona, USA)

Abstract: Real-time collaborative editing systems are groupware systems that allow multiple users to edit the same document at the same time from multiple sites. A specific type of collaborative editing system is the object-based collaborative graphics editing system. One of the major challenges in building such systems is to solve the concurrency control problems. This paper addresses the concurrency control problem of how to preserve the intentions of concurrently generated operations whose effects are conflicting. An object replication strategy is proposed to preserve the intentions of all operations. The effects of conflicting operations are applied to different replicas of the same object, while non-conflicting operations are applied to the same object. An object identification scheme is proposed to uniquely and consistently identify non-replicated and replicated objects. Lastly, an object replication algorithm is proposed to produce consistent replication effects at all sites.

I. Introduction
People collaborate to solve problems which would otherwise
be difficult or impossible for individuals. However,
group work can become unproductive and expensive [8]. For
these reasons, many researchers have been conducting studies
on how to effectively support group work [3]. Computer-Supported
Cooperative Work (CSCW) [2, 4] or computer-based
groupware systems assist groups of people working simultaneously
on a common task by providing an interface
for a shared environment [2]. Groupware systems range from
asynchronous or non-real-time tools such as electronic mail,
to highly interactive synchronous systems such as Real-time
Collaborative Editing Systems (CESs) [1, 5, 13, 16, 14]. CESs
allow multiple users to edit the same document simultaneously.
A particular type of CES is the collaborative graphics
editing systems. Collaborative graphics editing systems can
be further divided into two types, object-based and bitmap-
based. This paper examines the concurrency control problem
associated with object-based collaborative graphics editing
systems (OCESs).
In OCESs, objects such as line, rectangle, circle, etc., can
be created. Each object is represented by attributes such
as type, size, position, color, group, etc. Create operations
are used to create objects. After an object has been created,
operations can be applied to change attributes of that object.
For example, a move operation changes the position attribute
of the object it is applied to.
A concurrency control problem arises when concurrent operations
are generated from different sites to change the same
attribute of the same object. These operations are called
conflicting operations. For example, two concurrent move
operations are conflicting if both move the same object to
different positions. The execution of conflicting operations
may result in inconsistency of the shared document amongst
editing sites.
A. Existing work
The existing approaches to resolving the problem of conflict
in OCESs can be classified into two types: locking and
serialization. With the locking approach, a lock is placed on
the object being edited so no other user can edit that object
at the same time. Examples of such systems include: Aspects
[17], Ensemble [12], GroupDraw [5] and GroupGraph-
ics [13].
For locking to work, there has to be a coordinating process
which keeps track of which object(s) has been locked so
it can grant/deny permission for requests to locks. This process
may reside in a central server or in one of the editing
sites. The problem with this is that when an editing operation
is generated, it has to wait for the round trip time of a
message sent to the coordinating process and back, before it
can be executed (if it is allowed) at the local site. Due to this
unpredictable network delay, it may be a long time between
when an operation is generated and when it is executed at
the local site. This results in slow response time. Ensemble
and GroupDraw tried to overcome this problem. In Ensem-
ble, operations guaranteed not to cause conflict are applied
directly without waiting for approval. While in GroupDraw,
locally generated operations are executed right away and a
message is sent to the coordinating process. If the coordinating
process does not approve of the operation, then the effect
of that operation is undone.
With the serialization approach, operations can be executed
as soon as they are generated. When applying an
operation, that operation has to be compared for conflicts
with executed operations. When a conflict is detected, a
total (serial) order between operations is used to determine
which operation's effect will appear. Since the information
needed to determine the total ordering are attached to the
operations, a site can determine the total ordering without
extra communication with other sites. Examples of such systems
are: GroupDesign [10] and LICRA [9]. A key to this
approach is defining the conditions for when two operations
conflict. To define when operations conflict, their commutative
and masking relationships are first defined. Two operations
conflict if they neither commute nor mask with each
other.
Many CES researchers (including Grudin [6], Haake [7] and
others) advocate the philosophy that a CES should be
a natural extension of a single user editor. It should perform
like a single user editor with additional functions to support
real-time collaboration. A fundamental property of single
user editors is whenever an operation is generated, its effect is
persistent until another operation is generated to specifically
overwrite or undo that effect. Sun and co-authors [14, 15, 16]
proposed an intention preservation effect to preserve this fundamental
property in CESs. In CESs concurrent operations
are not generated to overwrite each other's effect, therefore,
concurrent operations should not change each other's effect.
Even though GroupDraw, GroupDesign and LICRA provide
quick response time, they do not guarantee intention preser-
vation. In these systems, the effect of an operation may be
changed by other concurrently generated operations.
B. GRACE
GRAphic Collaborative Editing system (GRACE) is the
name of the OCES being developed by our group. GRACE
has a distributed replicated architecture. Users of the system
may be located in geographically-separated sites. All
sites are connected via the Internet. Each site runs a copy
of the GRACE software and has a copy of the shared document
being edited. GRACE maintains the consistency of the
shared document at all sites.
GRACE does not use any form of locking for concurrency
control. When an operation is generated, it is executed immediately
at the local site. Then it is sent directly (without
going through a central coordinator) to all remote sites. The
response time of GRACE is as short as any single user ed-
itor. Also, the propagation delay is as fast as the network
would allow. There is no editing restriction placed on the
users. Like a singe user editor, a user can edit any part of
the shared document at any time.
In GRACE, the effects of operations are not serialized.
After an operation is generated, it will be executed at all sites.
The effect of an operation cannot be changed or overwritten
by other concurrently generated operations. The intentions
of all operations are preserved.
In the next section, some definitions and notations will be
introduced. Section III discusses how to preserve the intention
of operations by using the method of object replication.
This section is divided into five subsections. Section III-A
presents the replication effect for any combination of conflicting
and non-conflicting operations. Section III-B discusses
how to uniquely and consistently identify non-replicated and
replicated objects. In section III-C, some definitions are revised
to address the situation where operations may be generated
to edit replicated objects. Section III-D presents the
object replication algorithm to execute operations and provide
consistent replication effects. Section III-E uses an example
to illustrate how a replication effect is produced by
applying the described techniques. Lastly, section IV provides
a further comparison of GRACE with other approaches and
finishes with a conclusion.
II. Definitions and notations
GRACE adopts the consistency model of the REDUCE
system [15, 16]. There are three consistency properties in
this consistency model: causality preservation, convergence
and intention preservation. The definition of these properties
relies on the causal ordering and dependency relationship
amongst operations.
Definition 1: Causal ordering "$\rightarrow$" and dependency relationship
• Given two operations $O_a$ and $O_b$, generated at sites i
and j, then $O_a \rightarrow O_b$, iff: (1) $i = j$ and the generation
of $O_a$ happened before the generation of $O_b$, or (2) $i \neq j$
and the execution of $O_a$ at site j happened before the
generation of $O_b$, or (3) there exists an operation $O_x$,
such that $O_a \rightarrow O_x$ and $O_x \rightarrow O_b$.
• Given any two operations $O_a$ and $O_b$: (1) $O_b$ is said to
be dependent on $O_a$ iff $O_a \rightarrow O_b$. (2) $O_a$ and $O_b$ are said
to be independent (or concurrent) iff neither $O_a \rightarrow O_b$
nor $O_b \rightarrow O_a$, which is expressed as $O_a \parallel O_b$.
A state vector is a logical vector clock. Each site maintains
a state vector. Whenever an operation is generated at a site,
it is time-stamped with the state vector value of that site. By
comparing the state vector values between two operations,
their dependency relationship can be determined [16].
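As an illustration, this comparison can be sketched as the standard component-wise vector clock test (a simplification of the scheme in [16]); the vectors below are illustrative timestamps:

def happened_before(sv_a, sv_b):
    # Oa -> Ob iff sv_a <= sv_b component-wise and the vectors differ
    return all(a <= b for a, b in zip(sv_a, sv_b)) and sv_a != sv_b

def concurrent(sv_a, sv_b):
    return (not happened_before(sv_a, sv_b)
            and not happened_before(sv_b, sv_a))

print(happened_before((1, 0, 0), (1, 1, 0)))   # True:  Oa -> Ob
print(concurrent((1, 0, 0), (0, 1, 0)))        # True:  Oa || Ob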
This paper focuses on one consistency property, intention
preservation. The intention of an operation and the intention
preservation property are described in Definitions 2 and 3.
Definition 2: Intention of an operation
Given an operation O, the intention of O is the execution
effect which can be achieved by applying O on the document
state from which O was generated.
Definition 3: Intention preservation property
For any operation O, the effects of executing O at all sites
are the same as the intention of O, and the effect of executing
O does not change the effect of independent operations.
The intention of a create operation will always be pre-
served, since create operations create new objects and will
not be affected by other operations. The discussion of intention
preservation applies to all simple modification operations
with the following properties:
• A simple modification operation targets one object and
its effect is limited to only that object;
• A simple modification operation changes only one attribute
of its target object.
Examples of such modification operations are: move, resize,
fill, etc.
Some notations are introduced to facilitate discussions.
Let C be any create operation. Let O be any simple modification
operation. Att.type(O) denotes the attribute O is
modifying, e.g. position, size or color. Att.val(O) denotes
the attribute value O is changing to, e.g. the new position
for the object. G denotes any graphic object in the shared
document. It is assumed all operations are executed in their
causal orders.
III. Object replication
In the GRACE system, if all operations are generated causally
after each other, then simply executing the operations in accordance
to their causal ordering will preserve the intentions
of all operations. However, operations may be generated in-
dependently. The intention of an operation will be preserved
if its effect is not changed by independent operations. The
effect of an operation will not be changed by another independent
operation if they are editing different objects.
For operations editing the same object, they will not
change each other's effect if they edit different attributes. Independent
operations editing the same attribute of the same
object will not change each other's effect if they change the
same attribute into the same value. Only when independent
operations change the same attribute of the same object to
different values does intention violation occur. Operations with
this type of relationship are called conflicting operations. For
all the operations mentioned in the rest of this section, they
will be assumed to have targeted the same object when they
were generated (unless stated otherwise).
By direct comparison between two operations, Oa and Ob ,
a conflict relationship between them can be determined if
Oa and Ob are independent, targeting the same object, and
changing the same attribute to different values. This type of
relationship is called conflict.
Definition 4: Conflict relation
Given two operations $O_a$ and $O_b$ targeting the same object,
$O_a$ and $O_b$ have the conflicting relation, denoted by
$O_a \otimes O_b$, iff:
• $O_a \parallel O_b$, and
• $Att.type(O_a) = Att.type(O_b)$, and
• $Att.val(O_a) \neq Att.val(O_b)$.
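A small Python sketch of this conflict test, with illustrative operation fields and state vector timestamps, follows:

from dataclasses import dataclass

@dataclass
class Op:
    target: str      # identifier of the target object
    att_type: str    # e.g. "position", "size" or "color"
    att_val: object  # the new attribute value
    sv: tuple        # state vector timestamp

def concurrent(a, b):
    leq = lambda x, y: all(u <= v for u, v in zip(x, y)) and x != y
    return not leq(a.sv, b.sv) and not leq(b.sv, a.sv)

def conflict(a, b):
    # Oa (x) Ob: independent, same object and attribute, different values
    return (a.target == b.target and concurrent(a, b)
            and a.att_type == b.att_type and a.att_val != b.att_val)

oa = Op("G", "position", (10, 10), (1, 0))
ob = Op("G", "position", (90, 40), (0, 1))
print(conflict(oa, ob))   # True: their effects need separate replicas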
Apparently, the effects of conflicting operations cannot co-exist
in the same object. If
$O_a \otimes O_b$ and both operations
target object G, then applying both operations to G will result
in intention violation. The only way to preserve the
intentions of both operations is to make two replica objects
from the original object, and then apply the two conflicting
operations to the two replicas separately. This method is
called object replication. The resulting effect is that
no conflicting operations are applied to the same replica. To
preserve the intentions of Oa and Ob , replicas Ga and Gb
will be made from G, for the application of Oa and Ob re-
spectively. Since the replicas represent the original object, G
will be replaced by Ga and Gb . These two replicas are distinguished
by unique identifiers assigned during replication
so new operations can be generated to edit specific replicas
(how to determine object identifiers will be discussed later).
In contrast to the conflicting relationship, operations which are not conflicting have a compatible relationship. If operations Oa and Ob are not conflicting, then they are compatible, denoted by Oa ⊙ Ob. The effects of compatible operations can be applied to the same object without causing intention violation.
Conflicting/compatible relationships between two or more operations can be expressed by compatibility expressions in which ⊗ and ⊙ are the operators and operations are the operands. Brackets are used to indicate the scope of the operators. For example, the expression (Oa ⊗ Ob) ⊙ Oc means that Oa and Ob conflict and Oc is compatible with both Oa and Ob.
A. Replication effect
Based on the idea of making replica objects to preserve the
intentions of conflicting operations, this section will present a
specification detailing the replication effect which should be
produced. The replication effect of a simple modification operation
O depends on the conflicting/compatible relationship
between O and the operations applied to O's target object.
The notion of a compatible group (set) will be introduced.
A compatible group of an object G, denoted by CG(G), is
the set of simple modification operations applied to G. All
operations in the same compatible group must be
compatible.
Suppose operation O targets object G. Applying O to G is
equivalent to applying O to the compatible group of G. Let
Apply(O, CG(G)) denote the effect of applying O to CG(G). This effect contains one or more compatible groups. If O is compatible with all operations in CG(G), then applying O to CG(G) results in Apply(O, CG(G)) = CG(G) + {O}. The result is a single compatible group containing the operations in CG(G) and O. If a sub-group CF of CG(G) is conflicting with O, then applying O to CG(G) results in two compatible groups, Apply(O, CG(G)) = CG(G), (CG(G) - CF) + {O}. The first group, or replica, contains all operations in CG(G). The second group, or replica, contains O and the operations in CG(G) which are compatible with O.
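These two cases can be captured in a few lines; in this sketch compatible groups are plain Python sets of operations and the conflict test of Definition 4 is passed in:

def apply_to_group(o, cg, conflict):
    """Apply O to compatible group CG(G). Returns one group when O is
    compatible with every operation in CG(G); otherwise returns the
    original group plus a replica holding O and the compatible operations."""
    cf = {x for x in cg if conflict(o, x)}   # the conflicting sub-group CF
    if not cf:
        return [cg | {o}]                    # CG(G) + {O}
    return [set(cg), (cg - cf) | {o}]        # CG(G) and (CG(G) - CF) + {O}

For instance, under the compatibility expression Oa ⊙ (Ob ⊗ Oc) of Example 1 below, apply_to_group(Oc, {Oa, Ob}, conflict) yields [{Oa, Ob}, {Oa, Oc}].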
Example 1: There are three operations with the compatibility expression Oa ⊙ (Ob ⊗ Oc). All three operations target object G, where CG(G) = { }.
(a) Application in the order of Oa, Ob, Oc:
Apply(Oa, { }) = {Oa}; Apply(Ob, {Oa}) = {Oa, Ob}; Apply(Oc, {Oa, Ob}) = {Oa, Ob}, {Oa, Oc}.
(b) Application in the order of Ob, Oc, Oa:
Apply(Ob, { }) = {Ob}; Apply(Oc, {Ob}) = {Ob}, {Oc}, for replicas Gb and Gc respectively; Apply(Oa, {{Ob}, {Oc}}) = {Ob, Oa}, {Oc, Oa}.
Example 1(a) shows the replication effect of applying Oc to
a compatible group containing both conflicting and compatible
operations. The resulting compatible groups are the final effects produced. Example 1(b) shows that, in order to produce the same effect as Example 1(a), Oa needs to be applied
to both Gb and Gc which are replicas of G. From this, it is
observed that for any operation O, if the target object of O
has been replicated, then O needs to be applied to replicas
of G. The term candidate objects will be used to describe
the objects that need to be considered when applying an operation. The candidate objects of operation O are either the target object G of O, or the replicas made from G. O should
be applied to a set of compatible groups corresponding to the
set of candidate objects. This set is called the candidate set.
It follows that the replication effect of O is the compatible
groups produced by applying O to its candidate set.
Definition 5: Candidate set of O
For any operation O, the candidate set of O, denoted by
CS(O), is the set of the compatible groups of O's candidate objects.
Definition 6: Replication effect
The replication effect of operation O is the compatible groups
produced by applying O to its candidate set, denoted by Apply(O, CS(O)).
When a replica is made for Oc , it contains all compatible
operations applied to the original object, as shown in Example
1(a). To produce a consistent effect, Oa is applied to
all replicas which contain operations compatible with Oc , as
shown in Example 1(b). The result is that, for any operation
O, if operation Ox belongs to a compatible
group in CS(O), and Ox is compatible with O, then
there must be a compatible group in the replication
effect of O which contains both O and Ox . The final
effect of Example 1 is such that, Oa fi Ob is reflected in Gb
and Oa fi Oc is reflected in Gc .
Replication is required only when there are conflicting
operations. Conflicting operations are applied to different
replicas of the original object. Since all candidate objects
are replicated from the same original object, for any pair
of compatible groups CGx and CGy in CS(O), there are
operations Ox in CGx and Oy in CGy such that Ox conflicts
with Oy . When applying operation O to its candidate
set, if O can be incorporated into existing compatible groups
in CS(O), then no new replica should be made. This is
shown by the application of Oa in Example 1(b). Therefore, the compatible groups in the replication effect of O are such that, for any pair of compatible groups CG_i and CG_j, where i ≠ j, there must be an operation in CG_i which conflicts with an operation in CG_j.
Compatibility expression          Replication effect
1. Oa ⊗ Ob                        {Oa}, {Ob}
2. Oa ⊗ (Ob ⊙ Oc)                 {Oa}, {Ob, Oc}
3. (Oa ⊙ Ob) ⊗ Oc                 {Oa, Ob}, {Oc}
4. (Oa ⊗ Ob) ⊙ Oc                 {Oa, Oc}, {Ob, Oc}
5. (Oa ⊗ Ob ⊗ Oc) ⊙ Od            {Oa, Od}, {Ob, Od}, {Oc, Od}
6. (Oa ⊗ Ob) ⊙ (Oc ⊗ Od)          {Oa, Oc}, {Oa, Od}, {Ob, Oc}, {Ob, Od}
In summary, three conditions have been presented which
determine the replication effect. It can be shown that given a
group of operations, there is only one valid replication effect.
The table above shows the replication effects for six different combinations of conflicting/compatible relationships. It can be observed that, for expressions 1, 2 and 3, the resulting compatible groups are the groups of operations on either side of the ⊗ symbol. This is to be expected since conflict is the only reason replicas are made. In expressions 4 and 5 there is an operation, Oc, whose compatible scope spans two or more groups of operations with conflicting relationships. If Oc is distributed to each group separated by ⊗, then the replication effect is the groups of operations separated by ⊗. Expression 6 is an extension of expression 4, where there are two groups of conflicting operations separated by ⊙. Distribution is done for each operation in one group with each operation in the other group. For each compatibility expression in the table, the replication effect presented is the only valid replication effect for that expression.
B. Object identification
An object identifier is needed for each object to ensure that
for any operation O generated to edit object G, the same G
or replicas of G can be identified at all sites. When O was
generated, the target object field of O, denoted by Obj(O),
is assigned the object identifier of G. When O is received at
remote site i, Obj(O) is used to search for G at site i. If G
has been replicated at site i, then the replicas of G need to
be identified. For this scheme to work, three properties need
to be maintained:
1. uniqueness - every object at the same site needs to have a unique identifier.
2. traceability - by using the identifier of any object G, the replicas made from G can be determined.
3. consistency - the same object at different sites has the same identifier.
The object identifiers utilize the unique identification properties
of operations' identifiers. Each operation O can be
uniquely identified by an Operation Identifier (Oid), denoted
by Oid(O), composed of a pair (sid; lc), where sid is the identifier
of the site from which O was generated, and lc is the
sum of elements of the state vector associated with O.
For any object G, the Graphics-object Identifier (Gid) for
G, denoted by Gid(G), is composed of a set of operation
identifiers:
The identifier of an operation O is included in the
identifier of object G if either (1) O is the create
operation which created G, or (2) O has been applied
to G and O conflicts with an operation Ox .
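A hedged sketch of this rule (the function and parameter names below are assumptions):

def gid(create_op, applied_ops, conflicts_somewhere):
    """Gid(G): the Oid of the create operation plus the Oid of every
    applied operation that conflicts with some operation. The conflicting
    partner may live on a sibling replica, so the test is supplied by the
    caller as a predicate."""
    return frozenset({create_op.oid} |
                     {o.oid for o in applied_ops if conflicts_somewhere(o)})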
Applying an operation O to an object G may change the identifier of G. Therefore, the object's identifier will be included as part of the effect. The effect of applying O to G, Apply(O, CG(G)), is one or more pairs of (CG, Gid). The first component in the pair is the compatible group of the resulting object. The second component is the set of operations whose identifiers form the resulting object's identifier set. If C is a create operation, then applying C will result in ({ }, {C}).
Example 2: There are four operations. Three compatibility expressions are required to express their relationships: Oa ⊙ (Ob ⊗ Oc), (Oa ⊙ Ob) ⊗ Od and Oc ⊙ Od (1). All four operations target object G. Initially CG(G) = { } and Gid(G) = {C}, where the create operation C created G. Assume Oa has already been applied, i.e. Apply(Oa, ({ }, {C})) = ({Oa}, {C}). The rest of the operations are applied in the order of:
(a) i. Apply(Ob, ({Oa}, {C})) = ({Oa, Ob}, {C})
ii. Apply(Oc, ({Oa, Ob}, {C})) = ({Oa, Ob}, {C, Ob}), ({Oa, Oc}, {C, Oc})
iii. Apply(Od, {({Oa, Ob}, {C, Ob}), ({Oa, Oc}, {C, Oc})}) = ({Oa, Ob}, {C, Ob, Oa}), ({Oa, Oc}, {C, Oc, Oa}), ({Oc, Od}, {C, Oc, Od})
(1) Oc ⊙ Od is required to make this example valid.
[Figure: the execution tree of Example 2(a), from ({ }, {C}) to the final groups ({Oa, Ob}, {C, Ob, Oa}), ({Oa, Oc}, {C, Oc, Oa}) and ({Oc, Od}, {C, Oc, Od}).]
Fig. 1. The execution of operations in Example 2(a)
(b) i. Apply(Od, ({Oa}, {C})) = ({Oa}, {C, Oa}), ({Od}, {C, Od})
ii. Apply(Ob, {({Oa}, {C, Oa}), ({Od}, {C, Od})}) = ({Oa, Ob}, {C, Oa, Ob}), ({Od}, {C, Od})
iii. Apply(Oc, {({Oa, Ob}, {C, Oa, Ob}), ({Od}, {C, Od})}) = ({Oa, Ob}, {C, Oa, Ob}), ({Oa, Oc}, {C, Oa, Oc}), ({Oc, Od}, {C, Od, Oc})
The application of an operation O to its candidate objects may change the identifiers of the resulting candidate objects. After applying O to CS(O), if O is in CG(G), then the identifier of G may be changed as follows:
1. O is compatible with all operations in CS(O); then Oid(O) is not added to Gid(G). This is shown by the application of Ob in Example 2(a)i.
2. O caused the replication of G from G'; then Gid(G) = Gid(G') + {Oid(O)}. This is shown by the application of Oc in Example 2(a)ii, where Oc caused {Oa, Oc} to be replicated.
3. O did not cause replication, but O conflicted with some operation in other candidate objects; then Gid(G) = Gid(G) + {Oid(O)}. This is shown by the application of Ob in Example 2(b)ii. Ob did not cause the replication of {Oa, Ob}; however, Ob conflicted with Od.
After applying O to CS(O), for any candidate object G'' of O, if O is not in CG(G''), then there must be an operation Ox in CG(G'') which conflicts with O. The identifier of G'' is changed as follows:
1. G'' is a replica made from G'; then Gid(G'') = Gid(G') + {Oid(Ox)}. This is shown by the addition of Oid(Ob) in Example 2(a)ii: after applying Oc to CS(Oc), Oid(Ob) is added into the identifier for {Oa, Ob}.
2. G'' is not a new replica; then Gid(G'') = Gid(G'') + {Oid(Ox)}. This is shown by the addition of Oid(Oa) in Example 2(a)iii: after applying Od to CS(Od), Oid(Oa) is added into the identifier for {Oa, Ob}.
If object G is created by operation C, then
fOid(C)g. A create operation only creates one object.
Therefore, a non-replicated object is uniquely identified by
the identifier of its create operation. A replica's identifier
will contain the identifier of the operation which created the
original object and the conflicting operations in its compatible
group. This is because conflicting operations are applied
to different replicas. Therefore, replicas are uniquely identified
by the identifiers of conflicting operations. For exam-
ple, after the execution of two operations, Oa and Ob, where Oa ⊗ Ob and Gid(G) = {Oid(C)}, replicas Ga and Gb are produced with Gid(Ga) = {Oid(C), Oid(Oa)} and Gid(Gb) = {Oid(C), Oid(Ob)}.
Any replica made from G would have Oid(C) in its identification
set. Suppose Oa is the cause of replication for Ga .
Any operation targeting Ga must be causally after Oa and
hence compatible with Oa . Therefore, any replica made from
Ga would contain Oid(C) and Oid(Oa ). For any three objects
G, G 0 and G 00 , where G 0 is a replica of G and G 00 is a
replica of G', Gid(G) is a subset of Gid(G') and Gid(G') is a subset of Gid(G''), i.e. Gid(G) ⊂ Gid(G') ⊂ Gid(G''). This ensures traceability.
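This subset property makes the replica-of test a one-liner; in this sketch each object is assumed to carry its Gid as a Python frozenset:

def descends_from(gid_g, gid_g2):
    """True iff G'' was (transitively) replicated from G:
    Gid(G) is a proper subset of Gid(G'')."""
    return gid_g < gid_g2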
If operation Oa is found to be conflicting with Ob at a site,
then Oa and Ob must be conflicting at all sites. For any combination
of conflicting/compatible relationships there is only
one valid replication effect. To achieve the same replication
effect at all sites, the same operations must be applied to the
same object. The identifier of an object is determined by the
operations applied to that object and their conflicting rela-
tionships. Since both of these are the same at all sites, object identifiers are consistent for objects at all sites.
C. Revised conflict definition
The definition of conflict in Definition 4 requires two operations to target the same object, i.e. Obj(Oa) = Obj(Ob). This definition is based on operations targeting non-replicated objects. After replication, two operations may conflict even though their target object identifiers are not equal. Consider the situation where Oa ⊗ Ob and Obj(Oa) = Obj(Ob) = {Oid(C)}. The execution effects of Oa and Ob are:
({Oa}, {C, Oa}) and ({Ob}, {C, Ob}). Operation Oc is generated after the replication, i.e. Oc is dependent on Oa and Ob, and targets the replica made for Ob, i.e. Obj(Oc) = {Oid(C), Oid(Ob)}. Operation Od is independent of the other three operations and Obj(Od) = {Oid(C)}. Od is compatible with Oa and Ob. Both Od and Oc should be applied to the replica made for Ob. However, it is possible that Att.type(Oc) = Att.type(Od) and Att.val(Oc) ≠ Att.val(Od). Since Od and Oc are independent, applying them to the same object would result in intention violation. The desirable result is that the conflict between Oc and Od is detected and a replica is made for each operation, as shown in Figure 2.
Two independent operations need to be compared for conflict if one operation targets an object which is a candidate object of the other. If the target object of Oa is a candidate of the target object of Ob, then either Obj(Oa) ⊆ Obj(Ob) or Obj(Ob) ⊆ Obj(Oa). The direct conflict relation definition below revises the original conflict definition to include conflict checking for operations whose target objects are equal to, or subsets of, each other.
[Figure: the executions of Oa, Ob, Oc and Od, starting from ({ }, {C}) and ending with the replicas ({Oa, Od}, {C, Oa, Od}), ({Ob, Oc}, {C, Ob, Oc}) and ({Ob, Od}, {C, Ob, Od}).]
Fig. 2. Obj(Od) ⊂ Obj(Oc); however, their conflicting effect results in replication.
Definition 7: Direct conflict relation
Given two operations Oa and Ob, Oa and Ob have the direct conflicting relation, denoted by Oa ⊗_D Ob, iff:
• Oa ∥ Ob,
• Obj(Oa) ⊆ Obj(Ob) or Obj(Ob) ⊆ Obj(Oa),
• Att.type(Oa) = Att.type(Ob), and
• Att.val(Oa) ≠ Att.val(Ob).
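Definition 7 differs from Definition 4 only in its target test; a sketch, reusing the earlier ModOp model:

def direct_conflict(oa, ob):
    """Definition 7: Oa (x)_D Ob. The target identifiers need only be equal
    or one a subset of the other, so an operation is also compared against
    operations on replicas of its target."""
    return (independent(oa, ob)
            and (oa.obj <= ob.obj or ob.obj <= oa.obj)
            and oa.att_type == ob.att_type
            and oa.att_val != ob.att_val)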
The effect scope of an operation is limited to its candidate
objects, i.e. an operation may only change its candidate ob-
jects. Operations which do not target the same object have
different effect scope. If Obj(Ob ) ae Obj(Oa ), then the scope
of Oa is only a subset of Ob 's scope. If these operations are
applied in different order, then inconsistency may occur. This
is illustrated in Example 3.
Example 3: There are four operations, Oa, Ob, Oc and Od. Operations Oa, Ob and Od are independent and target object G, where Gid(G) = {C}. The compatibility expression for these three operations is (Oa ⊙ Od) ⊗_D Ob. Oc is dependent on Oa and Ob, but is independent of Od, and Obj(Oc) = {C, Ob}. Oc is not directly conflicting with Od. Assume Oa and Ob have been applied to produce the effect of: ({Oa}, {C, Oa}) and ({Ob}, {C, Ob}). The following illustrates two different orders of application:
(a) i. Apply(Oc, {({Ob}, {C, Ob})}) = ({Ob, Oc}, {C, Ob})
ii. Apply(Od, ...) = ({Oa, Od}, ...), ({Ob, Oc}, ...), ({Oc, Od}, ...)
(b) i. Apply(Od, ...) = ({Oa, Od}, ...), ({Ob}, ...)
ii. Apply(Oc, ...) = ({Oa, Od}, ...), ({Ob, Oc}, ...)
In Example 3, operations Oc and Od have different scopes, CS(Oc) ⊂ CS(Od). If Oc is applied first, then Od can be applied to {Ob, Oc} since Od ⊙ Oc. This results in the replication of {Oc, Od}, as shown in Example 3(a). If Od is applied first, then Od cannot be applied to {Ob} since Od ⊗_D Ob. Od is applied to {Oa} to produce {Oa, Od}. Oc cannot be applied to {Oa, Od} because it is outside Oc's scope, as shown in Example 3(b). Inconsistent results are produced from these two different orders of execution.
[Figure: the executions of Oa, Ob, Od and Oc, starting from ({ }, {C}) and ending with ({Oa, Od}, {C, Oa, Od}) and ({Ob, Oc}, {C, Ob, Oc}).]
Fig. 3. The consistent result of Example 3 by detecting the indirect conflict of Oc and Od.
Between the two effects in Example 3, effect (b) is the
more desirable effect. This is because Oc is generated to edit
a specific replica of G so its effect should only be limited to
that replica. The result of {Ob, Oc} and {Oc, Od} gives the impression that Oc has been applied to more than one replica made from G, since Obj(Od) ⊂ Obj(Oc).
Definition 8: Indirect conflict relation
Given two independent operations, Oa and Ob, Oa and Ob have the indirect conflicting relation, denoted by Oa ⊗_I Ob, iff there is an operation Oc and an object G such that:
• Oc ⊗_D Oa, and
• Oc and Ob both belong to CG(G).
The indirect conflict relationship is introduced to solve this
inconsistency problem and to produce the desirable effect.
According to the Definition 8, Od and Oc in Example 3 are
indirectly conflicting. If Od is applied after Oc , all operations
in fOb ; Ocg conflict with Od , so Od cannot be applied to it.
The effect of Example 3 with indirect conflict is as shown in Figure 3.
Based on direct and indirect conflicts, the conflicting and
compatible relationships are redefined in Definition 9.
Definition 9: Compatibility relationship
Given two operations Oa and Ob, Oa and Ob are:
• conflicting, denoted by Oa ⊗ Ob, iff Oa ⊗_D Ob or Oa ⊗_I Ob;
• compatible, denoted by Oa ⊙ Ob, iff Oa does not conflict with Ob.
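Definitions 8 and 9 combine into a single test; the rendering of indirect conflict below follows the reconstruction of Definition 8 given above and is therefore only a sketch:

def indirect_conflict(oa, ob, groups):
    """Oa (x)_I Ob: some operation Oc directly conflicts with Oa while
    sharing a compatible group with Ob (groups: sets of operations)."""
    return any(direct_conflict(oc, oa)
               for cg in groups if ob in cg
               for oc in cg if oc is not ob)

def conflicts(oa, ob, groups):
    """Definition 9: Oa (x) Ob iff a direct or an indirect conflict holds;
    Oa (.) Ob is simply the negation of this test."""
    return direct_conflict(oa, ob) or indirect_conflict(oa, ob, groups)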
The object identification scheme relies on the definition
of conflict. Due to the changes made to the definition of
conflict, the object identification scheme also needs to be re-
vised. The identifier of a conflicting operation is added to the
object's identifier because conflict causes replication. How-
ever, with the new definition, indirect conflict does not cause
replication. Therefore, for any operation O and object
G where O is in CG(G), Oid(O) needs to be in Gid(G) iff O directly conflicts with some operation Ox.
D. Object replication algorithm
This section presents an algorithm to apply and preserve
the intention of a newly arrived simple modification opera-
tion, Onew . The first step in applying Onew is to determine
Onew's candidate set. A candidate object of Onew is either:
1. the object G which Onew is targeting, where Obj(Onew) = Gid(G); or
2. a replica G' made from G, where Obj(Onew) ⊂ Gid(G').
In summary, for any object G, CG(G) is in CS(Onew) iff Obj(Onew) ⊆ Gid(G).
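Determining CS(Onew) is thus a single subset test over the live objects; a sketch, assuming each object records its compatible group as cg and its identifier as a frozenset gid:

def candidate_set(o_new, live_objects):
    """CS(Onew): the compatible groups of every object G with
    Obj(Onew) a subset of (or equal to) Gid(G)."""
    return [g.cg for g in live_objects if o_new.obj <= g.gid]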
Compatible groups in the candidate set can be classified
into three types. For any compatible group CG_i in CS(Onew):
1. if every operation in CG_i conflicts with Onew, then CG_i is said to be fully conflicting with Onew, denoted by CG_i ⊗ Onew;
2. if no operation in CG_i conflicts with Onew, then CG_i is said to be fully compatible with Onew, denoted by CG_i ⊙ Onew (2);
3. if CG_i contains operations both conflicting and compatible with Onew, then CG_i is said to be partially conflicting with Onew.
All CG_i's which are fully conflicting with Onew are put into a conflicting object set, CF. All CG_i's which are fully compatible with Onew are put into a compatible object set, CP.
Any CG_i which is partially conflicting with Onew is partitioned into two subsets: CG'_i, which contains all operations in CG_i compatible with Onew, and CG''_i, which contains all operations in CG_i conflicting with Onew. All CG''_i are added to CF. All CG'_i are put into a replicated compatible set, RCP. This is shown in step 2 of Algorithm 1.
If an operation Ox in any CG_i is compatible with Onew, then there has to be a compatible group in the replication effect of Onew which contains both Ox and Onew. This can be achieved as follows:
• for any CG_j in CP, add Onew to CG_j; and
• for any CG_k in RCP, add Onew to CG_k, where each CG_k corresponds to a new replica.
However, with this method of applying Onew , some resulting
compatible groups may not contain any operation which
is conflicting with other resulting compatible groups. This
problem is illustrated in Examples 4 and 5.
Example 4: There are four operations with the compatibility expression (Oa ⊗_D Ob ⊗_D Onew) ⊙ Oc. All four operations are independent and target the same object. Assume Oa, Ob and Oc have been applied when Onew arrives. The candidate set for Onew is {{Oa, Oc}, {Ob, Oc}}.
1. Making a replica of {Oa, Oc} produces {Oa, Oc} and {Onew, Oc}.
2. Making a replica of {Ob, Oc} produces {Ob, Oc} and {Onew, Oc}.
(2) A compatible group of { } is also fully compatible with Onew.
Example 5: There are four operations with the compatibility expression (Oa ⊗_D (Ob ⊙ Onew)) ⊙ Oc. All four operations are independent and target the same object. Assume Oa, Ob and Oc have been applied when Onew arrives. The candidate set for Onew is {{Oa, Oc}, {Ob, Oc}}.
1. Making a replica of {Oa, Oc} produces {Oa, Oc} and {Onew, Oc}.
2. Adding Onew to {Ob, Oc} produces {Ob, Oc, Onew}.
In Example 4, Onew is partially conflicting with all compatible
groups in CS(Onew ). Applying Onew has produced
two compatible groups which are identical. In Example 5,
Onew is fully compatible with a compatible group and partially
conflicting with the other group in CS(Onew ). Applying
Onew has produced two compatible groups in which one is
the subset of the other. In both examples, the results contain
two compatible groups that are compatible with each other.
Only one compatible group is needed to accommodate the
effects of operations in both groups.
This problem of producing unnecessary replicas can be avoided by eliminating the unnecessary compatible groups in RCP. If any CG_k in RCP is equal to or a subset of any CG_j in CP or RCP, then all operations in CG_k are compatible with all operations in CG_j. Therefore, the replica with CG_k does not need to be made, and CG_k should be removed from RCP. This is shown in step 3 of Algorithm 1.
There is a special situation where all compatible groups in
the candidate set are fully conflicting with Onew . When this
happens CP and RCP will both be empty sets. Since no operation
is compatible with Onew , a compatible group/replica
containing fOnewg should be constructed.
Algorithm 1: Object Replication Algorithm
Onew: the new modification operation to be executed.
CF: conflicting object set, initially { }.
CP: compatible object set, initially { }.
RCP: replicated compatible set, initially { }.
1. Get the candidate set, CS := CS(Onew).
2. For each CG_i in CS do
if Onew ⊗ CG_i then CF := CF + {CG_i}
else if Onew ⊙ CG_i then CP := CP + {CG_i}
else split CG_i into CG'_i and CG''_i such that:
Onew ⊙ CG'_i and Onew ⊗ CG''_i;
CF := CF + {CG''_i}; RCP := RCP + {CG'_i}.
3. Merge compatible subsets or equal sets:
for any pair CG_i and CG_j in CP or RCP do
if CG_i ⊆ CG_j then remove CG_i.
4. If CP = { } and RCP = { } then
make CG_{n+1} := {Onew};
else
for each CG_i in CP do CG_i := CG_i + {Onew};
for each CG_i in RCP do make a new replica with CG_i := CG_i + {Onew}.
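A compact Python rendering of Algorithm 1 might look as follows; compatible groups are plain sets of operations, the Definition-9 test is passed in as a two-argument callable, and the result is simply the list of resulting compatible groups. This is a sketch under those assumptions, not the GRACE implementation.

def object_replication(o_new, cs, conflicts):
    """Algorithm 1 (sketch). cs is the candidate set as a list of compatible
    groups (sets of operations). Fully and partially conflicting groups are
    left untouched (they form CF); fully compatible groups absorb Onew; the
    compatible parts of partially conflicting groups seed new replicas."""
    untouched, cp, rcp = [], [], []
    for cg in cs:                                 # step 2: classify
        bad = {x for x in cg if conflicts(o_new, x)}
        if not bad:
            cp.append(set(cg))                    # fully compatible
        elif bad == set(cg):
            untouched.append(set(cg))             # fully conflicting
        else:                                     # partially conflicting
            untouched.append(set(cg))             # the object itself is kept
            rcp.append(set(cg) - bad)             # CG'_i may seed a replica
    # Step 3: drop any CP/RCP group equal to or contained in another,
    # so that no unnecessary replica is made (cf. Examples 4 and 5).
    tagged = [('cp', g) for g in cp] + [('rcp', g) for g in rcp]
    kept = [(k, g) for i, (k, g) in enumerate(tagged)
            if not any(g < h or (g == h and i > j)
                       for j, (_, h) in enumerate(tagged) if j != i)]
    cp = [g for k, g in kept if k == 'cp']
    rcp = [g for k, g in kept if k == 'rcp']
    # Step 4: incorporate Onew.
    if not cp and not rcp:
        return untouched + [{o_new}]              # every candidate conflicts
    return (untouched
            + [g | {o_new} for g in cp]           # extended in place
            + [g | {o_new} for g in rcp])         # new replicas

Running this on the data of Example 4 returns [{Oa, Oc}, {Ob, Oc}, {Oc, Onew}], and on Example 5 it returns [{Oa, Oc}, {Ob, Oc, Onew}], matching the replication effects derived above.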
Previously executed operations are kept for conflict check-
ing. These operations are kept in a list called History Buffer
(HB). To prevent the history buffer from becoming infinitely long, a garbage collection process is carried out periodically to remove any operation Ox for which all operations independent of
Ox have been applied. Each object G maintains a list of
pointers which point to operations in the history buffer which
have been applied to G. HB(G) denotes the list of operations
applied to G.
Replicas are made from an existing object. Every CG_i in RCP comes from a candidate object, which is the base object from which the replica for CG_i will be made. If CP and RCP are both empty, any candidate object can be the base object from which {Onew} will be created. For a compatible group CG_i and base object G, the replica object is made as follows:
1. make an exact copy of G, call it G';
2. undo operations in HB(G') in the reverse execution order until all operations remaining in HB(G') are members of CG_i;
3. redo any operations in CG_i which have been undone from HB(G') in step 2.
During the undo process, the operation identifiers for operations not in CG_i are also removed from Gid(G').
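A sketch of this copy-undo-redo procedure follows; the undo and redo callbacks are assumptions here, since their implementation is editor-specific.

import copy

def make_replica(base, cg_i, history, undo, redo):
    """Create the replica for compatible group CG_i from base object G.
    history is HB(G) in execution order; undo/redo apply an operation's
    inverse/forward effect to the copy."""
    g2 = copy.deepcopy(base)                  # step 1: exact copy of G
    remaining = list(history)
    undone = []
    # Step 2: undo from the tail until every remaining op is in CG_i.
    while not all(op in cg_i for op in remaining):
        op = remaining.pop()
        undo(g2, op)
        undone.append(op)
    # Step 3: redo, in their original order, the CG_i members undone above.
    for op in reversed(undone):
        if op in cg_i:
            redo(g2, op)
    return g2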
The steps needed to assign object identifiers have been deliberately
separated from the object replication algorithm to
avoid introducing extra complexity into the algorithm. After
Onew has been executed via the object replication algorithm,
if the resulting CF set is f g, then Onew does not conflict with
any operation. Otherwise, the following needs to be done:
1. Construct a set DC which contains Onew and any operation
directly conflicting with Onew .
2. For any operation Ox in DC and object G where Ox is in CG(G), add Oid(Ox) to Gid(G).
Compatible groups in CF contain all operations directly or
indirectly conflicting with Onew . So the search for operations
directly conflicting with Onew can be limited to CF . The operations
which are directly conflicting with Onew are limited
to the candidate objects of Onew (the candidate objects now
include the replicas resulting from the application of Onew).
So the search for the objects whose identifiers need to be
changed can be limited to the candidate objects for Onew .
Various optimizations are possible, but beyond the scope of
this paper.
E. An integrated example
In this section, an example will be presented to illustrate
the object replication algorithm, object identification scheme
and the revised compatibility relationship.
There are six operations, Oa, Ob, Oc, Od, Oe and Of. Four of the operations are independent of each other, with the compatibility expression ((Oa ⊙ Od) ⊗_D Ob) ⊙ Oc. All four operations target object G created by operation C, i.e. Gid(G) = {Oid(C)}. Oe and Of are generated causally after Oa and Ob. Oe and Of each target a replica: Obj(Oe) = {Oid(C), Oid(Oa)} and Obj(Of) = {Oid(C), Oid(Ob)}. Both operations are compatible with Oc, but conflict with Od: Oe ⊗_D Od and Of ⊗_I Od. Initially, CG(G) = { }. Two different orders of execution will be illustrated.
Execution order 1:
After merging:
Execution order 2:
After merging:
To apply an operation, its candidate set is first determined
by using object identifiers. Then those compatible groups are
classified into CF;CP and RCP sets. By using these sets, the
replication effects are produced. For both execution orders, merging of compatible groups is required to avoid producing unnecessary replicas. This is shown during the execution of Oe in execution order 1 and of Od in execution order 2.
The final results for both execution orders are consistent
and satisfy the conditions for replication effect. This example
is used to illustrate various techniques used to produce
the replication effect. For such a complex combination of
conflicting/compatible operations to occur during editing is
possible but unlikely.
IV. Discussion and conclusion
This paper investigated the problem of operational conflict
in object-based collaborative graphics editing systems.
First, our paper described the locking and serialization approaches to the problem of conflicting operations; both approaches fail to preserve the intentions of all operations. Our paper then presented a new replication approach that preserves the intentions of all operations.
The locking approach to preventing conflict from occurring
suffers the serious problem of slow response times over wide
area networks. The optimistic operation execution approach
solves this problem by executing operations locally as soon
as they are generated, before sending them to remote sites.
However, with this approach, conflicting operations may be
generated. The existing approach of serializing the conflicting
operations results in intention violation where the execution
of an operation changes the effect of a concurrent operation.
This may lead to confusion. Suppose user 1 generates Oa to move object G to X while, at the same time, user 2 generates Ob to move G to Y, and the system decides that Ob has the higher priority (or comes later in the serialization order). The end result at all sites will be consistent, i.e. G will be at Y, but the whole process can be confusing to the users.
executed at the local site, so they will assume the same execution
effect at all sites. However, the effect of Oa will never
appear to user 2. User 1 saw the execution of Oa followed
by Ob , so he/she may think that user 2 saw G at X and still
decided that Y would be a better location. Another possible
source of confusion is that the users may communicate and
find out that user 2 did not see Oa , so they may think there
are some network problems which have led to the mysterious
disappearance of Oa .
The replication approach solves this problem. By applying
Oa and Ob to replicas of G, user 1 will see that user 2 wants G
at Y , and user 2 will see that user 1 wants G at X. This provides
visual display of what each user wants and conveys the
intention of each user. By using this information, users can
compare the solution and decide on the best result. Instead
of the system deciding for the user based on unrelated information
(e.g. the serialization order), the replication approach
lets the users decide on the outcome by providing them with
the required information.
Tivoli [11] is a collaborative editing system that uses object
replication to resolve conflict. However, at the user
level, Tivoli behaves like a bit-map based informal scribbling
medium (i.e. a whiteboard). Hence, the operations
supported by Tivoli are quite different from the ones supported
by object-based graphics editing systems. At the implementation
level, all Tivoli objects are immutable, which
means they can only be created and deleted but not modi-
fied. To apply an operation to an object, that object is first
deleted, then an object with the new attribute values is cre-
ated. When two operations are concurrently applied to the
same object, that object will be deleted, and two new objects
will be created (one for each operation). This method does
not distinguish whether these two operations are compatible
or conflicting. Therefore, it does not allow compatible operations
to be applied to the same object (e.g. concurrent move
and fill operations cannot be applied to the same object).
This causes unnecessary replicas to be made.
With the replication effect of the GRACE system, conflicting
operations are applied to different replicas while compatible
operations targeting the same object are applied to the
same object. A replica is made only for conflicting opera-
tions, so no unnecessary replica is created. An object identification
scheme is introduced to uniquely and consistently
identify non-replicated and replicated objects. The property
of object identifier allows the determination of which replicas
an object is replicated into. This property is utilized by the
object replication algorithm to produce the replication effect.
Replication alone is not a complete solution for resolving
conflicts. Other supporting features should be integrated
into the system to work in conjunction with object replica-
tion. A group awareness mechanism should be devised to
help users preventing conflict. A conflict awareness mechanism
is needed to inform users that a conflict has occurred.
It should provide users with the information about the conflicting
operations, the replicas, and the users who caused the
conflict.
The simple modification operations are the most commonly
used operations in object-based graphics editing sys-
tems. It is expected that some other operations can be implemented as simple modification operations if additional object attributes are introduced. For example, a group attribute can be introduced to indicate the group an object belongs to; this can be used to implement grouping and ungrouping operations. Work is under way to investigate several other types of operations, in order to extend these findings to more advanced object-based graphics editing systems.
Acknowledgments
The work reported in this paper has been partially supported
by an ARC (Australian Research Council) Small Grant
and a Large Grant (A49601841).
--R
Concurrency control in groupware sys- tems
Groupware: some issues and experiences.
An assessment of group support systems experiment research: Methodology and results.
Real time groupware as a distributed system: concurrency control and its effect on the inter- face
Issues and experiences designing and implementing two group drawing tools.
Why CSCW applications fail: problems in the design and evaluation of organizational interfaces.
Supporting collaborative writing of hyperdocuments in SEPIA.
LICRA: A replicated-data management algorithm for distributed synchronous groupware application
Groupdesign: shared editing in a heterogeneous environment.
Some design principles of sharing in Tivoli
Implicit locking in the Ensemble concurrent object-oriented graphics editor
GroupGraphics: prototype to product.
Operational transformation in real-time group editors: Issues
A generic operation transformation scheme for consistency maintenance in real-time cooperative editing systems
Achieving convergence
Groupware grows up.
WSCRAWL 2.0: A shared whiteboard based on X- Windows
--TR
Why CSCW applications fail: problems in the design and evaluation of organizational interfaces
Concurrency control in groupware systems
Groupware: some issues and experiences
Supporting collaborative writing of hyperdocuments in SEPIA
Implicit locking in the ensemble concurrent object-oriented graphics editor
Real time groupware as a distributed system
LICRA
A generic operation transformation scheme for consistency maintenance in real-time cooperative editing systems
Achieving convergence, causality preservation, and intention preservation in real-time cooperative editing systems
Operational transformation in real-time group editors
--CTR
David Chen , Chengzheng Sun, Optional and responsive locking in collaborative graphics editing systems, ACM SIGGROUP Bulletin, v.20 n.3, p.17-20, December 1999
Liyin Xue , Mehmet Orgun , Kang Zhang, A multi-versioning algorithm for intention preservation in distributed real-time group editors, Proceedings of the twenty-sixth Australasian conference on Computer science: research and practice in information technology, p.19-28, February 01, 2003, Adelaide, Australia
Du Li , Limin Zhou , Richard Muntz, The gods must be crazy: a matter of time in collaborative systems, ACM SIGGROUP Bulletin, v.20 n.3, p.21-25, December 1999
Mihail Ionescu , Ivan Marsic, Tree-Based Concurrency Control in Distributed Groupware, Computer Supported Cooperative Work, v.12 n.3, p.329-350,
David Chen , Chengzheng Sun, Undoing any operation in collaborative graphics editing systems, Proceedings of the 2001 International ACM SIGGROUP Conference on Supporting Group Work, September 30-October 03, 2001, Boulder, Colorado, USA
Operation Propagation in Real-Time Group Editors, IEEE MultiMedia, v.7 n.4, p.55-61, October 2000
Chengzheng Sun , David Chen, Consistency maintenance in real-time collaborative graphics editing systems, ACM Transactions on Computer-Human Interaction (TOCHI), v.9 n.1, p.1-41, March 2002
Nicolas Bouillot , Eric Gressier-Soudan, Consistency models for distributed interactive multimedia applications, ACM SIGOPS Operating Systems Review, v.38 n.4, p.20-32, October 2004
Sandy Citro , Jim McGovern , Caspar Ryan, Conflict management for real-time collaborative editing in mobile replicated architectures, Proceedings of the thirtieth Australasian conference on Computer science, p.115-124, January 30-February 02, 2007, Ballarat, Victoria, Australia | distributed computing;concurrency control;collaborative editing;graphics editing;consistency maintenance |
320322 | Use of Virtual Science Park resource rooms to support group work in a learning environment. | This paper presents a detailed evaluation on the acceptability of a range of synchronous and asynchronous collaborative tools provided within the Virtual Science Park (VSP) for group work in a learning environment. In this study, the VSP was used to provide a web-based 'Resource Room' adopting the familiar 'folder' metaphor for structuring and linking resources, and a number of different user interfaces for interaction and sharing information. A list of criteria is established for the evaluation. By using scenario testing and structured questionnaires, qualitative feedback was collected from 43 Masters students. The findings reinforce and add to the concerns highlighted in other studies, in particular, the importance for shared awareness, privacy, conventions for interaction and the provision of an effective multimedia environment. More attention will be needed in these areas for effective use of these groupware tools. | INTRODUCTION
The aim of this paper is to explore the effective use of
a range of groupware tools provided in the Virtual
Science Park (VSP) for group work in a potentially
distributed learning environment. It examines the level
of support being offered to students in order to gain
access to tutors and for students to work together in
small groups outside the 'traditional' teaching
environment. The VSP has been developed at the
University of Leeds since 1995 [7,24]. It uses a range
of information and communications technologies (ICT)
to facilitate technology transfer from university to
industry. Only a subset of its functionality was used in
this study.
The motivation behind this paper was driven by two
areas of development. Firstly, there is the changing
role of advanced ICT in today's societies. Applications
such as electronic mail and the World-Wide-Web have
diffused into all walks of life. The concept of
electronic mail was first developed in the 1970s
ARPANET project. It took around a decade to become
a commercial product, mainly for big organisations,
and another decade to reach schools and homes in the
western world [17]. World-Wide-Web, on the other
hand, has a much shorter history. It was first invented
in 1990 by Berners-Lee as part of a CERN (European
Particle Research Centre) project and took no more
than 5 years to reach different communities.
Interestingly, real-time collaborative tools (such as
desk-top conferencing systems and joint authoring
tools) have received a rather mixed reception. A
number of research projects in the Computer
Supported Co-operative Work (CSCW) and Human-Computer
Interaction (HCI) communities have been
examining the systems and the human issues
[1,3,18,28,30]. It would be informative to find out to
what extent these research ideas have been adopted in
today's commonly-available commercial products.
The second driving force behind this study is the
increasing demand on universities to deliver flexible
and timely training at the tertiary level and beyond.
This comes in a range of different forms such as
distance learning, continuing professional education,
life-long learning and industry-led programmes. It is
likely that most of these students might not be campus-based
and the provision of training or support will be
on-demand. This opens up a range of new challenges
for university teachers - for examples, how to deliver
effective support 'on-demand' and how some of the
innovative forms of teaching, such as the use of group
work, could be conducted in this environment.
The rest of the paper begins by explaining the
background to the VSP and how the resource room
was setup which provided the basis for the trial. The
interaction required by co-operative learning is
outlined, and issues raised in other related CSCW/HCI
projects are discussed. The paper then reports on an
evaluation of how well the collaborative tools
(synchronous and asynchronous) were received in a
trial involving students and lecturers responsible for a
Masters-level module. The feedback was collected
using scenarios and questionnaires. This evaluation
was undertaken against a framework for systematic
analysis. The outcome will inform the designers and
potential users of these collaborative systems.
The Virtual Science Park has been created out of a
research and development programme in the
interdisciplinary Centre for Virtual Working Systems
at the University of Leeds. It is now being exploited
commercially via a majority-owned University
company, VWS Ltd. The Virtual Science Park
provides an on-line, Web-based environment for its
tenants to develop their businesses. Tenants use the
VSP for a variety of services. The most common are
information brokerage, on-line consultancy, support
for collaborative projects and, increasingly, graduate,
professional or executive education and training.
The VSP tool-suite is constantly evolving but includes,
in addition to the browser facilities inherent in the
rooms' metaphor, a range of knowledge-management
and collaborative working tools which allow tenants to
manage and exploit their knowledge resources. These
tools include structured directory services fed by an
underlying information model, wide exposure to search
engines, a document management system that allows
full control over the upload and retrieval of any type of
electronic multimedia content, bulletin boards, alert
services and integrated real-time collaborative working
tools. Additionally, there is a powerful search
capability across all information entities in the VSP
directory.
Drawing on experience gained through pilot projects in
work-based learning that involved partnership between
employees at the workplace, industrial mentors and
university tutors, the VWS team have refined the
provisions in the VSP to offer full support for
customer-orientated graduate, professional and
executive education and training. The service is
specifically designed to enable small groups of learners
to work in their own time and at their own
convenience, with help from industrial and academic
mentors, and to have access to a relevant set of
learning resources. The VSP is ideally suited to the
development of high-level skills to small groups. It is
not designed for the mass distribution of education to
large groups.
For the providers of educational services, whether
these be traditional academic tutors or company staff-
development trainers, the VSP offers a range of
services that allow the providers to manage their
resources, to dynamically grow them and to facilitate
re-use of material in different contexts. For the
consumers of educational services there is easy and
focused access to relevant resources, whether these are
information resources or domain experts, opportunities
for formal and informal group work and for social
interaction. The VSP offices with integrated
videoconferencing and collaborative working tools
support real-time supervision and tutorials, whilst
discussion forums support asynchronous discussions
within and between learners and tutors.
The VSP staff will help tenants to customise a
complete on-line learning environment which will
draw in relevant combinations of the VSP tool kit.
Additionally, because the learning environment is part
of the wider science park provision, there is access to
the full range of expertise and knowledge resources
publicly available in the VSP.
The VSP resources are now being used by several
major corporate clients for the career development of
their senior and middle management staff. They are
also being used by a consortium of 3 universities, 24
companies and the relevant trade association dedicated
to strategic visioning and scenario planning for senior
executives in small and medium-sized packaging and
associated industries. The VSP is also being used to
support an international partnership involving a British
university, a German university and a global IT
company to develop on-line MSc modules. The
systems have also been extensively and critically
evaluated by academic staff and a post-graduate group
of about 45 students in Leeds, the findings from which
are reported below.
A RESOURCE ROOM FOR GROUP LEARNING
In the context of a learning environment, a resource
room is a virtual place where students can access
electronic learning resources provided by the
lecturers/tutors, find peer and tutor support, as well as
a place to 'work' with other students. If desired, it can
also be used for students to deposit the relevant
material that they obtained or created. A resource room
uses the familiar 'folder' metaphor for structuring
resources (uploaded files, web links, and textual
notes). A student will gain access to the VSP using a
web browser and a password. Appropriate software
and hardware, such as Microsoft NetMeeting
collaborative tools and audio/video capture cards for
videoconferencing, are required for synchronous
interaction.
In the VSP, the managers of a resource room (usually
the lecturers/tutors) decide how to subdivide the
workspace, what functionality to include and how to
define the access controls within the area. For the
Masters module in this study, a set of folders was
provided containing various resources. These included:
. areas for working - where students would find
information about group projects and their private
study room for group work. Only members of the
group and the tutors could access a group study
room. Within each study room, students had a
discussion board for their own use and a contact
area for initiating Microsoft NetMeeting sessions
with the other group members. The group members
could decide how to structure their shared
workspace but they were not allowed to change the
access permission enabling tutors to monitor the
resource room use. Figure 1 shows a view of the
resource room in study area of group B. There are
folders for the group contacts, project work, and a
link to the groups' bulletin board.
. the ability to contact the tutor - where students can
either directly initiate a conferencing session with
the tutor/ email him/her or enter the tutor's virtual
office and use the facilities provided there (see
figure 2).
. learning materials - where students find lecture
slides and pointers to papers or web sites classified
according to subject areas. Figure 3 shows a view
of a few of the resources available in the 'Distance
Learning' folder.
The resources described above provided an
environment through which the use of asynchronous
facilities (such as email, discussion board and shared
document areas) and synchronous facilities available in
a typical desk-top conferencing system (such as chat
and shared applications, with or without audio/video
support) could be explored. A range of viewers was
also provided for viewing documents which were
produced using different tools.
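As a caricature of the access rules described above (class-wide folders open to the whole module, group study rooms restricted to group members and tutors), one might write something like the following; all names are hypothetical and none of this reflects the VSP implementation.

def can_read(user, folder, group_members, tutors):
    """Group study rooms are visible only to that group and the tutors;
    other resource-room folders are open to the whole class."""
    if folder.get("group") is None:       # e.g. the learning-materials folders
        return True
    return user in tutors or user in group_members[folder["group"]]

# Example: a student in group B can read group B's study room but not
# group A's; a tutor can read both.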
This set-up echoes the 'Common Information Space
(CIS)' concept articulated by Bannon and Bodker [2].
It also highlighted the 'dialectical nature' of CIS -
openness versus closure - by the way in which tutors
provide information in the subject folders which are
then open to the whole class. However, the facilities
and information in each group study room were only
available to the members of that group (and under the
prying eyes of the tutors).
It is also noted that the asynchronous environment in
the resource room is similar in some ways to
applications such as 'teamroom', based on
Lotus/Domino [9] and BSCW [5].
Figure
1. The resource room in study group B's area
Figure 2. Tutor's office - the collaborate link initiates a NetMeeting session with the tutor
Figure
3. View of resources in the 'Distance Learning'
folder
ELEMENTS OF CO-OPERATIVE LEARNING
In order to appreciate the level of support needed from an
'ideal' system in a virtual learning environment, it is useful
to outline the characteristics of co-operative learning. It is
not within the scope of this paper to explore the philosophy
behind this kind of innovative teaching method, but the
discussion is focused on the interactive style it requires
amongst the students.
Johnson, Johnson and Smith [21] described co-operative
learning as "instruction that involves students working in
teams to accomplish a common goal". Co-operative
learning is not a synonym for students working in groups. It
must have the following elements in the learning exercise:
. the outcome depends on every member contributing to
the task;
. all students in a group are held accountable for doing
their share of the work and for mastery of all of the
material to be learned;
. it requires students to interact in terms of providing one
another with feedback, challenging each other's
conclusions and reasoning, and perhaps most
importantly, teaching and encouraging one another;
. students are encouraged and helped to develop and
practice trust-building, leadership, decision-making,
communication, and conflict management skills; and,
. members periodically assess what they are doing well
as a team, and identify changes they will make to
function more effectively in the future.
If we wish non campus-based students to be able to benefit
from this kind of teaching, the use of advanced ICT must
be explored.
Human Issues in Common Information Space
Groupware systems are people systems [9,11] and must
take into account the situated, frequently unstructured
nature of work. Studies have been conducted in the CSCW
and HCI research communities to gain a better
understanding of the subtle, taken-for-granted human
behaviour in order to inform and refine future generations
of groupware. The accumulated wisdom related to the use
of Common Information Space can be categorised in the
following way:
. the importance of shared awareness,
. the need to retain some privacy,
. the importance of having protocols/conventions for
interaction,
. the need to overcome the limitations imposed by the
monitor and keyboard.
It will be enlightening to see, in the evaluation, how far
these ideas have been taken up by today's widely available
products and if any of the concerns have been addressed.
Shared Awareness
Group awareness can be defined as "an understanding of
the activities of others, which provides a context for your
own activities" [8]. The hypothesis generally adopted in the
literature was that providing awareness is a good thing.
Representative projects include - Europarc RAVE media
spaces [12]; University of Toronto's Telepresence [15],
University of Calgary's Group Kit [29]. Gutwin, Greenberg and Roseman suggested a useful list of elements of
workspace awareness - identity, location, activity level,
actions, intentions, changes, objects, extents, abilities,
sphere of influence and expectations. Designers of
groupware systems are still struggling to develop a system
which can cater for all these elements. Various CSCW
projects have also been attempting to address some of the
above elements, although often in an application-specific,
limited, or ad-hoc manner [14].
In addition, the implementation of shared awareness is
often limited by technical factors such as bandwidth and
physical size of the input/output devices. The difficulty in
the implementation of telepointers (multiple cursors) was
reported by Greenberg and Roseman [13]: "Unfortunately,
modern window systems are tied to the notion of a single
cursor and application developers must go to great lengths
(and suffer performance penalties) to implement multiple
cursors".
It may be that the burden is on us to find out which elements are essential, without which the concept of distributed group work will not be feasible, and which elements are merely 'nice-to-have'.
Privacy
Discussions concerning privacy in groupware usually
centres around security and legislative issues. Bellotti [4]
broadened the issues to include the individual's control
over how much information to share with others, and how
accessible they want it to be. In terms of Information
Access Control, the common mechanism is to implement
technical access control. Other novel ideas to provide social
control were also tried out such as 'ShareMon' which
provides auditory feedback when someone else is snooping
around your shared files over the network [6]. Interpersonal
Access Control makes more use of awareness and feedback
for social control. As there is no inherent reciprocity built
into computer mediated systems, it creates some challenges
to groupware developers. Projects in this area include
CAVECAT media space which provided four selectable
door states (open, ajar, closed and locked) to indicate
availability [25].
Interestingly, Bellotti suggested that having fixed private or
public spaces may well be creating obstacles to
collaboration as it prevents users from experimenting and
adapting to new uses.
Conventions for Interaction
In groupware, communications amongst people are
mediated by shared artifacts. The manipulation of these
artifacts requires coordination at both the human and the
application levels [31]. For example, in a real-time co-authoring
environment, how does one tell if a pause on the
shared application indicates whether the other party is
thinking about the wording of the next phrase or is waiting
for your contribution? Examples of research include IRIS, a
group editor environment [23] and the use of cursors for
gestures [16]. However, a study by Mark et al. [26] showed
that the effective use of conventions takes time to develop.
Beyond Monitor and Keyboard
In the early 90s, the integration of shared workspace and
interpersonal space into one workstation was a
breakthrough; for example, TeamWorkstation integrated
the video window with the text window [19]. Today's
commercial products resemble what TeamWorkstation
offered and also suffer from the same problems such as
having arbitrary seam between different 'spaces'. The lack
of real eye-contact is also a problem in desk-top video
conferencing systems. The ClearBoard metaphor was later
developed [20], which superimposed the interpersonal
space over the shared workspace using a huge screen and a
digitized pen. This overcame the two problems mentioned
but has not yet been seen as a commercial product.
New research work is also being carried out for co-located and distributed team members under the auspices of 'cooperative buildings' [32]. Unfortunately, the systems are
still largely in research laboratories and may not be
available for commercial use for some time.
We would like to ascertain whether modern ICT can be
used to further enrich the learning of non campus-based
students by involving them in group work requiring a
certain amount of interactions. The first step towards this
aim is to find out the acceptability of the features provided
in the VSP Resource Room which is implemented as a
'Common Information Space' for student groups to
interact. It is important to note the difference between
acceptability and usability [27], with the latter being a subset
of the former. The kind of feedback sought was related to
practical acceptability (e.g. usefulness, reliability,
compatibility, performance) rather than pure usability (e.g.
ease of use, learnability and such like).
It was decided to try the experiment with the 43 students in
a Masters module. Many students in the class have previous
work experience. The ratios of UK:EC:International
students are about 4:1:4. The characteristics of the students
matched closely with those who are likely to enrol on a
non-campus based course.
Two means of data collection were devised. Firstly, the
class was divided into groups of 4 or 5. They were asked to
use the VSP Resource Room for this module to test out two
standard scenarios:
1. A representative from each group was to clarify with
the lecturer, at real time, the content of a lecture slide
(which also included making an appointment in advance
for the meeting);
2. Three students from each group were to agree upon and
produce an outline of a group report. This included
sharing ideas at the initial phase through to allocating
responsibility to individuals in order to produce the
final report.
The students were also requested to test 2 to 3 additional
scenarios of their own choice. They were asked to
document the actual happenings while testing the scenarios.
The second set of feedback was obtained by a structured
questionnaire which asked each student to provide
comments on their preferences on the functions provided
by the Resource Room for group working as well as their
preferences for the user-interface, and also their reactions
to various aspects of the Group Study Room. It was made
clear to them that the questionnaires were not assessed and
would only be used for research. A total of 33 completed
questionnaires were received and analysed, the results of
which is reported below.
EVALUATION
The analysis of student responses was conducted at two
levels - human and IT system. The human behavioural
aspects include the individual's attitude and the group
dynamics within the common information space.
Evaluation at the system level includes the functionalities
and the user-interface adopted in the applications.
Use of the VWS Resource Room
In the questionnaire, students were asked to name two
functions of the VWS Resource Room which best support
group work and two functions which hinder group work.
Functions of the VWS Resource Room which best support
group work appear to be the ability to conduct shared work
asynchronously closely followed by the potential for real-time
collaborative work (Table 1). However, functions of
the VWS Resource Room associated with real-time
collaborative work (in particular, lack of control, poor
audio and inadequate feedback) were also most likely to
hinder the progress of the group (Table 2).
Table 1 Functions of the VWS Resource Room which best support group work
Main Function (with comments) No.*
Asynchronous shared work
• deposit your own contribution for other members of the group to add to/build upon
• makes the group's own working version of the assignment document available to all members of the group
• judge the progress of the group in general
Real-time group work
• work together remotely regardless of time and location - late at night/from home
• collaborate within a short time span and conclude on topics
• enhances collaboration and communication between lecturers and students
Discussion board
• post messages for others in the group without the need to meet
• keeps each other up-to-date with who is doing what and what still remains to be done
Shared storage area
• allows the group to post their work to one central point
• locate together materials which are related to each other
Activities related to own work
• create own work area
• manage documents and other resources
*No. of times mentioned by students - they were asked to name any two functions.
At the Human Level
Shared awareness
The main issues raised by students in relation to awareness
within their distributed working environment were centred
around poor feedback from the applications used. This
included lack of feedback about who was on-line; in many
instances, there was an assumption that everybody was
always logged on and had NetMeeting running in the
background which was clearly not the case. Furthermore,
there was no indication to show when other members of the
group had joined a NetMeeting session.
When using the Chat application students found that there
was no visual clue that someone was actively typing a
comment. This frequently led to confusion; for example, it
was often difficult to determine which members of the
group were still 'there' and whether they were still paying
attention.
Table 2: Functions of the VWS Resource Room which hinder group work

Main Function (with comments) No.*

Real-time group work
. lack of control
. poor audio
. inadequate feedback on what's happening at the other end

Activities related to own work
. resource room is badly structured
. version update facility is not user-friendly
. end up with too many copies/drafts of documents

Asynchronous shared work
. better version updating information required
. better notification facilities
. ordering on screen of resources submitted
. messy structure quickly propagates without control

Discussion board
. not very user-friendly
. poorly designed method of posting messages
. grant individuals the ability to manage their own messages

Shared storage area
. unaware of who else has access to the group area
. unable to collaborate on earlier versions of documents

*No. of times mentioned by students - they were asked to name
any two functions.
Students sometimes experienced difficulties knowing what
method of communication to use. They felt it would be
useful if NetMeeting had some way of alerting the user to
the current method of communication being used or the
method of communication to be used to initiate a
collaborative working session. For example, at one stage a
lecturer was trying to have a dialogue with a student via the
Chat application, but as the student was frantically trying to
reset audio and microphone settings at home, the Chat
window was minimized on his PC. It would have been
useful for the student to have known which tool to set up in
the first place.
In another case, a student initiated communication with a
lecturer using the Chat tool. However, the lecturer, who had
just conducted an audio meeting with another student,
assumed that he would be using the same tool with the next
student; audio seemed to be his preferred method for real-time
communication with his students. Unfortunately, the
student did not have any audio equipment and the meeting
was delayed until he could set up a microphone and
speakers.
Privacy
The main issues raised by students in relation to privacy
within their distributed working environment were centred
around lecturers' accessibility to their work and the
boundaries between their own group and other groups taking the module.
Students were asked whether they were aware that the
lecturers on the module could read the content of
everything in their 'private' group study room. Most
students were aware of this or suspected it (there was no
visual clue), and some 73% of students said that this did not
affect their use of the group study room. This is probably
because most people see the group study area as a 'public'
place anyway. The remaining 27% of students felt that the
process of writing their draft reports had been compromised
given that they often wanted to post comments which they
felt were only suitable for viewing by the other members of
the group.
Students found that the boundaries between groups were
tightly drawn and they were not able to interact with other
groups taking the module who would also be using the VSP
Resource Rooms. This was an intentional design decision by the
lecturers in order to discourage students from collaborating
with anybody other than the group members when testing
the scenarios. However, some students found the closed nature
of the group study area an inconvenience when they
wanted to collaborate or share experiences with someone
from another group.
For the lecturers of the module, a NetMeeting call was an
additional source of interruption on top of the telephone
and students knocking on the door. At one point, a lecturer
was having a meeting in her office while a couple of
students were trying to call her using NetMeeting. It was a
serious distraction and it took a few minutes to send those
calls away. Some students felt that the technology should
go some way in helping lecturers to be more contactable.
A few students had found that the VSP and the group study
rooms offered no completely private places (the contents of
the group study rooms could be viewed by the lecturers),
and no open public spaces which might provide
opportunities for informal interactions (e.g. akin to the
coffee machine, the corridor, etc.).
When students added a document to their group work area
they wanted to be able to explicitly grant or deny
permission to each member of the group. This would have
allowed the formation of sub-groups within the working
area under the direct control of the students.
Conventions for interaction
The main issues raised by students in relation to
conventions for interaction were largely concerned with
establishing general principles for control of the shared
workspace.
Guidelines for interaction were required during a
collaborative session in the following areas:
. taking control during the session;
. opening another application on top of the group's
collaborative work space;
. indicating the desire to relinquish control to another
member;
. and, leaving or closing the session.
On starting NetMeeting, students experienced some
confusion over control as they did not expect that the first
member of the group to click the mouse would retain
control. The fact that when someone else was in control
one's own mouse stopped working made it very difficult to
break in and take control. Some groups resorted to
coordination of file sharing using the telephone, whilst
another group used the Chat board to co-ordinate
ownership of control. Groups found that if all members
wanted to add and amend the document at will then some
co-operation on screen control was essential. Some students
felt they spent too much time fighting for control in their
group and described their collaborative session as 'war-
like'.
The fact that any one member of the group has no control
over the actions of any other member of the group can
sometimes result in chaos. Students in one group were very
annoyed by the actions of one particular member who
insisted on opening non-shared windows over the shared
workspace. The result of this for the other participants was
that the shared window was blacked out. In another group,
the student in control during a NetMeeting session decided
to leave the meeting and shut down his application with the
result that the application on the other two machines also
disappeared. Unfortunately, in this instance, all work was
lost and had to be re-done. In such cases, students clearly
find it difficult to indicate their desire to leave a
collaborative session or are unaware of the consequences
for the other members if they do quit. On reflection, several
students felt their group should have agreed on a unique
signal for indicating the wish to leave a collaborative
session.
In terms of the ability of the VSP group study room to
create cohesion within a group, almost three-quarters of
students (74%) felt that the study room had united their
group and helped it work together effectively. However,
students considered this was only the case as long as all
members made proper and effective use of the available
features.
Beyond monitor and keyboard
The use of audio/video technology to augment text-based
communications was explored. From the study, there were
clear preferences for different communication media and
different levels of proficiency in using them. Some students
found responses from lecturers were rather brief when
Chat was used. When the communication medium was
switched to audio this was no longer a problem. Clearly
some people dislike too much typing. In other cases,
students found that they had to wait a long time for
messages to appear due to the slow typing speeds of some
members of their group. The setup procedure for audio
technology was not user-friendly. It took some trial-and-
error and sometimes the group had to resort to using a
telephone in order to get the settings right.
The lack of multicasting for audio/video means that only a
one-to-one connection is possible even in a multi-user
conferencing session. This made it unsuitable for group
work involving more than two people. This was obviously
a serious drawback with groups of at least 4 members.
Without an audio link, the users found it very annoying and
confusing to have to switch between different windows
even for a minor task, for example using Chat to
communicate intentions and back to one or more windows
to perform the task.
At the System Level
Functionality
Some of the most important functional requirements arising
from the evaluation are considered here.
For many students this was the first time they had used
NetMeeting for group work in a learning context. Their
reaction to the application was interesting; most students
liked the fun of using it and could see its potential, although
its limitations in addressing the human issues made it the
application that hindered group work the most. Some
students reported that they had only used it because they
were 'forced' to do so in order to adhere to the
requirements of the module. Others felt that the limitations
of NetMeeting had made their collaborative sessions more
of a recreational activity than an opportunity for serious
discussion and collaboration, and found the application was
suitable providing the level of detail required by the work
was not too great.
Most students liked the availability of private group study
rooms, which allowed them to structure the space as they
liked; the group area also proved useful in helping them
keep track of the group's progress. Some students noticed
the need for an agreement by all members of the group to
use the room. They also remarked on the need for general
'housekeeping' tasks as in the shared document area it was
easy to end up with a collection of drafts and out-of-date
information. They found it was not easy to take a unilateral
decision to delete files. A temporary 'trash can' was
suggested. Also, the provision of log files which record
transactions such as file modifications and deletions for
each directory was suggested.
Although the group study area was favourably received, the
students did notice a deficiency in integrating their own
workspace (using their favourite tools) with the common
information space. The integration is currently not
seamless.
The provision of scheduling tools for project management
was also considered to be an important requirement.
User Interface
Students clearly preferred a user interface (UI) that is
intuitive and with which they have some degree of
familiarity (i.e. they have used the UI or components of it
before!). A number of metaphors which had been omitted
from the VSP Resource Rooms were requested by the
students including drag and drop functionality, ability to
bookmark pages, and ability to perform tasks across a
group of resources (e.g. deleting a group of files).
Some students could not decipher the meaning of a couple
of icons intuitively and required the help of tooltips. Only
later versions of the browser (i.e. Internet Explorer v4.0 and
above) had this feature.
During a synchronous session the desktop could become
very cluttered, for example, separate windows for chat,
shared applications, and a couple of others for searching
information. This problem was exacerbated for those
students using small monitors.
Some students liked the range of user interfaces provided
by the VSP, from the clickable image of a whole room (e.g.
the lecturer's personal office) to the purely text-based pages
used in the Resource Room. However, other students
clearly had a higher expectation of feeling they were in a
virtual space. Such students commented that a lot of pages
looked similar and were rather dull and uninspiring with
insufficient opportunities for interaction throughout the
VSP site.
Other
Finally, some groups found that the network connection
and its quality were not always reliable, especially when
using a modem and an Internet Service Provider (ISP). Also,
some students requested training in the use of the VSP
Resource Rooms and the synchronous and asynchronous
facilities (students were only given a quick demonstration
of the VSP for the purpose of the course, as the rest should
be intuitive!).
Discussion
In terms of acceptability, most students in our evaluation
found that the asynchronous environment worked
reasonably well. However, the synchronous (real-time)
environment was still falling short of being ubiquitous and
was presenting obstacles for group working which requires
interaction in a multimedia space. HCI is, at present,
limited in its application in multimedia systems [22]. There
were some very good ideas in the research communities for
improving real-time collaborative environments (as
discussed in earlier sections), but currently it is still a tough
balancing act for designers of groupware applications: to
provide for awareness, for performance in speed, or for
cost-effectiveness. It was also interesting to observe
the difference in users' perceived acceptability for
recreational purposes and for 'real work'.
Nevertheless, interest in using modern ICT to support
non-campus-based learning has escalated. New projects
continue to emerge such as those conducted in the
Knowledge Media Institute of the Open University in the
UK [10]. It is important that further developments are made
in the commercial products and services for overcoming
some of the limitations highlighted in this study. Firstly,
high-speed broadband network infrastructure is necessary
for implementing techniques for enhancing shared
awareness. A flexible environment is needed which either
enables users to define their privacy requirements or adapts
the access control according to the information captured by
the system. Conventions for interaction can evolve with use
(especially with the younger generation, who are now rather
used to chatting on-line), but education on proper use
should be provided. Virtual Reality might be the area which
will take the on-line learning environment to a different
level of experience.
CONCLUSION
Progress has been made in communications technology and
in understanding the notion of workspaces in recent years.
This paper presented a detailed evaluation of the use of a
virtual learning environment which was set up as a
'resource room' in the Virtual Science Park. The 43
Masters students in the trial provided qualitative feedback
on the use of some commonly available collaborative tools
on the market to inform designers of groupware systems
and their potential users. In general, the students found
the asynchronous provision of the common information
space adequate and useful for co-ordinating tasks during
the process of group working. The findings also highlighted
some human issues which urgently need attention before
collaborative tools can be effectively deployed. This is
especially true for synchronous tools, as there is still a lack
of attention on the provision of awareness, privacy,
conventions for interaction and seamless interactive
multimedia spaces which have been identified in the paper
as the important areas for addressing human issues. This
inadequacy is currently preventing their effective use in
performing real interactive group tasks.
ACKNOWLEDGEMENTS
We thank the Masters students (class 1998-99) in the
School of Computer Studies for their participation and co-operation
in the evaluation.
--R
Readings in Groupware and Computer-Supported Cooperative Work
Constructing Common Information Space.
What You Don't Know Can Hurt You: Privacy in Collaborative Computing.
Supporting collaborative information sharing with the World-Wide Web: The BSCW Shared Workspace System
'Kirk Here:' Using Genre Sounds to Monitor Background Activity.
Virtual Working Systems to Support R and D Groups.
Supporting awareness in a distributed work group.
Designing Groupware Applications: a Work-Centered Design Approach
The Knowledge Web - Learning and Collaborating on the Net
Workflow Technology in Computer Supported Co-operative Work
Realizing a
Groupware Toolkits for Synchronous Work.
Workspace Awareness in Real-Time Distributed Groupware: Framework
Communicating about communicating: Cross-disciplinary design of a media space interface
Implementing gesturing with cursors in Group Support Systems.
Groupware: its past and future.
TeamWorkstation: Towards a seamless shared workspace.
A seamless medium for shared drawing and conversation with eye- contact
Cooperative Learning: Increasing College Faculty Instructional Productivity
Towards Principles for the Design and Evaluation of Multimedia Systems.
Design issues for a distributed multi-user editor
The Virtual Science Park.
Experiences in the Use of a Media Space.
Supporting Groupware Conventions through Contextual Awareness.
Usability Engineering.
Building realtime groupware with GroupKit
Workspace Awareness for Distributed Teams.
--TR
TeamWorkStation: towards a seamless shared workspace
Experiences in the use of a media space
Realizing a video environment
Building real-time groupware with GroupKit, a groupware toolkit
Design issues and model for a distributed multi-user editor
"Kirk here:"
Communicating about communicating
Readings in GroupWare and Computer-Supported Cooperative Work
Usability Engineering
Cooperative Buildings, Integrating Information, Organization, and Architecture
Workspace Awareness for Distributed Teams
What You Don''t Know Can Hurt You
Workspace Awareness in Real-Time Distributed Groupware
Towards Principles for the Design and Evaluation of Multimedia Systems
--CTR
John T. Langton , Timothy J. Hickey , Richard Alterman, Integrating tools and resources: a case study in building educational groupware for collaborative programming, Journal of Computing Sciences in Colleges, v.19 n.5, p.140-153, May 2004
Timothy J. Hickey , John Langton , Richard Alterman, Enhancing CS programming lab courses using collaborative editors, Journal of Computing Sciences in Colleges, v.20 n.3, p.157-167, February 2005
Christina Brodersen , Ole Sejer Iversen, eCell: spatial IT design for group collaboration in school environments, Proceedings of the 2005 international ACM SIGGROUP conference on Supporting group work, November 06-09, 2005, Sanibel Island, Florida, USA | evaluation;common information space;co-operative learning;collaborative tools;Virtual Science Park |
320390 | A problem-oriented analysis of basic UML static requirements modeling concepts. | The Unified Modeling Language (UML) is a standard modeling language in which some of the best object-oriented (OO) modeling experiences are embedded. In this paper we illustrate the role formal specification techniques can play in developing a precise semantics for the UML. We present a precise characterization of requirements-level (problem-oriented) Class Diagrams and outline how the characterization can be used to semantically analyze requirements Class Diagrams. | INTRODUCTION
The Unified Modeling Language (UML) [10] is a standard
language for modeling complex systems from a variety
of views using object-oriented (OO) concepts. The
effectiveness of the UML as a standard is predicated,
among other things, on there being a clear, precise, and
pragmatic semantics for its notations. Informal semantics
(i.e., semantics defined primarily in terms of key
basic concepts that are not explicitly defined) can be
ambiguous (if one is not aware of the intended implicit
interpretations), can be incomplete and can contain in-
consistencies. These problems can lead to confusion
over appropriate use of the language and to the creation
of models that do not clearly communicate their
intent. Furthermore, subtle consequences that can lead
to a deeper understanding of the concepts which, in
turn, can result in more effective use of the language,
can get lost in informal treatments of semantics.
Without a precise semantics, a standard modeling
notation can devolve to a Tower of Babel, effectively
diluting the utility of the standard. Formal specification
techniques can play two significant roles
in the development of a precise semantics for standard
notations. They can be used as:
. tools for expressing formal semantics from which
precise natural language descriptions can be obtained, and as
. tools that facilitate in-depth analyses of proposed
interpretations.
In this paper a formal specification technique (FST) consists of a formal notation and
mechanisms for rigorously analyzing statements expressed
in the notation.
In the early stages of the development of a standard
language informal discussions and debates on semantic
issues, coupled with natural language statements
reflecting particular perspectives, can yield valuable insights
into poorly understood semantic concepts. There
is a limit to the value of informal analyses of semantic
concepts, and as the concepts become more varied
and interdependent the need for more formal treatment
of the semantics becomes apparent. The act of formalizing
semantic concepts forces language developers
to closely examine their perceptions of the concepts
and to confront the assumptions underlying their un-
derstanding. Analyses of the resulting formal models
can reinforce confidence in the interpretations, identify
subtle deficiencies in the interpretations, or yield significant
insights by revealing desirable or undesirable
consequences of the interpretations.
Use of FSTs to explore semantic concepts paves the
way for expressing the semantics formally. Having a
mathematically-based definition of UML semantics is
useful in that it provides a reference for resolving questions
about meaning (or consequences of meaning) that
cannot be directly answered by examining natural language
descriptions. This is not to say that the semantics
should be presented in the standard document using
only formal languages. Bridging the gap between mathematical
expressions and the real-world concepts they
model requires much effort and can make communicating
the semantics as problematic as using imprecise and
vague natural language statements. A well-formulated
natural language description of semantics that is derived
from a formal expression of semantics is less likely
to cause confusion than one that is not based on a formal
semantics.
Producing mathematically-based formulations of UML
semantics is not the primary goal of our work. Formal
techniques are used in our work to gain insights and to
explore the consequences of proposed semantics. The
primary goal of our work is to develop precise semantics
for UML notations, expressed in a form that is widely-
understood (e.g., natural language), and that supports
rigorous analyses of the models.
The intent of this paper is to demonstrate the role
FSTs can play in the development of well-defined standard
modeling languages. We illustrate the use of FSTs
for exploring UML semantics by developing precise characterizations
of the basic constructs of problem-oriented
Class Diagrams, and outline how the
characterizations can be used to rigorously reason about
the properties captured by the models. In section 2
we introduce the formal notation Z [11] that we use
to explore the semantics of basic UML Class Diagram
constructs. Z was used in our work because past experiences
indicate that it provides adequate facilities
for modeling OO concepts [6, 7, 4, 5]. In section 3 we
give an overview of our formal characterization of basic
Class Diagram constructs and discuss some of the
issues raised by the characterization. In section 4 we
illustrate how the characterization can be used to support
rigorous semantic analyses of Class Diagrams and
in section 5 we conclude with an overview of our planned
work in this area.
In this section we introduce only the parts of the Z notation
necessary to understand the specifications given
in this paper (see [11] for more details).
The primary structuring construct in Z is the schema.
A schema has two parts: a declaration and a predicate
part. The declaration part consists of variable declarations
of the form w : Type, where w is a variable
name and Type is a type name. The preceding declaration
states that the value represented by w is a member
of the set named by Type (types are sets in Z). The
predicate part consists of a predicate that defines the
relationships among the declared variables.
Types in Z can be either basic or composite. Elements
of basic types (or given sets) are used as the
basic building blocks for more complex elements (ele-
ments with composite types). There are three kinds
of composite types in Z: set types (elements are sets),
Cartesian product types (elements are pairs of values),
and schema types (elements are bindings of values to
variables declared in a Z schema). The usual set operations
(e.g., union, intersection) are available in Z. In
particular, the number of elements in a finite set S is
denoted #S .
Using basic and composite types one can define relations
as sets of pairs in which the first element is from
the domain and the second element is from the range
of the relation. For example, R : A ↔ B is a Z declaration
stating that R is a relation in which domain elements
are drawn from A and range elements are drawn
from B. A function in Z, for example f : A → B,
is a relation in which each domain element is related
to exactly one range element. A sequence in Z is a
finite partial function from natural numbers to the sequence
element type. There are a number of operators
defined for relations and functions in Z. In this paper
we will use the operators described below. In what
follows R is a relation declared as R : A ↔ B, with
R = {(a1, b1), (a2, b2), (a3, b2)}.
. dom R: Returns the set containing the elements
in A that are related to elements in B. This set
is called the domain of R. In this case dom R = {a1, a2, a3}.
. ran R: Returns the set containing the elements in
B that are related to elements in A. This set is
called the range of R. In this case ran R = {b1, b2}.
. S ◁ R, where S is a subset of A: Returns the subrelation
of R that has a domain consisting of those
elements of dom R that are also in S. For example,
{a2, a4} ◁ R = {(a2, b2)}. Note
that the a4 element in S is ignored (because it is
not in the domain of R).
. R ▷ T, where T is a subset of B: Returns the subrelation
of R that has a range consisting of those
elements of ran R that are also in T. For example,
R ▷ {b1} = {(a1, b1)}.
. R(| S |), where S is a subset of A: Returns the
set of all elements in the range that are mapped
to elements in the set S in R. For example,
R(| {a1, a2} |) = {b1, b2}.
. R~: Returns the inverse of R, that is, the relation obtained by reversing each pair in R.
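Because these operators carry most of the weight in the schemata that follow, a small executable illustration may help readers unfamiliar with Z. The following Python sketch is our own illustration (not part of Z or the UML): it models a relation as a set of pairs and mirrors each operator introduced above.

# A Z relation modeled as a set of (domain, range) pairs.
R = {("a1", "b1"), ("a2", "b2"), ("a3", "b2")}

def dom(r):
    # domain of a relation: all first components
    return {a for (a, _) in r}

def ran(r):
    # range of a relation: all second components
    return {b for (_, b) in r}

def dom_restrict(s, r):
    # S <| R: pairs whose first component lies in s
    return {(a, b) for (a, b) in r if a in s}

def ran_restrict(r, t):
    # R |> T: pairs whose second component lies in t
    return {(a, b) for (a, b) in r if b in t}

def image(r, s):
    # R(| S |): range elements related to some element of s
    return {b for (a, b) in r if a in s}

def inverse(r):
    # R~: the relation with every pair reversed
    return {(b, a) for (a, b) in r}

assert dom(R) == {"a1", "a2", "a3"}
assert ran(R) == {"b1", "b2"}
assert dom_restrict({"a2", "a4"}, R) == {("a2", "b2")}  # a4 is ignored
assert ran_restrict(R, {"b1"}) == {("a1", "b1")}
assert image(R, {"a1", "a2"}) == {"b1", "b2"}
assert inverse(R) == {("b1", "a1"), ("b2", "a2"), ("b2", "a3")}

The later sketches in this paper reuse these helper functions.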
Z schemata are used to model both structural and
dynamic properties of systems. A schema that captures
the structural properties will be referred to as a state
schema, and a schema that captures the dynamic properties
will be referred to as an operation schema. In a
state schema the components of a system are declared in
the declaration section and the static properties of the
system are defined in the predicate part. An example
of a state schema is given below:
TelService
subs : P SUBS
  set of service subscribers
directory : SUBS ⇸ TELEPHONES
  subscribers with telephones
-------
dom directory ⊆ subs
  only subscribers can have telephones in this system
∀ s1, s2 : dom directory • directory(s1) = directory(s2) ⇒ s1 = s2
  a telephone can be associated with at most one subscriber
In the above, SUBS and TELEPHONES are Z basic
types representing registered subscribers and registered
telephones, respectively. TelService is a schema specifying
the state of a telephone system consisting of a set
of subscribers, subs , and a look-up facility, directory .
The text in normal font that appears directly below a Z
expression in the schema is a natural language description
of the Z expression (i.e., comments). The predicate
part of the schema contains two conjuncts (logical expressions
written on separate lines in the predicate part
of a schema are "anded" to form a single predicate).
An operation schema defines the relationship between
the state at the start and at the end of an oper-
ation's execution. The declaration part of an operation
schema declares variables representing the before and
after states (as defined in a state schema), inputs, out-
puts, and any other variables needed to define the pre
and postconditions. The predicate part of the schema
defines the relationship between the before and after
states. The following conventions are used for variable
names in operation schemata:
unprimed variable (e.g., w) - value of variable
before operation execution;
primed variable (e.g., w 0 ) - value of variable
after operation execution;
variable ending in '?' (e.g., w?) - an input to
the operation; and
variable ending in '!' (e.g., w !) - an output
from the operation.
ΔS denotes a possible change in state defined
by schema S (includes invariant properties of
primed and unprimed state variables as defined
in the schema S).
ΞS denotes that the state S does not change
(includes invariant properties of primed and
unprimed state variables as defined in the
schema S and a predicate stating that the
after state is identical to the before state).
An example of an operation schema is given below:
AddSub
ΔTelService
sub? : SUBS
  subscriber to be added
-------
sub? ∉ subs
  input is not a current subscriber
subs' = subs ∪ {sub?}
  input is a subscriber at end of operation
directory' = directory
  directory is unchanged by this operation
This operation schema defines an operation that adds
a new subscriber to the telephone system whose state
is defined by TelService. The predicate part consists of
three conjuncts describing the relationship between the
before and after states of the system determined by this
operation.
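To show how the state and operation schemata fit together, the following Python sketch renders TelService and AddSub executably. The class and method names simply mirror the schema names, and the assertions play the role of the schema predicates; this is an illustrative transcription, not a formally derived refinement.

class TelService:
    def __init__(self):
        self.subs = set()    # set of service subscribers
        self.directory = {}  # subscriber -> telephone (partial function)

    def invariant(self):
        # only subscribers can have telephones in this system
        if not set(self.directory) <= self.subs:
            return False
        # a telephone can be associated with at most one subscriber
        phones = list(self.directory.values())
        return len(phones) == len(set(phones))

    def add_sub(self, sub):
        # AddSub: the input must not be a current subscriber
        assert sub not in self.subs
        self.subs = self.subs | {sub}  # subs' = subs U {sub?}
        # directory is left untouched by this operation
        assert self.invariant()

ts = TelService()
ts.add_sub("alice")
assert "alice" in ts.subs and ts.invariant()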
3 A FORMAL CHARACTERIZATION OF REQUIREMENTS
CLASS DIAGRAMS
The process that we used to explore the semantic foundation
of static UML models can be described as follows
Informal Analysis: A systematic and careful reading
of the semantic and notation sections of the document
was carried out in this phase. The objective
was to gain an initial understanding of the
concepts. This phase revealed some of the more
obvious deficiencies in the UML document (e.g.,
the problem associated with the definition of association
end multiplicity properties discussed later
was identified at this stage).
Formalization: As stated above, informal analyses are
inadequate when concepts become more varied and
interdependent. More subtle deficiencies that can
arise when concepts are intertwined to form more
complex structures are more likely to be identified
through rigorous analysis of the concepts. The formalization
phase is concerned with expressing the
concepts formally. This must be done in a manner
that supports the analysis that will take place in
the next phase. The formalization phase provides
yet another opportunity for identifying deficien-
cies. For example, one can identify (1) omissions
when there is not enough information to complete
a formalization, (2) ambiguities when the information
provided can be formally interpreted in different
ways, and (3) inconsistencies when information
results in an inconsistent formalization.
Rigorous Analysis: Rigorous analysis involves manipulating
the formal expressions to derive consequences.
Given the formalizations we explored their consequences
by posing "what if" questions and manipulating
the formal expressions to determine an-
swers. We then tried to determine whether the answers
were desirable or undesirable consequences
of the meaning characterized by the formalization.
For example, given a formalization of the frozen association
property one can ask the question "What
if the object at the frozen end is deleted?" and determine
whether the derived answer is desired or
not by examining the OMG-UML document. We
also identified desirable properties of UML constructs
in the OMG-UML document that were not
explicitly expressed in the formalization and attempted
to prove that these were consequences of
the formalization. If this could be done then this
heightened our confidence in the formalization. If
it could not be done then we examined the formalization
to identify any problems in the expressions.
This could happen if (1) an error was made in the
formalization of the concept, (2) the concept being
formalized was not well-understood, or (3) the
relevant descriptions (the descriptions used to develop
the formalization and the description of the
property being checked) in the OMG-UML document
are inconsistent.
A Class Diagram is a conceptual model of a system
that is expressed in terms of classes and the relationships
among them. At the requirements level a Class
Diagram reflects a problem-oriented structure in which
classes represent problem domain concepts. At the design
level, a Class Diagram reflects a solution-oriented
structure. The interpretation we use in this paper was
developed for problem-oriented Class Diagrams. In the
remainder of this paper a requirements-level Class Diagram
is referred to as a CD. This paper outlines our
formalization of CDs. A more detailed account of our
formalization can be found in a technical report [3].
In our interpretation, a CD is a characterization of
valid, externally observable system states. An externally
observable system state is a structure consisting
of all the system objects and links that can be observed
by an external agent at some point in time. We refer to
the system states that possess the properties expressed
in a CD as configurations. The semantic domain for
CDs is a collection of sets of configurations, and the
meaning of a CD is a set of configurations.
The instance-based semantics of CDs we use is consistent
with the UML object interpretation of a Class
Diagram. The OMG UML document [10] states (pg.
The purpose of a model is to describe the
possible states of a system and their behav-
ior. The state of a system comprises objects,
values, and links. The state of a system
is a valid system instance if every instance
in it is a direct instance of some element in
the system model and if all of the constraints
imposed by the model are satisfied by the instances
References to "model" and "system model" in the above
quote relate to CDs, and a "valid system instance" is a
configuration in our interpretation.
CD constructs (e.g., classes and associations) may
possess two types of properties: static and dynamic.
Static properties are used to define the structure of elements
represented by the construct, and dynamic properties
are used to constrain how the elements are manipulated
by operations. An example of a static property
is the multiplicity of a class. If a class has a multiplicity
m::p, then in any valid state (configuration)
there can be no less than m and no more than p objects
of the class. An example of a dynamic property is
the notion of addonly attributes. An addonly attribute
is one that can hold more than one value, but once a
value is added it cannot be removed. In our analysis,
static properties of CD constructs are expressed in Z
state schemata and dynamic properties are expressed
in Z operation schemata.
In this section we focus on specifying the basic static
and dynamic properties of general associations, compositions
(strong aggregation), aggregation (weak ag-
gregation), and generalization structures. For associations
we consider only multiplicity and changeability
properties in this paper. The navigability and visibility
properties are considered not relevant to CDs (i.e., they
reflect design considerations that should not appear in
requirements models). Ordering and other secondary
association properties are not considered in this paper.
3.1 Specifying Classes in Z
The set of all objects belonging to a class in a configuration
is called the configuration set of the class, and
the objects in the set are referred to as configuration
objects of the class. In isolation, a class can be interpreted
as the set of all its possible instances, called the
object space of the class, but in the context of a CD a
class defines its configuration sets (a subset of its object
space). This interpretation of a class implies that constraints
associated with a class in a CD are constraints
on their configuration sets. For example, class multiplicity
restricts the number of objects that can be in a
configuration set of the class. This is consistent with
the use of class constraints that we have encountered in
UML and other OO modeling notations. In Z, the object
space of a class that is not a subclass is represented
as a basic type.
In Z, the static properties of a class are defined by
a state schema. For example, a class, Cl, with a multiplicity
m::p, attributes defined by another
schema CLattribs , and with operations identified by elements
of the type OpIds (a user-defined enumeration
expressed as Z free types) is specified in Z as follows:
[CL]
  Object space of Cl

Cl_Inst (Instance Schema)
cls : P CL
  configuration set of Cl
cl_attribs : CL ⇸ CLattribs
  maps objects to attribute values
cl_ops : CL ⇸ P OpIds
  maps objects to operation references
-------
m ≤ #cls ≤ p
  multiplicity constraint
dom cl_attribs = cls
dom cl_ops = cls
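The schema's intent can be mirrored by a small executable check. In the following Python sketch (an illustration only; m and p stand for the multiplicity bounds, and the maps are ordinary dictionaries), a configuration satisfies the invariant exactly when the function returns True.

def class_invariant(cls, cl_attribs, cl_ops, m, p):
    # cls: configuration set; cl_attribs, cl_ops: dicts keyed by objects
    return (m <= len(cls) <= p            # multiplicity constraint
            and set(cl_attribs) == cls    # dom cl_attribs = cls
            and set(cl_ops) == cls)       # dom cl_ops = cls

cls = {"o1", "o2"}
attribs = {"o1": {}, "o2": {}}
ops = {"o1": set(), "o2": set()}
assert class_invariant(cls, attribs, ops, 1, 5)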
The behavioral specification of an operation is given
by a schema that maps operation references (elements
of OpIds), the current state of class objects (attribute
values and links), and parameter values (if any) to the
set of all possible effects of the operation. An effect
is defined as the resultant state of the system (in our for-
malization, operations are viewed as atomic at the requirements
level). We do not detail the formalization
of attributes and operations in this paper. See [3] for a
more in-depth discussion.
3.2 Formalizing binary associations
In a configuration, a binary association is interpreted
as a mathematical relation between the configuration
sets of the associated classes that satisfies the stated
constraints on the association. A binary association is
simply a set of object pairs, where each pair is called a
link.
A careful reading of the UML document revealed
that the terms "multiplicity", "range", "multiplicity
specification" and "multiplicity property" are used loosely.
The semantics part of the document does not mention
"multiplicity specification" in the description of association
semantics (page 2-57). One gets the impression
from this section that an association end is associated
with a single "multiplicity" which is defined as a "range
of nonnegative integers" (use of the term range often
implies a contiguous nature, this does not seem to be
the case here). The notation part of the OMG-UML
document gives a clearer picture of intent. It states
that each association end has a multiplicity specification
that defines a set of multiplicities ("a subset of the
open set of nonnegative integers", page 3-68), and also
gives examples of association-ends that are associated
with multiple multiplicities.
Our understanding of multiplicity can be expressed
as follows:
An association-end has a multiplicity speci-
fication, where a multiplicity specification is
a set of multiplicities, and a multiplicity is a
nonnegative integer range.
Using the above we developed the following formalization
of the multiplicity property. Consider the association
Rel shown in the diagram below:

[Diagram: classes CL1 and CL2 (each defined by an instance
schema) connected by the association Rel, with multiplicity
specification m::n, p::q at the CL1 end and u::v, s::t at the
CL2 end]

where m::n, p::q, u::v and s::t are ranges. The static
properties of the association Rel are specified in Z as
follows:

Rel_Sc
CL1_Inst; CL2_Inst
Rel : CL1 ↔ CL2
-------
dom Rel ⊆ cl1s
ran Rel ⊆ cl2s
∀ x : cl2s • (m ≤ #(Rel~(| {x} |)) ≤ n) ∨ (p ≤ #(Rel~(| {x} |)) ≤ q)
∀ x : cl1s • (u ≤ #(Rel(| {x} |)) ≤ v) ∨ (s ≤ #(Rel(| {x} |)) ≤ t)
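The quantified predicates above can be checked mechanically on a concrete configuration. The following Python sketch reuses the relational helpers (dom, ran, image, inverse) from the illustrative sketch in section 2; a multiplicity specification is encoded as a set of (lo, hi) range pairs, which is our own encoding for illustration rather than UML notation.

def in_some_range(n, ranges):
    # n satisfies a multiplicity specification if it falls in some range
    return any(lo <= n <= hi for (lo, hi) in ranges)

def assoc_ok(rel, cl1s, cl2s, cl1_end_ranges, cl2_end_ranges):
    if not (dom(rel) <= cl1s and ran(rel) <= cl2s):
        return False
    # the spec at the CL1 end constrains the links of each CL2 object
    if not all(in_some_range(len(image(inverse(rel), {y})), cl1_end_ranges)
               for y in cl2s):
        return False
    # the spec at the CL2 end constrains the links of each CL1 object
    return all(in_some_range(len(image(rel, {x})), cl2_end_ranges)
               for x in cl1s)

assert assoc_ok({("c1", "d1")}, {"c1"}, {"d1"}, [(1, 1)], [(1, 1)])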
From the above formalization an interesting interplay
between association-end multiplicity properties and
a class' multiplicity property can be identified. Consider
the case where m and p are greater than 0 and m ≤ p.
Then the expression
∀ x : cl2s • (m ≤ #(Rel~(| {x} |)) ≤ n) ∨ (p ≤ #(Rel~(| {x} |)) ≤ q)
implies that if there is at least one CL2 object in the
system, then there must be at least m CL1 objects in
the system (or else the predicate is not true). If the multiplicity
of the CL2 class is x ::y , where x ? 0, then the
multiplicity of the CL1 class is constrained in that the
lower bound must be greater than or equal to m. We
are currently using the above formalization to identify
categories of unnecessarily permissive association-end
and class multiplicities. The characterizations of the
categories that we develop can be used in CASE tools
to identify multiplicity constraints that are too permis-
sive. In the diagram below, the multiplicity of CL1 is
too permissive: at any time there will be at least 15
instantiated CL1 objects in existence. A more appropriate
multiplicity for CL1 is 15::*.

[Diagram: classes CL1 and CL2 joined by the association Rel;
the stated multiplicities force at least 15 CL1 objects to
exist whenever a CL2 object exists]
3.2.1 Dynamic Properties of Associations
An association-end can be changeable, frozen, or ad-
donly. A changeable association end is one in which no
restrictions are placed on how links are set up between
objects of the associated classes (the UML default).
If an association-end is frozen then the objects at the
frozen association end are referred to as target objects,
and those at the opposite end are referred to as source
objects (e.g., see Fig. 1). The OMG-UML notion of
frozen association ends is expressed as follows (pg. 3-
The property {frozen} indicates that no links
may be added, deleted, or moved from an
object (toward the end with the adornment)
after the object is created and initialized.
We interpreted this to mean that once a source object
is created then no additional links can be added
from the source object to target objects, and none of
the links to target objects created during the creation
of the source object can be deleted. A consequence of
this is that if no links are created between the source object
and target object when the source object is created
then no links to target objects can be created during
the lifetime of the source object.
In formalizing the UML description of frozen association
ends we had to consider the question of what
happens when an object at the frozen association end
(a target object) is deleted. We could not find any answers
in the OMG-UML document. Within our group
most felt that an appropriate answer is that the link is
also deleted. But others felt that this violated the constraint
that no links can be removed from the source
end during the lifetime of the source object (the emphasized
text states their interpretation of the phrase "after
the object is created and initialized" in the OMG-UML
description of frozen properties).

[Figure 1: An example of a frozen association end. The class
diagram shows classes A (source) and B (target), with instance
schemata A_Inst and B_Inst, joined by the association assoc,
frozen at the B end; the object structure shows an A object
linked to three B objects.]

For example, consider
the object structure shown in Fig. 1, in which an A object
is linked to three B objects (b1; b2; b3). The links
between the A object and the B objects are frozen. It
is clear that deletion of the A object would result in the
destruction of the links (but not necessarily the target
objects), but what happens if one of the B objects is
deleted before its linked source object is deleted is not
discussed in the UML document.
We could not find any additional information in the
OMG-UML document to resolve this ambiguity so we
defined two flavors of the frozen property: weak and
strong frozen properties. If assoc is an association that
has a strong frozen property at the association end connected
to B (see Fig. 1), then the deletion of a B object
in the object structure shown in Fig. 1 is not allowed until
after the associated A object is deleted. This shade of
frozen associations forces a death dependency between
source and target objects: A linked target object can be
deleted only after all its (frozen) linked source objects
are destroyed (a target object can be linked to more
than one source object if permitted by the associationend
multiplicity specification). If assoc has a weak
frozen property at the B end, then a linked B object
can be deleted independently of its linked source object,
resulting in the deletion of the corresponding link. In
this case an assoc link is frozen as long as either linked
object exists.
Let Assoc be a schema defining the static properties
of the association assoc shown in Fig. 1:
Assoc
A_Inst; B_Inst
assoc : A ↔ B
-------
dom assoc ⊆ as
ran assoc ⊆ bs

The A configuration set, as, is defined in the schema
A_Inst and the B configuration set, bs, is defined in the
schema B_Inst. The two variants of the frozen property
are expressed below:
Strong Frozen Property

FrozenDepAssoc
ΔAssoc
-------
(as ∩ as') ◁ assoc' = as' ◁ assoc

Weak Frozen Property

FrozenIndAssoc
ΔAssoc
-------
(as ∩ as') ◁ assoc' = (as' ◁ assoc) ▷ bs'
In the above schemata, as' ◁ assoc is a domain restriction
that returns the part of the assoc relation that
involves only the domain elements in as' (the configuration
objects in the after state). The symbol ▷ represents
range restriction and when applied to a relation
R and a set of range elements S, as in R ▷ S, it returns
the part of R that involves only the range elements that
are in S (see section 2).
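To make the difference between the two variants concrete, the following Python sketch expresses them as predicates over before/after states, reusing the restriction helpers from the illustrative sketch in section 2. The predicates transcribe our reconstruction of the schemata above; they are illustrative rather than an official UML definition.

def strong_frozen_ok(as_before, assoc, as_after, assoc_after):
    # surviving source objects keep exactly their old links, so a
    # linked target object cannot have been deleted
    survivors = as_before & as_after
    return (dom_restrict(survivors, assoc_after)
            == dom_restrict(as_after, assoc))

def weak_frozen_ok(as_before, assoc, as_after, assoc_after, bs_after):
    # surviving sources keep their old links, except links whose
    # target object has itself been deleted
    survivors = as_before & as_after
    return (dom_restrict(survivors, assoc_after)
            == ran_restrict(dom_restrict(as_after, assoc), bs_after))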
In the UML an association end is said to be addonly
if links can be added to the source object, but none of
the previously created links to the target objects can
be deleted. Again, we define two shades of the addonly
property which are formally characterized below for the
association shown in Fig. 2:
StrongAdd
ΔAssoc
-------
as' ◁ assoc ⊆ assoc'

WeakAdd
ΔAssoc
-------
(as' ◁ assoc) ▷ bs' ⊆ assoc'
[Figure 2: An example of an AddOnly association end. Classes A
(source) and B (target), with instance schemata A_Inst and
B_Inst, are joined by the association assoc, addonly at the
B end.]

In StrongAdd, once an assoc link is created between
a and b it cannot be removed until the a element is
destroyed. Consequently, the b element cannot be destroyed
until after the a element is destroyed. In WeakAdd
a linked b object can be deleted before its source a object
is deleted (in which case the link is deleted).
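The addonly variants admit the same executable treatment; in the sketch below (again a transcription of our reconstruction), old links may only grow in the strong case, and may be dropped only when their target object is deleted in the weak case.

def strong_addonly_ok(assoc, as_after, assoc_after):
    # every old link of a surviving source persists; new links may appear
    return dom_restrict(as_after, assoc) <= assoc_after

def weak_addonly_ok(assoc, as_after, assoc_after, bs_after):
    # old links persist unless their target object was deleted
    return ran_restrict(dom_restrict(as_after, assoc), bs_after) <= assoc_after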
3.3 Formalizing aggregation
In the UML an aggregation corresponds intuitively to
a whole/part association. Strong aggregation is called
composition or a "composite aggregate" in the UML,
and implies strong "ownership" of the parts by the whole.
Weak aggregation implies weak "ownership". Distinguishing
among (general) association, weak aggregation and
composition requires formalizing of the UML notion of
"ownership".
3.3.1 Formalizing simple composition and weak ag-
gregation
The UML description of composition implies that the
multiplicity specification at the whole end is a singleton
set that consists of a single multiplicity that is either
1::1 or 0::1 (OMG-UML, pg. 2-57). A composition
can be mathematically modeled as a function from the
component configuration set to the whole configuration
set. The form of the Z schema that captures static
properties of a composition is the same as that for an
association except that the composition relationship is
modeled as a function. Structurally, the functional relationship
between a part and its whole is a characteristic
of a composition. Note that some associations that
are not intended to be compositions can be modeled as
functions, so this characteristic does not distinguish a
composition from a general association.
The weak form of aggregation weakens the functional
relationship between components and their aggregates
to a general relation (allowing for the sharing of compo-
nents). Weak aggregation between a component class
and an aggregate class is thus structurally equivalent
to a general association between the classes.

[Figure 3: An encapsulating composition. The whole class Agg
has component classes Comp_1 and Comp_2, which are themselves
related by an association contained within the composition;
the visible multiplicities include 1::* and 1::3.]

One would
then expect that the distinguishing features of weak and
strong aggregation would appear in the form of dynamic
properties. As we discuss later, our formalization of the
dynamic properties of weak aggregation did not uncover
any distinguishing features.
3.3.2 Formalizing encapsulating composition
In the UML a composition can "contain" associations
(UML-OMG document, pg. 3-75).
The meaning of an association in a composition
is that any tuple of objects connected
by a single link must all belong to the same
container object.
We refer to a composition that "contains" associations
as an encapsulating composition.
If comp1 and comp2 are two component classes of
a whole class whole (where comp1map maps comp1
objects to their whole objects and comp2map maps
comp2 objects to their whole objects), and comp1 and
comp2 are related via an encapsulating association rel ,
then the property constraining the linking of comp1 and
comp2 can be expressed as follows:
∀ r : rel • comp1map(first(r)) = comp2map(second(r))
where first(r) returns the first element in the pair r
(a comp1 object) and second(r) returns the second element
in r (a comp2 object).
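This property is straightforward to check mechanically. In the following sketch, comp1map and comp2map are dictionaries mapping parts to their wholes; the names are hypothetical and simply mirror the text.

def encapsulated(rel, comp1map, comp2map):
    # every link must connect parts belonging to the same whole
    return all(comp1map[c1] == comp2map[c2] for (c1, c2) in rel)

# two parts of whole "w1" may be linked; parts of different wholes may not
assert encapsulated({("c1", "d1")}, {"c1": "w1"}, {"d1": "w1"})
assert not encapsulated({("c1", "d2")}, {"c1": "w1"}, {"d2": "w2"})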
For example, the aggregation structure shown in Fig. 3
is formalized in the following schema (in what follows,
the source of rel , defined in Rel Sc, is COMP 1 and the
target is COMP 2):
AggConfig
Agg_Inst
  Instance Schema for Agg
Rel_Sc
  Association Schema for Rel
comp1map : COMP1 ⇸ AGG
comp2map : COMP2 ⇸ AGG
-------
∀ r : rel • comp1map(first(r)) = comp2map(second(r))
  components linked by rel belong to the same whole
(multiplicity properties)
We also developed what we consider useful variations
of encapsulating associations, by weakening and
strengthening the OMG-UML property. These variations
are discussed in [3]. A particularly useful stronger
form of encapsulation that we developed requires that
objects from classes involved in an encapsulating association
must be linked if they occur as parts of an
aggregate. This means that the parts must appear as
pairs in an aggregate structure. For example, in a clinical
laboratory system a test request can be modeled
as a whole object consisting of test and sample object
pairs (i.e., tests without samples, and vice versa, are
not allowed in a test request).
3.3.3 Formalizing the dynamic aspects of aggrega-
tion
It is not clear to us what "coincident lifetimes" (OMG-
UML, pg. 3-71) means in the OMG-UML document.
A literal translation would result in the following inter-
pretation: The parts are created at the same time the
whole is created and they are destroyed when the whole
is destroyed (consequently, parts are frozen for the life-time
of the whole). This interpretation contradicts the
intent that a whole may "remove a part and give it to
another composite object" (OMG-UML, pg. 2-57) and
requires that all parts be associated with whole objects
(i.e., disallowing multiplicities of 0::1 at the whole end).
We propose the following deletion property for com-
positions: If a whole in a composition is deleted then all
the parts that are currently associated with the whole
are deleted. This property allows for the removal of
parts before the deletion of the whole (others have proposed
similar properties, e.g., see [2, 9]).
Given a composition between a component Comp
and a whole Agg that is defined by a schema Comp Agg ,
the deletion property for compositions is specified as
follows:
ΔComp_Agg
delaggs? : P AGG
  whole objects to be deleted
-------
delaggs? ⊆ aggs
  objects must be in start configuration
aggs' = aggs \ delaggs?
  objects are deleted
comps' = comps \ comp_agg~(| delaggs? |)
  components are removed
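The deletion property can be illustrated as a small state transformer in Python. Here comp_agg is a hypothetical part-to-whole dictionary standing in for the composition mapping of Comp_Agg; deleting wholes removes exactly the parts currently attached to them.

def delete_wholes(aggs, comps, comp_agg, delaggs):
    assert delaggs <= aggs                # objects must be in start config
    aggs2 = aggs - delaggs                # whole objects are deleted
    doomed = {c for c, w in comp_agg.items() if w in delaggs}
    comps2 = comps - doomed               # their current parts are deleted
    comp_agg2 = {c: w for c, w in comp_agg.items() if c not in doomed}
    return aggs2, comps2, comp_agg2

# a part removed from its whole before deletion survives the whole
aggs, comps, m = {"w1"}, {"p1", "p2"}, {"p1": "w1"}  # p2 was removed earlier
assert delete_wholes(aggs, comps, m, {"w1"}) == (set(), {"p2"}, {})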
In the previous section we noted that weak aggregation
is structurally equivalent to general association.
Unfortunately, the UML document does not provide
enough information to make a distinction between the
two concepts from either a structural or behavioral per-
spective. There are other ways of distinguishing weak
aggregation from general binary association that we are
currently exploring through formalization [8, 9].
3.4 Specifying Generalization Structures
The type space of a specialization hierarchy can be
viewed as a carving up of the root superclass object
space into subsets, where each subset is the object space
of a subclass. In a configuration, the subclasses of a superclass
are subsets of the superclass configuration set.
The static properties of a subclass, Sub, of a super-class
characterized by an instance schema Super , that
defines superclass configurations, supers , and for which
the root superclass elements are drawn from the set
ROOT , are defined in the following schema:
Sub_Inst
Super
  inheritance of superclass properties
subs : P ROOT
SUB : P ROOT
  subclass config. set and object space
-------
subs = supers ∩ SUB
The predicate states that the configuration set of the
subclass are precisely those objects in the configuration
set of Super , supers , that are in the object space of the
subclass (SUB ).
How a superclass configuration set is divided into
subclass configuration sets can be constrained as follows
. Overlapping and Disjoint Subclasses: A set of subclasses
is said to be disjoint if there are no objects
that are instances of more than one subclass
in the set, otherwise the set is said to consist of
overlapping subclasses. For subclass object spaces,
the disjoint property is stated
as followed in Z: disjointhSUBS It
is also possible to specify the disjoint property
on subclass configuration sets (and allow object
spaces to overlap). We suspect that the need for
this type of constraint does not occur often. The
OMG-UML document does not discuss whether
the disjoint property applies to object spaces or
configuration sets.
. Abstract (Exhaustive) Superclasses: An abstract
superclass is one in which each superclass configuration
object is also a configuration object of at
least one depicted subclass. A superclass that can
have configuration objects that are not configuration
objects of any depicted subclass is said to
be non-abstract. The property that a configuration
set supers is abstract with respect to its sub-class
configurations is formalized in Z as follows:
subs1 ∪ ... ∪ subsn = supers. This property can
also be stated for superclass object spaces. Again,
the OMG-UML document does not make it clear
whether the property applies to object spaces or
configurations.
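Both constraints are easy to check over configuration sets, as the following Python sketch illustrates; whether the same checks should also be run over object spaces is, as noted above, left open by the OMG-UML document.

def disjoint(sub_configs):
    # no object may appear in two subclass configuration sets
    seen = set()
    for s in sub_configs:
        if seen & s:
            return False
        seen |= s
    return True

def abstract(supers, sub_configs):
    # every superclass object belongs to at least one subclass
    covered = set().union(*sub_configs) if sub_configs else set()
    return supers <= covered

assert disjoint([{"x"}, {"y"}]) and abstract({"x", "y"}, [{"x"}, {"y"}])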
A schema characterizing the static properties of a
generalization hierarchy is created by including the schemata
of the leaf subclasses in the declaration part (together
these schemata define the static properties of all the
classes in the hierarchy) and predicates expressing disjoint
and abstract properties in the predicate part. An
example of a formalization of a generalization structure
will be given in the next section.
The precise characterization of a CD construct can be
used to infer structural properties of constructs [1]. Using
inference mechanisms (e.g., proof techniques) one
can explore consequences of a particular interpretation.
The need to establish properties can also arise during
application development out of a need to show that
a model conforms to certain requirements, or out of
challenges posed by reviewers of the models. Rigorous
analysis may also be required to tackle questions for
which answers are not explicitly given in the model.
Consider the top model in Fig. 4. This model involves
a composition between the superclass Analyzer and SampleSlot,
and a specialized composition between the
subclass RegularAnalyzer and SampleSlot. A specialized association (com-
position) is one that involves a subclass, Sub, and another
class, A, such that there exists an association with
the same name between a class, Super , that is an ancestor
of Sub, and the class A. The multiplicities on
an association at the superclass level are constraints on
the links that can be formed between objects of the
superclass and objects of the associated class. Given
that instances of subclasses are also instances of su-
perclasses, the multiplicities at the superclass level also
constrain the links that can exist between subclass objects
and objects of the associated class. A modeler can
further restrict the links at the subclass level by explicitly
stating multiplicities on the association at the sub-class
level. These multiplicities must be consistent with
the multiplicities given at the superclass level. This is
the case for the specialized composition in the top model
of Fig. 4.
In the top model of Fig. 4 one may ask whether an
aggregation between AdvanceAnalyzer and SampleSlot
is implied by the aggregation between Analyzer and
SampleSlot and if so what are the constraints on the
aggregation. An informal analysis of the CD leads to
the conjecture that is expressed in UML terms in Fig. 4.
[Figure 4: UML Inference Diagram. The top model shows the class
Analyzer with disjoint subclasses RegularAnalyzer and
AdvanceAnalyzer, a composition between Analyzer and SampleSlot
with multiplicity 1::* at the SampleSlot end, and a specialized
composition between RegularAnalyzer and SampleSlot; an INFERS
arrow leads to the bottom model, which adds the inferred
aggregation between AdvanceAnalyzer and SampleSlot.]

The diagram expresses the conjecture that the aggrega-
tion at the Analyzer level implies that an aggregation
also holds between AdvanceAnalyzer and SampleSlot ,
where the multiplicity at the SampleSlot end is 1::
and the multiplicity at the AdvanceAnalyzer end is 0::1.
The informal reasoning that produced this conjecture
follows:
An advanced analyzer is an analyzer, hence
it can be associated with one or more sample
slots. We note that this is true in the
absence of further information that can further
constrain the number of slots associated
with an advanced analyzer. It is known that
regular analyzers have sample slots and that
regular analyzers are distinct from advanced
analyzers (the respective classes are disjoint).
From this we can conclude that a sample slot
either belongs to a regular analyzer or to an
advanced analyzer, but not to both at the
same time. This implies that a sample slot is
either associated with an advanced analyzer
or it is not.
The specification characterizing the CD at the top
of the "inference" diagram in Fig. 4 declares:
- object spaces for Advanced and Regular Analyzers;
- configuration sets for Advanced and Regular Analyzers;
- the composition mappings: super_agg, relating slots to
analyzers (the superclass composition), and sub_agg, relating
slots to regs (the subclass composition);
and constrains them as follows:
- Advanced Analyzer config objects are precisely those config
objects of Analyzer that are in the object space of Advanced
Analyzer;
- Regular Analyzer config objects are precisely those config
objects of Analyzer that are in the object space of Regular
Analyzer;
- the two object spaces are disjoint;
- the superclass is abstract;
- the subclass composition specializes the superclass
composition.
The conjecture that a sample slot is a part of zero or
one advanced analyzer can be formally stated as the property
that super_agg relates each slot to at most one analyzer in
advs. An outline of the proof of the above conjecture is given
below:
- From the definition of super_agg we get that each slot is
related to exactly one analyzer.
- Given that analyzers = advs ∪ regs, the above is equivalent
to: each slot is related either to an analyzer in advs or to an
analyzer in regs.
- Given that regs ∩ advs = ∅, the above implies that each slot
is related to at most one analyzer in advs.
End of Proof Sketch.
The conjecture that an advanced analyzer can have
one or more sample slots (in the absence of information
that can further constrain this property) can be
expressed and proved similarly; the proof outline is not
given in this paper.
A precise semantics can also lead to the development
of transformation rules that can be employed within
CASE tools to (1) transform complex diagrams to semantically
equivalent simpler diagrams, and to (2) transform
diagrams into semantically equivalent diagrams in
which implicit properties are made explicit (the inverse
is usually a simplification). In [1] a small set of transformations
is discussed. We have since enlarged this
set with transformation rules for structures involving
specialized associations. In the remaining part of this
section we illustrate some of the rules we developed.
In defining the transformation rules for promoting
and demoting specialized binary associations in generalization
structures, a distinction is made between complete
and incomplete CDs. A complete CD is one that
states all the intended properties (i.e., the structure is
not intended to be extended). An incomplete CD is
one that does not state all the intended properties (i.e.,
the intent is that the structure will be extended). In
general, one can be precise when reasoning about complete
CDs. Constraints on possible extensions can be
inferred from incomplete CDs as will be demonstrated
in this section. For example, consider the CD in Fig. 5.
If this CD is considered complete, then objects of D are
linked only to objects of C . If the CD is incomplete,
then there is the possibility that objects of D can be
linked to objects of A that are not C objects.
Figure 5: Complete vs. incomplete CD
The rules presented in this section have all been
proven (in a manner similar to above). Space does not
allow us to present proofs for the rules we give here. Instead
we justify the rules informally. The informal presentation
of the rules is based on insights gained while
carrying out the proofs.
Demotion Rule 1: Association Demotion in a Complete
CD
Fig. 6 gives the rule for the demotion of a binary association
in a complete CD. In the left hand side (LHS)
model, elements of class A are linked to elements of
class D. Given that objects of a subclass are also objects
of its superclass, then elements of the subclass can
be linked to objects of class D. The CD is complete,
meaning that the association between subclass objects
and objects of the class D is constrained only by the
superclass constraints. Given that a superclass object
is associated with p..q D objects, then each subclass
object is also associated with p..q D objects, as shown
in the right hand side (RHS) diagram. In turn, a D
object is associated with m..n superclass objects. Some
(including none) of the superclass objects can be B objects
and some (including none) can be C objects. The
number of B (C) objects associated with a D object
cannot exceed n.
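The bounds derived by the rule can be restated compactly (our own
restatement; the helper names are illustrative):

class DemotionRule1 {
    // At the D end, each subclass object keeps the superclass
    // bounds p..q unchanged.
    static int[] subclassToD(int p, int q) {
        return new int[] { p, q };
    }
    // At the subclass end, a D object may be linked to anywhere
    // from zero up to n objects of one given subclass, since the
    // m..n links at the superclass level are split among the
    // disjoint subclasses.
    static int[] dToSubclass(int m, int n) {
        return new int[] { 0, n };
    }
}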
Figure 6: Demotion rule 1
Note that the inverse of this rule is a simplification:
the LHS diagram is a simplification of the RHS diagram.
Demotion Rule 2: Association Demotion in an Incomplete
CD
The LHS model in Fig. 7 is incomplete, meaning that
additional information that can further constrain the
association Rel when moved to the subclass level may
be missing. In this case we cannot be precise about the
cardinality at the subclass end of the association, but
we can define the boundaries of the missing information
pertaining to the association Rel. For example, at the
subclass end we know that the cardinality has to be a subset
of 0..n. We also know that the union of the associations
at the subclass level must be equal to the association
at the superclass level. If this is not the case, then the
properties expressed at the superclass level will be inconsistent
with the properties expressed at the subclass
level.
Figure 7: Demotion rule 2 (with annotations {s1?..s2? is a
subset of 0..n} and {t1?..t2? is a subset of 0..n})
We indicate an unknown value by placing a ? after
the value reference. In the RHS model s1?, s2? and
t1?, t2? are unknown cardinalities (completion of the
model requires that these values be supplied). Constraints
on unknown values are expressed as annotations
enclosed in '{','}' in the CD.
Promotion Rule 1: Association Promotion in a Complete
CD
The LHS CD in Fig. 8 is complete, meaning that the
depicted association is only between objects of C and D.
If this association is promoted, then only the C objects
of A's object space are mapped to D objects, the others
are not. The assumption here is that the object space
of B is not a subset of C (i.e. B is not a subclass of
C). If the superclass is non-abstract this transformation
still applies. The constraint on RelA in the RHS CD
stipulates that objects of the superclass A are linked to
either p..q or zero D objects.
The inverse of this transformation is a simplification:
the LHS diagram is a simplification of the RHS diagram.
Figure 8: Promotion rule 1
Promotion Rule 2: Association Promotion in an Incomplete
CD
The LHS CD in Fig. 9 is incomplete, which means that
the association shown can possibly involve other objects
of A that are not C objects. The cardinalities of the
inferred association at the superclass level in the RHS
model cannot be determined precisely, but we can infer
that the cardinality at the A end must be a subset of
m..* (a D object is linked to at least m C objects). The
cardinality at the D end is a subset of 0..*.
Figure 9: Promotion rule 2 (with annotations {s? is a subset
of m..* that includes m..n} and {t? is a subset of 0..* that
includes p..q})
5 CONCLUSION
In this paper we provide a glimpse of the role formal
techniques can play in developing the UML. The primary
goal of our work is to provide support for the
UML that enables its use as a rigorous modeling lan-
guage. This can be done by defining precise semantics
for UML notations and developing mechanisms that allow
developers to rigorously analyze the UML models.
This work is being carried out as part of a collaborative
effort to define a precise semantics for the UML that
is undertaken by the precise UML (pUML) group. The
approach taken by the pUML group is to use formal
techniques to explore and define appropriate semantic
foundations for OO concepts and UML notations, and
to use the foundations to develop rules for transforming
models to enable rigorous analysis. More information
about the pUML effort can be found on the following
website:
http://www.cs.york.ac.uk/puml/
We are currently working on expressing the Z formalizations
of CDs in the Object Constraint Language
(OCL) [10]. This exercise should provide us with insights
into the ability of the OCL to formally express
UML semantics.
In summary, we argue that mathematically-based
techniques can and should be used to rigorously analyze
the concepts upon which the semantics of a standard
notation are based.
Acknowledgements
The work presented in this paper benefited from interactions
with MIRG (Methods Integration Research
Group) and the pUML group. In particular, I would
like to thank Jean-Michel Bruel, Andy Evans, Brian
Henderson-Sellers, and Bernhard Rumpe for the discussions
we had around some of the issues presented
in this paper. This paper is based on work supported
by the National Science Foundation under Grant No.
CCR-9803491.
--R
Reasoning with UML class diagrams.
Roles for composite objects in object-oriented analysis and design
Rigorous object-oriented modeling: Integrating formal and informal notations
Towards rigorous analysis of fusion models: The MIRG experience.
Using Z as a specification calculus for object-oriented systems
Specifying and Interpreting Class Hierarchies in Z.
Information Modeling: An Object-Oriented Approach
The Object Management Group (OMG).
The Z Notation: A Reference Manual.
--TR
The Z notation
Information modeling
Roles for composite objects in object-oriented analysis and design
Rigorous Object-Oriented Modeling
Using Z as a Specification Calculus for Object-Oriented Systems
Reasoning with UML Class Diagrams
--CTR
Ji-Hyun Lee , Cheol-Jung Yoo , Ok-Bae Chang, Analysis of object interaction during the enterprise javabeans lifecycle using formal specification technique, ACM SIGPLAN Notices, v.37 n.6, June 2002
Ana María Funes , Chris George, Formalizing UML class diagrams, UML and the unified process, Idea Group Publishing, Hershey, PA,
Gail-Joon Ahn , Hongxin Hu, Towards realizing a formal RBAC model in real systems, Proceedings of the 12th ACM symposium on Access control models and technologies, June 20-22, 2007, Sophia Antipolis, France | UML;requirements class diagrams;precise semantics |
320391 | A language for specifying recursive traversals of object structures. | We present a domain-specific language for specifying recursive traversals of object structures, for use with the visitor pattern. Traversals are traditionally specified as iterations, forcing the programmer to adopt an imperative style, or are hard-coded into the program or visitor. Our proposal allows a number of problems best approached by recursive means to be tackled with the visitor pattern, while retaining the benefits of a separate traversal specification. | Introduction
The visitor pattern [GHJV95] allows a programmer to write
behavior that traverses an object structure without embedding
assumptions about the structure into the computational
code. This is a result of separating navigational code
from code that performs the computation. In most versions
of the visitor pattern, the exact sequence in which parts of
an object are visited is hard-coded, either into the object or
into the visitor.
We refer to the sequence of object visits that the navigational
code produces as the traversal of the visitor, and the
ideal object structure as the class graph of the application.
Where the object structure only contains concrete objects,
the class graph may contain abstract classes and
interfaces. Since the code implementing the traversal necessarily
has a detailed encoding of the class graph, binding
the visitor and the navigational code together harms maintainability
and reuse.
An alternative to hard-coding is to separate the navigation
from not only the visitor's behavior, but the visitor
as a whole.
This work has been partially supported by the Defense Advanced
Projects Agency (DARPA) and Rome Laboratory, under agreement
number F30602-96-2-0239. The views and conclusions herein are
those of the authors and should not be interpreted as necessarily
representing the official policies or endorsements, either expressed or
implied, of the Defense Advanced Research Project Agency, Rome
Laboratory, or the U.S. Government. This research was also supported in
part by the National Science Foundation under grants CCR-9629801
and CCR-9804115.
To appear in the Proceedings of OOPSLA '99.
Iterators [GHJV95] and strategies [Lie96] have
been suggested for this purpose. Both present the object
structure as a sequence of visit events, thereby linearizing
the traversal and enforcing an imperative style to program
the visitor.
We propose a domain specific language to express recursive
traversals of object structures. By not linearizing the
object structure being traversed, our language allows recursive
computations to be expressed using the visitor pattern.
A range of problems are succinctly solved by synthesizing
the result from sub-results. By making value-passing
explicit in our specification language, we are able to naturally
express recursive algorithms.
Our design allows an object to be used with many different
traversals, and a traversal to be used with many different
visitors. Factoring behavior into class graph, traversal, and
visitor makes a program more robust under evolution of the
class graph. A change in the class graph is likely to require
only local maintenance to the traversal definitions. Often,
the visitors need not be changed.
Some features of our approach are:
1. allowing the programmer to specify the order in which
the parts of an object are traversed,
2. allowing the current node to be visited several times
in the course of a traversal,
3. allowing the traversal to control its behavior dynamically,
4. providing a convenient mechanism for values to be returned
from visits,
5. providing a convenient mechanism for iterating a traversal
over collections of objects, and
6. allowing traversals to be named for re-use.
Traversal specifications are translated into Java classes,
and are thus first class values that can be stored in variables
or passed as arguments.
2 Related Work
Gamma et al. [GHJV95] present the visitor pattern, which
proposes to separate behavior from navigation. The former
is put into a separately compiled class and the latter is put
into the class diagram. The intended use of the visitor pattern
is to create software robust under the addition of new
behavior, but not under changes to the class structure. In
many cases, the navigational aspect is trivial, always covering
the entire object graph.
Lieberherr [Lie92, Lie96] argues that a complete traversal
of the object graph is often not what is needed, and suggests
that the visitor pattern should be modified so that the navigational
aspect is packaged up in a concise specification. By
gathering all the information in one place, the behavior of a
traversal and visitor may be conveniently analyzed without
hunting through the program for it. Lieberherr proposes
specifying the navigational aspects using a navigation language
that leaves unimportant details unspecified. These
details are inferred by the system at compile time from information
in the class diagram. The benefit of Lieberherr's
under-specified navigation language is that for many changes
to the class diagram, the meaning inferred by the system will
be unchanged. The goal is that none of the program should
require modification; Lieberherr calls this system Adaptive
Programming.
A pitfall of this approach is "surprise paths." A surprise
path occurs when a change to the class diagram leads the
system to infer a path that was not intended. These surprise
paths can alter the intended semantics of the program, and
by virtue of being surprising, can be difficult to locate. Most
importantly, by leaving unimportant details unspecified, the
navigational aspect language is a poor match for recursive
programming. Recursive programming seems to require that
we are explicit at all stages how the result is propagated and
computed. This is obviously in conflict with the desire to
leave details unspecified.
Our proposal maintains the separation between navigation
and behavior, but sacrifices some degree of compactness
for locality and robustness, meaning that changes to the program
induced by changes in the class definitions should be
localized and easily implemented. We aim for least surprise,
whereas Adaptive Programming aims for least modification.
Current implementations of Adaptive Programming require
the class diagram to be recompiled for each modification
to a navigational aspect [LO97]. Our organization also
allows navigational aspects to be added without recompiling
the class diagram.
3 Traversal Specification
3.1 An example
To motivate our discussion, we will start with a very simple
program written in Java. The example we choose is to sum
the values at all the leaves of a tree. The class diagram of our example is
shown in figure 1.
A Java program in the visitor style looks like:
abstract class Tree {
  abstract void visitAllLeaves(TreeVisitor tv);
}
class Node extends Tree {
  Tree left, right;
  void visitAllLeaves(TreeVisitor tv) {
    left.visitAllLeaves(tv);
    right.visitAllLeaves(tv);
  }
}
class Leaf extends Tree {
  int val;
  void visitAllLeaves(TreeVisitor tv) {
    tv.visitLeaf(this);
  }
}
Figure 1: Class diagram for simple tree
Traversal ::= traversal TraversalName =
              TraversalEntry TraversalEntry*
TraversalEntry ::= ClassName => Action Action*
Action ::= traverse PartName ;
         | visit MethodName ;
Figure 2: The skeleton of the Traversal Specification grammar
interface TreeVisitor {
  void visitLeaf(Leaf l);
}
class SumVisitor implements TreeVisitor {
  int acc = 0;
  void visitLeaf(Leaf l) { acc += l.val; }
}
class Top {
  Tree tree;
  void visitAllLeaves(TreeVisitor tv) {
    tree.visitAllLeaves(tv);
  }
  int sum() {
    SumVisitor v = new SumVisitor();
    visitAllLeaves(v);
    return v.acc;
  }
}
In the coming sections we will show different versions of the
same program.
3.2 Some Syntax
We wish to succinctly describe traversals over the object
graph. We do this by specifying a list of actions to be taken
when entering an object of a specified class. As a first ap-
proximation, the traversal specification has the grammar
shown in figure 2. (The term "traversal specification" is
used elsewhere for a slightly different concept.)
Before explaining the semantics of the grammar, we transcribe
our summing example to this style.
traversal visitAllLeaves =
  Top => traverse tree;
  Node => traverse left;
          traverse right;
  Leaf => visit visitLeaf;
abstract class Tree { }
class Node extends Tree {
  Tree left, right;
}
class Leaf extends Tree {
  int val;
}
class SumVisitor implements visitAllLeaves_Visitor {
  int acc = 0;
  void visitLeaf(Leaf l) {
    acc += l.val;
  }
}
class Top {
  Tree tree;
  int sum() {
    SumVisitor v = new SumVisitor();
    visitAllLeaves.traverse(this, v);
    return v.acc;
  }
}
The code has been reorganized. The navigation is now
specified in the traversal visitAllLeaves. This organization
is immediately an improvement over the plain Java
code, in that all the code that is part of a behavior has
been grouped together.
The traversal is started by invoking the static method
traverse on the class visitAllLeaves, which is generated by
the system from the traversal specification. We pass the
object (this) to be traversed and the visitor (v) to traverse
it with to the traverse method. The visitor is passed as the
first argument to the traversal when the traversal is invoked,
after which it is implicit.
The traversal proceeds by inspecting the current object
to determine its class. The list of TraversalEntries is scanned
to find the most specific entry for the object. Once an entry
is found, the corresponding list of actions is executed in
order of occurrence. The action traverse p invokes the
traversal recursively on the object o.p. The action visit v
invokes method v of the visitor with the current object o as
argument.
In general, there may be several applicable entries for an
object, as all of its superclasses may have entries. To call
the next most specific entry, the super directive is provided.
Thus, both overriding and extending behavior is supported.
If an object is traversed for which no entry can be found, no
action is performed, and the void result is returned.
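For the summing example, the navigation code that the system
produces might look roughly as follows (a simplified sketch using
static methods; the entry-method names follow the scheme of
Section 5.3, and the details are illustrative):

class visitAllLeaves {
    // dispatch: find the most specific entry for the host's class
    static void traverse(Tree host, visitAllLeaves_Visitor v) {
        if (host instanceof Node) entry_Node((Node) host, v);
        else if (host instanceof Leaf) entry_Leaf((Leaf) host, v);
        // no applicable entry: do nothing (the void result)
    }
    static void traverse(Top host, visitAllLeaves_Visitor v) {
        entry_Top(host, v);
    }
    static void entry_Top(Top host, visitAllLeaves_Visitor v) {
        traverse(host.tree, v);      // traverse tree;
    }
    static void entry_Node(Node host, visitAllLeaves_Visitor v) {
        traverse(host.left, v);      // traverse left;
        traverse(host.right, v);     // traverse right;
    }
    static void entry_Leaf(Leaf host, visitAllLeaves_Visitor v) {
        v.visitLeaf(host);           // visit visitLeaf;
    }
}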
The discrimination of which is the most precise entry for
an object could be done using Java's introspection facilities,
but as we have stated that we will be generating Java code
from the traversal specification, we instead use inheritance
information from the class graph to generate efficient Java
code. In later sections, we will be using information from the
class graph to provide types for variables in the traversal.
3.3 Changing the Class Diagram
The motivation behind our suggestion was both to make
recursive programming less painful (by automatically generating
lots of code from a minimal functional description)
and also to make code robust to changes in the class diagram.
Figure 3: Class diagram for a more complicated set of trees
To illustrate the changes needed, let us modify the class
diagram of our running example slightly, to have Nodes with
just one child as well as the two we've already seen. See
figure 3.
To keep the names meaningful, Node has become Two-
Node. The new traversal specification becomes:
traversal visitAllLeaves =
  Top => traverse tree;
  TwoNode => traverse left;
             traverse right;
  OneNode => traverse only;
  Leaf => visit visitLeaf;
Apart from the global renaming of Node to TwoNode, no
further changes are necessary. Obviously, not all modifications
to the class diagram are as simple, but the modifications
required will often be similar. We make this claim
because looking back at the program, we see that the traversal
specification makes only local assumptions about the
class diagram. Specifically, the traversal only assumes that
a given class will be able to access the parts that the Traver-
salEntry for that class wishes to traverse.
It can be argued that the corresponding modifications
needed for the plain Java code are almost as painless as the
changes to the traversal specification. The Java code, how-
ever, will require changes to separated parts of the program,
while the traversal specification organizes the code so that
all the changes are localized to the traversal specification.
3.4 Passing Parameters and Return Values
It is important to be able to pass arguments to and to return
results from traversals and visitors. To do this, we extend
our traversal specification grammar to that in figure 4.
In this grammar, both visitors and traversals take arguments
and can return results. The result can either be
bound to a traversal variable (as opposed to instance variables
on the current object) for subsequent use or returned
to the caller. There is also a special variable host, which is
bound to the host object that is being traversed.
The grammar also allows for types to be optionally spec-
ified. While we perform some type inference, it is mainly as
a convenience to avoid having to repeat type declarations.
Traversal ::= traversal TraversalName Parameters =
              TraversalEntry TraversalEntry*
TraversalEntry ::= ClassName [returning Type] =>
                   Action Action*
Action ::= [Type VariableName =] ActionRhs ;
ActionRhs ::= traverse PartName Arguments
            | visit MethodName Arguments
            | super Arguments
            | VariableName
Parameters ::= ( Parameter [, Parameter]* )
Parameter ::= [Type] VariableName
Arguments ::= ( Argument [, Argument]* )
Argument ::= VariableName
Figure 4: A more complete Grammar
When dealing with collections, for instance, we require that
the programmer specify the type of the collection's elements.
Section 3.6 covers collections in greater detail.
Let us rework our example using some of the new fea-
tures, so that the accumulator is kept in the traversal rather
than in the visitor.
traversal visitAllLeaves(int acc) =
  Top => traverse tree(acc);
  TwoNode => int lacc = traverse left(acc);
             traverse right(lacc);
  OneNode => traverse only(acc);
  Leaf => visit visitLeaf(acc);
abstract class Tree { }
class TwoNode extends Tree {
  Tree left, right;
}
class OneNode extends Tree {
  Tree only;
}
class Leaf extends Tree {
  int val;
}
class SumVisitor implements visitAllLeaves_Visitor {
  int visitLeaf(Leaf l, int sumsofar) {
    return sumsofar + l.val;
  }
}
class Top {
  Tree tree;
  int sum() {
    return visitAllLeaves.traverse(this, new SumVisitor(), 0);
  }
}
Now the visitor in the example has no internal state; the
accumulator is passed as a return value/parameter (although,
in general, visitors can still have state). Notice
also that we now pass one more argument than before to the
traverse method; this is because the traversal now takes an
argument.
The threading of the accumulator mimics the original
example, but doesn't illustrate the recursive style we prefer.
It would be more elegant to have the computation proceed
in the natural recursive fashion:
traversal visitAllLeaves() =
  Top => traverse tree();
  Node => int lacc = traverse left();
          int racc = traverse right();
          visit combine(lacc, racc);
  Leaf => visit visitLeaf();
abstract class Tree { }
class Node extends Tree {
  Tree left, right;
}
class Leaf extends Tree {
  int val;
}
class SumVisitor implements visitAllLeaves_Visitor {
  int visitLeaf(Leaf l) { return l.val; }
  int combine(Node n, int left, int right) {
    return left + right;
  }
}
class Top {
  Tree tree;
  int sum() {
    return visitAllLeaves.traverse(this, new SumVisitor());
  }
}
The program now looks much more like what a functional
programmer would expect. The result is inductively
computed from the results of the subtrees. The traversal is
responsible for passing data, and the visitor is responsible
for computing data. The two sub-results are combined by
calling a method on the visitor.
3.5 Traversing Non-Parts
It is sometimes desirable to traverse objects that are not connected
to the object structure being traversed. An example of
this would be to traverse an object that is the result of some
previous computation. One way to achieve this is to make a
visitor method that invokes the traversal on the object ar-
gument, but this approach has the drawback of hard-coding
the name of the traversal into the visitor.
Instead, we extend the traverse directive to traverse
over traversal variables in addition to instance variables of
the host object. Traversal variables shadow the instance
variables of the host. To reflect this added capability, the
nonterminal PartName in our grammar is changed to Part-
OrVarName, but the language parsed is unchanged.
We give an example of how this facility may be used in
the next section.
3.6 A special case: Collections
Collections are so generally useful that it makes sense to
deal with them specially. We propose to do this by adding
an operator called for-each to the traversal specification
language. The grammar is shown in figure 5. The goal of
for-each is to emulate a fold, or reduction, [FH88] over the
collection. Since our target language is Java, we use the iteration
construct offered by its class library: Enumerations.
All we require of the Collection class is that it has a method
called elements() that returns an Enumeration. Our strategy
is to use the methods provided by Enumeration to access
each element in the collection, perform a traversal on each
element thus accessed, and combine the results using the
method name passed to for-each.
ActionRhs ::= traverse PartOrVarName Arguments
            | method MethodName Arguments
            | for-each PartOrVarName
                MethodName Argument Arguments
Figure 5: Reducing collections.
Figure 6: A Vector of Complex numbers
The collection to be traversed is in PartOrVarName of
the current host. This indirection is necessary because there
might be several collections (Vectors) in our class graph,
and we need to generate different traversal code for each,
depending on the types of the elements it contains.
for-each calls the MethodName method of the visitor
once for the result of each element in the collection, passing
it two arguments: the result of traversing the element
and the accumulated result. The first Argument is the ac-
cumulator's initial value, and the remaining Arguments (in
parentheses) are passed to the traversal of each element. The
result of traversing the collection is the accumulated result.
To illustrate, let us analyze the use of for-each to find
the maximum magnitude of a list of Complex numbers. Let
our class diagram look like figure 6. Top has a part called
nums that leads to a collection of Complex.
To traverse the collection, we might write:
traversal foldComplex() =
  Top => double maxinit = method getinit(nums);
         for-each nums calcmax maxinit();
  Complex returning double => visit magnitude();
class MaxVisitor implements foldComplex_Visitor {
  double getinit(Vector v) { return 0.0; }
  double calcmax(double magn, double maxsofar) {
    return Math.max(magn, maxsofar);
  }
  double magnitude(Complex comp) {
    return comp.magnitude(); // assumes Complex provides a magnitude() accessor
  }
}
class Top {
  Vector nums;
  double findmax() {
    return foldComplex.traverse(this, new MaxVisitor());
  }
}
When for-each nums calcmax maxinit() is invoked by
the traversal, the current object is an instance of Top, and
the collection to be traversed is in its nums instance vari-
able. An Enumeration is created by invoking the method
elements() on the collection. The traversal iterates over all
the elements of the enumeration, using code that is functionally
equivalent to the following snippet:
double foreach(
    MaxVisitor v, Enumeration enum, double initacc) {
  double acc = initacc;
  while (enum.hasMoreElements()) {
    Complex c = (Complex) enum.nextElement();
    double r = foldComplex.traverse(c, v);
    acc = v.calcmax(r, acc);
  }
  return acc;
}
Three issues are worth noting.
1. As mentioned above, we use type information from
the class diagram to determine that we are traversing
a collection of Complex, which allows us to perform the
proper cast to the result of nextElement().
2. However, the class diagram does not contain enough
information for us to determine all the types that we
need to know in order to produce the above snippet of
code. One such type is the type of traversal result,
which we assert is double. We must do type inference
to determine these types, which is discussed in section
5.2. In this particular case, the fact that maxinit is a
double is not enough; we also need to know that the
result of traversing Complex is double. In the above
code, we explicitly annotate the entry for Complex, but
in other cases this might be inferable by the system.
3. After retrieving an Object from a collection, we cast it
to the type we know it should have, and then proceed
to traverse it.
In some cases, we don't want to traverse entire collec-
tions, but instead only a subset thereof. In section 3.5 we introduced
the concept of traversing variables from the traversal.
We can use this facility to create a suitable collection before
traversing it.
For example, if we had wished to only find the maximum
magnitude of every other Complex value, we might
have written our traversal like this:
traversal foldComplex() =
  Top => Vector mynums = visit makemynums();
         double maxinit = method getinit(mynums);
         for-each mynums calcmax maxinit();
  Complex returning double => visit magnitude();
and added the following method to the visitor:
Vector makemynums(Top t) {
  Vector res = new Vector();
  Enumeration e = t.nums.elements();
  while (e.hasMoreElements()) {
    res.addElement(e.nextElement());
    if (e.hasMoreElements()) e.nextElement();  // skip every other element
  }
  return res;
}
The makemynums method makes a collection containing
every other element from the original, which is then traversed
using for-each.
3.7 Controlling the Traversal
We would like the traversal to be able to make decisions
at runtime as to how to proceed. For example, we might be
searching for some item in a binary tree; depending on the
value stored in the node, we need to search the left subtree
or the right. Once we have found the value, the traversal is
done.
To this end, we introduce the thunk directive, which produces
a thunk from a list of Actions. Its grammar is in figure
8. A thunk can reference the variables that are visible where
it is declared, but cannot change their values. Thunks are
Java objects that have only one method - invoke. They
are typically passed to methods, but can also be invoked by
the traversal through the invoke directive.
The return type is determined from the body of the
thunk. Since thunks only have return types, this is the basis
for their class name. A Thunk_int is a thunk that returns
an int when invoked.
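The generated thunk classes are small; a sketch (our illustration,
following the naming scheme above) of the class for int-returning
thunks, together with a hand-written instance that captures a
visible variable:

abstract class Thunk_int {
    abstract int invoke();   // run the thunk's actions, return the result
}

class ThunkDemo {
    // an anonymous subclass closes over the (final) variable base;
    // invoking it twice re-runs the body, since nothing is cached
    static Thunk_int addOneTo(final int base) {
        return new Thunk_int() {
            int invoke() { return base + 1; }
        };
    }
}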
To illustrate, figure 7 shows how thunks might be used
with a binary search tree. Each element of the tree is either
a Node, containing two subtrees, or a Leaf, containing an
Item. At each Node, we create two thunks - one to search
either subtree - and pass them to the visitor. The visitor
determines which subtree to search and invokes the proper
thunk.
Using thunks, a wide variety of quite complex behaviors
can be programmed. In addition to searching binary trees,
they provide a convenient way to traverse cyclic objects.
In some situations it is desirable to repeatedly perform the
same traversal, for example iterating a stateful operation to
a fixpoint. Thunks may be invoked several times in a row,
allowing such computations to be expressed without creating
a separate traversal for the iterative computation. Instead
of invoking the thunks immediately, the visitor can decide
to store them in some table, and invoke them only if it turns
out that their results are needed at a later time.
A small extension to what we have implemented would
be to introduce memoizing thunks that cache their results;
subsequent invocations would just return the cached result
instead of reperforming the traversal.
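Such a memoizing thunk could be layered over the Thunk_int class
sketched above along these lines (the wrapper is our own):

class MemoThunk_int extends Thunk_int {
    private final Thunk_int inner;
    private boolean cached = false;
    private int result;

    MemoThunk_int(Thunk_int inner) { this.inner = inner; }

    int invoke() {
        if (!cached) {
            result = inner.invoke();   // perform the traversal once
            cached = true;
        }
        return result;                 // later calls reuse the cached value
    }
}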
3.8 Calling Other Traversals
Being able to call other traversals is very useful, as it enables
us to split common abstractions into units that can be reused
by several other traversals. The translation is extended in
a straightforward way; each traversal generates an interface
that visitors that are to be used with that traversal must
implement. If a visitor is to be used with several traversals,
the visitor will need to implement each of their interfaces.
Another use of multiple traversals is to encode state in
the traversals; depending on which traversal is active, the
computation is in a certain state. If the traversals are mutually
recursive, they must be specified in a block, as they
must be type checked as a group.
Figure 8 shows the final version of the grammar. One
traversal may call another traversal on a specified part by
using the othertrav directive. This is a recursive call; once
the called traversal terminates, the calling traversal resumes.
Traversals ::= Traversal [and Traversal]*
Traversal ::= traversal TraversalName Parameters =
              TraversalEntry TraversalEntry*
TraversalEntry ::= ClassName [returning Type] =>
                   Action Action*
Action ::= [Type VariableName =] ActionRhs ;
ActionRhs ::= traverse PartOrVarName Arguments
            | othertrav TraversalName
                PartOrVarName Arguments
            | visit MethodName Arguments
            | method MethodName Arguments
            | for-each PartOrVarName
                MethodName Argument Arguments
            | super Arguments
            | thunk { Action Action* }
            | invoke ThunkName
            | VariableName
Parameters ::= ( Parameter [, Parameter]* )
Parameter ::= [Type] VariableName
Arguments ::= ( Argument [, Argument]* )
Argument ::= VariableName
Figure 8: The complete grammar
Figure 9: The Inlaws Example
4 The Inlaws Example
The inlaws problem [Wer96] illustrates a weakness in the visitor
pattern. When an object plays different roles depending
on the context in which it appears, giving role-specific behavior
becomes tedious. One workaround is to keep a state
variable in the visitor that is updated during the traversal to
reflect the behavior the visitor should have. This approach
forces the behavior of the visitor to be aware of navigational
details, which is contrary to the philosophy of the visitor
pattern.
Figure 9 shows a class diagram for the inlaws problem.
A Person can be Married or NotMarried, and has zero or
more siblings. An inlaw is the spouse of a sibling or a sibling
of a spouse.
We start out with a person, and now wish to apply some
operation (an inlaw visit, for example) to all the inlaws of
that person. The visitor pattern has difficulty expressing
this problem because the class Person plays the role of self,
sibling, spouse, and inlaw. The behavior of the visitor depends
on the role, not the class.
(Class diagram: Top has a part tree; a Node has left and right
subtrees; a Leaf has an item of type Item.)

traversal binsearch(int goal) =
  Top => traverse tree(goal);
  Node => Thunk_Item gol = thunk { traverse left(goal); };
          Thunk_Item gor = thunk { traverse right(goal); };
          visit choose(goal, gol, gor);
  Leaf => Thunk_Item getitem = thunk { traverse item(goal); };
          visit atleaf(goal, getitem);

class SearchVisitor implements binsearch_Visitor {
  Item choose(Node n, int goal, Thunk_Item left, Thunk_Item right) {
    if (goal > n.key) return right.invoke();
    else return left.invoke();
  }
  Item atleaf(Leaf l, int goal, Thunk_Item get) {
    if (goal == l.key) return get.invoke();
    else return null;
  }
}
Figure 7: Code to search a binary tree
Borrowing from solutions to similar problems in functional
programming, we solve this problem by encoding the
state of the visitor in the traversal name. Instead of having
one traversal, we have four. The first traversal starts from
the person whose inlaws we seek:
traversal myself() =
  Person => othertrav mysibling siblings();
  Married => int sibs = super();
             int spouse = othertrav myspouse spouse();
             visit combine(spouse, sibs);
  NotMarried => super();
Anybody's siblings may lead to inlaws, so both Married
and NotMarried delegate to the traversal entry for their superclass
Person. A Married person also has inlaws via the
spouse. We traverse to those using the myspouse traver-
sal. Finally, the results from a Married person's spouse and
siblings are combined using the combine visit on the visitor.
and traversal mysibling() =
  Siblings => int val = visit initval();
              for-each sibs reduce val();
  Married => othertrav myinlaw spouse();
  NotMarried => visit initval();
A list of siblings is dealt with by using for-each. The
visitor is assumed to have a visit initval that returns a
suitable initial value for the reduction. The reduction is
performed using the method reduce on the visitor. If a
sibling is Married then we have found an inlaw: the spouse.
If the sibling is NotMarried, then initval is used to return
a suitable result.
and traversal myspouse() =
  Person => othertrav myinlaw siblings();
In a similar vein, any siblings of a spouse are also inlaws.
and traversal myinlaw() =
  Siblings => int val = visit initval();
              for-each sibs reduce val();
  Person returning int => visit inlaw();
When we have found an inlaw, we invoke the inlaw visit
on the visitor. We specify a return type for the inlaw
visit, so as to give the type checker a type to propagate.
The type checker will determine that all the entries of the
traversals myself, mysibling, myspouse, and myinlaw return
the same type. As Java doesn't allow polymorphic
method signatures, we have to specify at least one of them.
Above, we chose to annotate the Person entry as returning
int.
Like the sibling, a list of inlaws is reduced to one value
by using for-each and initval. Each inlaw is visited, and
the results are reduced.
We can now use these traversals to count the number of
inlaws of a person.
class InlawsCounter
    implements myself_Visitor, mysibling_Visitor,
               myspouse_Visitor, myinlaw_Visitor {
  int initval() { return 0; }
  int combine(Married h, int a, int b) {
    return reduce(a, b);
  }
  int reduce(int a, int b) { return a + b; }
  int inlaw(Person host) { return 1; }
}
class Person {
  int countinlaws() {
    return myself.traverse(this, new InlawsCounter());
  }
}
5 Translating to Java
In the following section we describe the details of the translation
of traversals into Java. The translation is fairly straight-
forward; the largest task in the translation is performing
type inference on the traversal specification. The process of
type checking incidentally also verifies that the traversal is
consistent with the class diagram.
Our code generation strategy assumes that the classes
over which we are traversing are written beforehand, and
cannot be recompiled. This allows us to traverse over object
structures from third party classes, so long as we are
able to construct a class diagram for the subset of the object
structure we wish to traverse. However, developing the
traversal and the classes traversed in parallel is equally well
supported, since the traversal specifications are translated
into Java classes, which can be compiled separately or together
with the user's classes.
5.1 Class Diagram
In order to infer the types for the traversal, we must know
the structure of the program in which it is to be used. The
class structure is needed for type-checking and to insert the
proper casts for elements extracted from collections.
The class diagram describes the type structure of the
program. We have used several graphical representations of
class diagrams already (for example figure 9). The class diagram
encodes three different types of edges between classes.
- Part edges: relationships described by the parts
(or variables) of a class. These edges can be traversed
explicitly through the traverse directive. All part
edges are named.
- Inheritance edges: when an object is traversed, the
most precise TraversalEntry for that class is executed.
An entry can choose to extend, rather than override,
the behavior of its superclass by invoking the super
directive.
- Collection edges: the implicit edges inside collection
classes that point to each element. These edges
can be traversed by a for-each directive. All collection
edges are named.
It would be nice if an inheritance edge between a class c
and its subclass c' allowed the return type of the traversal of
c' to be a subtype of the return type of the traversal of c.
Unfortunately, Java forbids overriding
inherited methods with methods having different return
types, so the constraint is strengthened to an equality.
5.2 Type Checking and Consistency
In the input grammar, all types are optional. We must know
the type of all variables and the return types of all traversals
in order to generate code for them. The result of type
checking is to determine these types. Additionally, we need
to generate an interface that all visitors to be used with
the traversal must implement, which also requires detailed
information about the types of the system.
While the class diagram and traversal specification do
specify many of the types involved, they do not specify all.
Typically, for many of the classes in the traversal, a return
type will not be specified, and for-each needs to infer the
type returned by traversing an object from the collection.
Since there are no function types or polymorphism, we
can infer types by simple constraint-solving [Wan87]. Since
all constraints are equivalence relations, it is possible to sort
all type variables into equivalence classes in O(n α(n)) time,
where α is the inverse Ackermann function, which realistically
never returns a value greater than 4 [CLR90]. The rules for
generating the constraints are given in figure 10. We take
the opportunity to both check that the traversal is consistent
with the class diagram and to generate the interface for
visitors to be used with the traversal.
A traversal is considered consistent with the class diagram
if we can verify that all parts that the traversal mentions
exist where the traversal expects to find them. Also,
the part used for for-each must be a collection class. Both
of these conditions are verified as a side effect of generating
the constraints. The constraints are of the form t1 = t2,
where t1 and t2 are type variables or types.
Once we have generated the constraints we solve them
by sorting type variables into equivalence classes. If we find
that this is not possible, we have a typing error. If we find
that some type variables are not equivalent to any Java type,
then the traversal is polymorphic in those type variables.
Since Java does not yet have parametric polymorphism, we
currently regard this as a type error. In the inlaws example
in the previous section, we had to specify the return type of
the inlaws visit to avoid the traversal becoming polymorphic.
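The equivalence classes can be maintained with a standard
union-find structure; a minimal sketch (our own; representing
concrete Java types as strings is an illustrative simplification):

class TypeVar {
    TypeVar parent = this;
    String bound;                  // e.g. "int"; null while still free

    TypeVar find() {
        TypeVar t = this;
        while (t.parent != t) {    // path halving
            t.parent = t.parent.parent;
            t = t.parent;
        }
        return t;
    }

    // merge two equivalence classes; conflicting bounds are a type error
    static void unify(TypeVar a, TypeVar b) {
        TypeVar ra = a.find(), rb = b.find();
        if (ra == rb) return;
        if (ra.bound != null && rb.bound != null
                && !ra.bound.equals(rb.bound))
            throw new RuntimeException(
                "type error: " + ra.bound + " != " + rb.bound);
        if (ra.bound == null) ra.parent = rb;
        else rb.parent = ra;
    }
}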
Once we have determined the values for all type vari-
ables, we are ready to generate Java code.
5.3 Generated Java
Figures 11 and 12 show the details of how Java code
is generated. The strategy is to generate a class for each
traversal. For each class in the application a method with
the name entry_classname is generated. If a class has a
traversal entry, that is used to generate the body of the
method. Otherwise, a default entry that delegates to the
entry of the superclass is used. The method for java.lang.Ob-
ject is empty by default. Additionally, methods to find the
most precise traversal entry for an object (effectively simulating
dynamic dispatch) are generated. These dispatch
methods have the name traverse, and are overloaded on
the type of their first argument.
After type inference, we know unambiguously the signature
for any visitor method. In order to satisfy the type
safety requirements of Java, we output these signatures into
a Java interface traversalname_Visitor, and require all Visitors
used with the traversal to implement the interface. The
interface allows the visitor and the traversal it is to be used
with to be compiled separately.
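For instance, for the recursive summing traversal of Section 3.4,
the generated interface contains one signature per visit, with the
host as first parameter and the inferred types filled in (a sketch;
the exact formatting of the generator's output may differ):

interface visitAllLeaves_Visitor {
    int visitLeaf(Leaf l);
    int combine(Node n, int left, int right);
}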
By generating the traversal code into a separate class,
we are able to traverse any class so long as we are able to
produce a class diagram for it. This allows us to traverse
classes we are not able to recompile, but we need to simulate
dynamic dispatch on traversal entries by searching for and
invoking the most specific traversal entry for an object.
Furthermore, by putting the traversal code into one class,
it becomes possible to use traditional techniques such as
subclassing to extend and evolve existing traversals. This is
discussed in section 6.
6 Future Work and Conclusion
One of the uses of the visitor pattern is to extend the functionality
of existing classes [FFK98]. The typical visitor
pattern takes the traversal of the class graph as given; the
visitor always traverses the entire object graph.
Figure 10: Type Constraint Generation Rules. The figure gives
one constraint rule per construct: a "returning" annotation on an
entry for a class equates the annotated type with the return type
of the traversal of that class; traverse part (args) equates the
argument types with the parameter types of the traversal and has
the type of the traversal of part; othertrav trav part (args) does
the same for the named traversal trav; visit meth (args) equates
the host type and argument types with meth's parameter types and
has meth's return type; method meth (args) is the same without the
host; a bare varname has that variable's type; for-each part meth
initvar (args) introduces fresh type variables for the element type
of the collection part and equates the initial accumulator, the
return type of the element traversal, and the two parameters of
meth, yielding meth's return type; invoke thunk has the thunk's
return type.
Figure 11: Java translation for Traversals, part 1. For each type
returned from a thunk, an abstract class Thunk_type with a single
abstract method type invoke() is generated. Each traversal
name(params) is translated into a class name_Traversal that stores
the visitor (field thevisitor, set by the constructor). For each
class in the class graph, with immediate subclasses subclass1 ...
subclassn, a dispatch method is generated:

  type traverse(class host, paramlist) {
    if (host instanceof subclass1)
      return this.traverse((subclass1) host, paramlist);
    ...
    else if (host instanceof subclassn)
      return this.traverse((subclassn) host, paramlist);
    else return this.entry_class(host, paramlist);
  }

A static method traverse(classname host, name_Visitor visitor,
params) creates a new name_Traversal(visitor) and starts the
traversal at host. Each traversal entry classname [returning type]
=> a1 ... an is translated into a method
type entry_classname(classname host, params) whose body is the
translation of the actions a1 ... an.
Figure 12: Java translation for Traversals, part 2. An action of
the form Type v = rhs; is translated with the binder "Type v =", a
plain action with an empty binder, and the final action of an
entry with the binder "return". Writing get for the traversal
variable part or for host.part, the right-hand sides translate as
follows: traverse part (args) becomes binder this.traverse(get,
args); othertrav travname part (args) becomes binder
travname.traverse(get, (travname_Visitor) thevisitor, args); visit
meth (args) becomes binder thevisitor.meth(host, args); method
meth (args) becomes binder thevisitor.meth(args); super (args)
becomes binder entry_superclass(host, args), where superclass is
the closest superclass of the current class; a bare varname becomes
binder varname; for-each part meth initvar (args) expands into the
enumeration loop of Section 3.6, accumulating with thevisitor.meth;
invoke thunkname becomes binder thunkname.invoke().
Under this system, we are able to extend behavior simply via inheritance
on the visitor.
In our system, since we have separate traversal specifica-
tions, it would make sense to be able to extend traversals in
addition to visitors. A traversal entry for a class would override
the traversal entry for its superclass. The overridden
entry can still be invoked via Java's super mechanism. How-
ever, we have already used that keyword to invoke the entry
for the current object's superclass. By allowing traversal
specifications to subclass each other, we introduce a second
dimension of overriding, thus raising the question of which
overridden behavior we wish to invoke with a super call.
The combination of two dimensions of behavioral specialization
is an issue for further study.
We have a prototype system implemented, and have sketched
out in detail how to combine traversal specifications
with the Demeter/Java programming environment. Demeter/Java
is a visitor-pattern-based programming environment that
has been extensively used in a number of medium-scale programming
projects. It incorporates succinct traversal strategies
that make expressing recursive algorithms difficult. The
two systems complement each other: Demeter/Java simplifies
traversals of large class graphs, while our traversal specifications
allow elegant specification of recursive computations over
smaller subgraphs. However, due to time constraints, the
combination has reached only a very preliminary stage, with
many features missing.
--R
Introduction to Algorithms.
Functional Programming.
Design Patterns: Elements of Reusable Object-Oriented Software
Component enhancement: An adaptive reusability mechanism for groups of collaborating classes.
Adaptive Object-Oriented Software
Preventive program maintenance in Demeter/Java (research demonstration)
A simple algorithm and proof for type inference.
Personal Communication to the Demeter Seminar
--TR
Introduction to algorithms
Design patterns
Preventive program maintenance in Demeter/Java
Adaptive Object-Oriented Software
Component Enhancement
Synthesizing Object-Oriented and Functional Design to Promote Re-Use
--CTR
Jeff Gray, Using software component generators to construct a meta-weaver framework, Proceedings of the 23rd International Conference on Software Engineering, p.789-790, May 12-19, 2001, Toronto, Ontario, Canada
Jeff Gray , Ted Bapty , Sandeep Neema , James Tuck, Handling crosscutting constraints in domain-specific modeling, Communications of the ACM, v.44 n.10, p.87-93, Oct. 2001
Ralf Lämmel , Eelco Visser , Joost Visser, Strategic programming meets adaptive programming, Proceedings of the 2nd international conference on Aspect-oriented software development, p.168-177, March 17-21, 2003, Boston, Massachusetts
Joost Visser, Visitor combination and traversal control, ACM SIGPLAN Notices, v.36 n.11, p.270-282, 11/01/2001
Arie van Deursen , Joost Visser, Source model analysis using the JJTraveler visitor combinator framework, Software: Practice & Experience, v.34 n.14, p.1345-1379, November 2004 | recursive programming;separation of concerns;visitor pattern |
320425 | Age-based garbage collection. | Modern generational garbage collectors look for garbage among the young objects, because they have high mortality; however, these objects include the very youngest objects, which clearly are still live. We introduce new garbage collection algorithms, called age-based, some of which postpone consideration of the youngest objects. Collecting less than the whole heap requires write barrier mechanisms to track pointers into the collected region. We describe here a new, efficient write barrier implementation that works for age-based and traditional generational collectors. To compare several collectors, their configurations, and program behavior, we use an accurate simulator that models all heap objects and the pointers among them, but does not model cache or other memory effects. For object-oriented languages, our results demonstrate that an older-first collector, which collects older objects before the youngest ones, copies on average much less data than generational collectors. Our results also show that an older-first collector does track more pointers, but the combined cost of copying and pointer tracking still favors an older-first over a generational collector in many cases. More importantly, we reopen for consideration the question where in the heap and with which policies copying collectors will achieve their best performance. | INTRODUCTION
Dynamic memory management (management of heap-allocated ob-
jects) using garbage collection has become part of mainstream computing
with the advent of Java, a language that uses and requires
garbage collection.
This work is supported in part by NSF grant IRI-9632284, and by
gifts from Compaq Corp., Sun Microsystems, and Hewlett-Packard.
Kathryn S. McKinley is supported by an NSF CAREER Award CCR-
9624209. Any opinions, findings, and conclusions or recommendations
expressed in this material are those of the author(s) and do not
necessarily reflect the views of the National Science Foundation or
other sponsors.
To appear in OOPSLA'99, Denver, November 1999.
This wider use of garbage collection makes it
more important to ensure that it is fast. Garbage collection has been
investigated for decades in varying contexts of functional and object-oriented
language implementation (e.g., Lisp, ML, Smalltalk). The
consensus, for uniprocessor systems operating within main memory,
is that a class of algorithms known as generational copying collection
performs quite well in most situations. While the breadth of variation
within the class is considerable, the algorithms have this in common:
objects are grouped according to their age (time elapsed since object
allocation), and the younger groups or generations are examined more
often than older ones. In particular, the most recently allocated objects
are collected first. In this paper, we present a new copying collection
algorithm, called Older-First, that maintains the grouping by age, but
chooses to collect older objects (following a particular policy which
we describe in Section 2). Our algorithm achieves lower total cost,
sometimes dramatically, than traditional copying generational collection
for a number of Java and Smalltalk programs. Why does it improve
performance?
Let us consider the costs that copying garbage collection imposes
on the run-time system. First, there is the cost of copying objects
when they survive a collection. Second, to allow the collector to examine
only a portion of the heap at a time, bookkeeping actions must
log changes to pointers (references) that go from one portion to another;
we call this pointer-tracking. Some of the pointer-tracking is
interleaved with program execution whenever the program writes a
pointer (i.e., the write barrier), while some is done at garbage collection
time. Third, the program itself and the garbage collection algo-
rithm(s) have different cache and memory behaviors, which interact
in complex ways. These effects are beyond the scope of this paper
and are left for future work. In this paper, the total cost of collection
refers to the combined cost of the pointer tracking and the copying collection.
Generational copying collection performs better than non-
generational, i.e., full heap, copying collection because it achieves
markedly lower copying costs. On the other hand, it must incur the
cost of pointer tracking, whereas non-generational collection has no
need to track pointers because it always examines the entire heap.
Thus, generational collection incurs a pointer-tracking cost that is offset
by a much reduced copying cost. We have discovered that there is
a trade-off between copying and pointer-tracking costs that can be exploited
beyond generational copying collection. Our Older-First (OF)
algorithm usually incurs much higher pointer-tracking costs than generational
algorithms, but also enjoys much lower copying costs. We
find that most pointer stores and the objects they point to are among
the youngest objects, and by moving the collected region outside these
youngest objects, OF must track more pointers. However, OF lowers
copying costs because it gives objects more time to die, and does not
collect the very youngest objects which clearly have not had time to
die. In the balance, its total cost is usually lower than the total cost
of generational copying collection, in some cases by a factor of 4. In
itself, OF is very promising, but, more importantly, its success reveals
the potential for other flexible collection policies to exploit this trade-off
and further improve garbage collection performance.
In Section 2 we describe our new collection algorithm within a
broader classification of age-based algorithms. We present our benchmark
suite in Section 3, and assess the copying performance of the
family of age-based collectors in Section 4. We then consider implementation
issues, including a new fast write barrier in Section 5. Section
6 evaluates the combined costs of copying and pointer-tracking.
The results call for a reevaluation of the premises and explanations of
observed performance of copying collectors, which is the subject of
Section 7.
Upon a garbage collection, each scheme we consider partitions the
heap into two regions: the collected region C, in which the collector
examines the objects for liveness, and if live, they survive the collec-
tion; and the uncollected remainder region U , in which the collector
assumes the objects to be live and does not examine them. The non-
generational collector is a degenerate case in which the uncollected
region is empty. The collector further partitions the set C into the set
of survivor objects S and the set of garbage objects G, by computing
root pointers into C and the closure of the points-to relation within C.
To make the freed space conveniently available for future allocation,
the collector manipulates the survivors S by copying (or compacting)
them.
The amount of work involved is, to a first approximation, proportional
to the amount of survivor data, which should therefore be minimized.
Ideally we choose C so that S is empty; in the absence of some ora-
cle, we must look for schemes that organize heap objects so that the
partition into C and U is trivial, and then find heuristics that make S
small.
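To make this concrete, the following C sketch (ours, not the implementation evaluated in this paper) shows one collection over a region C: the roots pointing into C are forwarded, and a breadth-first (Cheney-style) scan of the survivors completes the closure of the points-to relation within C. The Object layout and the extern helpers are assumptions for illustration.

    /* Illustrative sketch only: collect region C by forwarding roots into C
     * and scanning survivors breadth-first. The extern helpers are assumed. */
    typedef struct Object {
        int forwarded;                  /* has this object been copied? */
        struct Object *forwarding_ptr;  /* where its copy lives */
        int nfields;
        struct Object *field[];         /* pointer fields only, for brevity */
    } Object;

    extern int     in_collected_region(Object *o);   /* is o in C? */
    extern Object *copy_to_survivor_space(Object *o);
    extern Object *first_survivor(void);             /* scan pointer */
    extern Object *next_survivor(Object *o);

    static Object *forward(Object *o) {
        if (o == NULL || !in_collected_region(o))
            return o;                    /* objects in U are assumed live */
        if (!o->forwarded) {
            o->forwarding_ptr = copy_to_survivor_space(o);  /* o joins S */
            o->forwarded = 1;
        }
        return o->forwarding_ptr;
    }

    void collect_region(Object **roots, int nroots) {
        for (int i = 0; i < nroots; i++)          /* root pointers into C */
            roots[i] = forward(roots[i]);
        for (Object *s = first_survivor(); s != NULL; s = next_survivor(s))
            for (int i = 0; i < s->nfields; i++)  /* closure within C */
                s->field[i] = forward(s->field[i]);
        /* objects of C never reached form G; their space is freed */
    }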
We restrict attention to a class of schemes that keep objects in a
linear order according to their age. Imagine objects in the heap as if
arranged from left to right, with the oldest on the left, and the youngest
on the right, as in Figure 1. The region collected, C, is restricted to be
a contiguous subsequence of this sequence of heap objects, thus the
cost of the initial partition is practically nil. We call these schemes
age-based collection.
Traditional generational collection schemes are, in the main, age-
based: the region collected is some subsequence of youngest (most
recently allocated) objects. Copying collectors may reorder objects
somewhat during copying since they typically follow pointers
breadth-first instead of in age order. In compacting collectors, re-ordering
does not occur.
In this paper, we introduce and categorize alternative collection
schemes according to their choice of objects for collection. In all
these collectors, we fix the size of the collected region rather than
allowing it to vary during program execution, to simplify our analysis.
Previous research shows that dynamically sizing the collected region
can improve performance [29, 36, 34, 1, 5], but this investigation is
beyond the scope of our paper.
A youngest-only (YO) collector always chooses some youngest
(rightmost) subsequence of the sequence of heap objects (Figure 2).
In our implementation, the YO collector fills the entire heap and then
repeatedly collects the youngest portion of the heap including objects
surviving the last collection. The allocation that can occur between collections
is the amount the YO collector frees. This collector might have
good performance if object death is only, or mainly, among the new
objects.
Generational collector schemes are variants of youngest-only col-
lection, differing however in how they trigger collections [5]. In the
basic design [17, p.147], new allocation is into one fixed-size part of
the heap (the nursery), and the remainder is reserved for older objects
(the older generation). Whenever the nursery fills up, it is collected,
and the survivors are promoted to the older generation (Figure 3).
When the older generation fills up, the following collection collects
it together with the nursery. In a two-generation collector, that
collection considers the entire heap.

Figure 1: Viewing the heap as an age-ordered list.
Legend for Figures 2-5: C: collected region; U: region(s) not collected; region of survivors; area freed for new allocation.
Figure 2: Youngest-only (YO) collection.
Figure 3: Generational youngest-only collection.
Note that the generational collector deliberately does not allocate
directly into the space reserved for the older generations, so that,
unlike YO, the region chosen for collection contains exactly the objects
allocated since the last collection (except for full heap collec-
tions). We study two and three generation schemes: 2G (2 Genera-
tions; youngest-only) and 3G. We assume the size of each generation
is strictly greater than 0, and therefore 3G never degenerates into 2G.
(We also examined a scheme in which the older generation is allowed
to grow into the nursery, and vice versa [1], but it performed
similarly to 2G and 3G.)
An oldest-only (OO) collector always chooses an oldest (leftmost)
subsequence of the sequence of heap objects (Figure 4). In our imple-
mentation, the OO collector initially waits for the entire heap to fill
and then repeatedly examines the oldest objects including those surviving
the previous collection. As in the YO collector, only the resulting
free amount is available for allocation. An object is more likely
to be dead the longer we wait, hence the OO collector might have
good performance. Of course, it will suffer if there are any objects
that survive the entire length of the program because it will copy them
repeatedly.
An older-first (OF) collector chooses a middle subsequence of
heap objects, which is immediately to the right of the survivors of the
previous collection (Figure 5). Thus the region of collection sweeps
the heap rightwards, as a window of collection. The resulting free
blocks of memory move to the nursery. Initially, objects fill the entire
heap and the window is positioned at the oldest end of the heap. After
collecting the youngest or right end of the heap, the window is reset
to the left or old end.
The intuition for the potentially good performance of this collector
can be gleaned from the diagram in Figure 6, which shows a series of
eight collections, and indicates how the window of collection moves
across the heap when the collector is performing well. If the window
is in a position that results in small survivor sets (Collections 4-8),
then the window moves by only that small amount from one collection
to the next. The remaining window size is freed and becomes available
for allocation. As the window continues to move slowly, it remains
for a long time in the same region, corresponding to the same age of
objects. A great deal of allocation takes place without many objects'
being copied; almost a window size between successive collections.
How long the window remains in a good position, and how long it
takes to find this "sweet spot" again once it leaves, will determine the
performance of the collector for a particular workload, heap size, and
window size.
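The window-placement choices of the three schemes can be summarized in a few lines of C. This sketch is purely illustrative (the state variables and names are ours); it treats the heap as an age-ordered sequence of word positions with the oldest at 0, as in Figure 1.

    /* Illustrative sketch of collected-region choice in the fixed-window
     * schemes; [start, end) is a window over age-ordered positions. */
    typedef struct { long start, end; } Region;

    static long heap_end;      /* current young (right) edge of the heap */
    static long window_size;   /* fixed size of the collected region C */
    static long of_cursor;     /* OF: survivors of the previous collection
                                  end here, so the next window starts here */

    Region youngest_only(void) {                    /* YO */
        return (Region){ heap_end - window_size, heap_end };
    }

    Region oldest_only(void) {                      /* OO */
        return (Region){ 0, window_size };
    }

    Region older_first(void) {                      /* OF */
        if (of_cursor + window_size > heap_end)
            of_cursor = 0;                          /* reset to the old end */
        return (Region){ of_cursor, of_cursor + window_size };
    }

    /* After an OF collection, the window advances past its survivors, so
     * a small survivor set means the window moves only a little: */
    void of_note_survivors(long survivor_words) {
        of_cursor += survivor_words;
    }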
We refer to the OF, OO, and YO schemes collectively as FC collectors
(Fixed Collection window). The base point of our comparisons is
the non-generational collector (NG), which considers the entire heap
in each collection. Note that it is possible for an FC collector to find
no garbage in the collected region. If that happens, we let the collector
fail for the purposes of this study. (An implementation could increase
the heap size temporarily, or retry collection on another region, perhaps
the whole heap, or increase the window size adaptively.) Because
generational schemes by design occasionally consider the whole heap,
they enjoy an advantage over the new schemes as simulated here.
Table 1 lists our benchmarks, which include Smalltalk and Java
programs, and their basic properties relevant to garbage collection
performance: amount of data allocated in words (each word being 4
bytes), number of objects allocated, maximum live amount (which is
also the minimum required heap size to execute the program), total
number of pointer stores, words of allocation per pointer store,
number of non-null pointer stores, and the percentage of pointer
stores that are non-null.

Figure 4: Oldest-only collection.
Figure 5: Older-first collection.
Figure 6: Older-first window motion example.

Table 1: Benchmark Properties
Benchmark               Words alloc.  Objects alloc.  Max. live  Pointer stores  Alloc./store  Non-null   Non-null %
Bloat-Bloat             37 364 458    3 429 007       202 435    4 927 497       7.58          4 376 798  88.8
Toba                    38 897 724    4 168 057       290 276    3 027 982       12.85         2 944 672  97.2
StandardNonInteractive  204 954       -               -          -               -             -          -
Tree-Replace-Binary     209 600       -               -          -               -             -          -
Tree-Replace-Random     925 236       189 549         13 114     168 513         5.49          140 029    83.1
Richards                4 400 543     652 954         1 498      763 626         5.76          611 767    80.1
We now describe individual benchmarks, providing where possible
details of their structure. Our set of Java programs is as follows:
- JavaBYTEmark. A port of the BYTEmark benchmarks to Java,
from the BYTE Magazine Web-site.
- Bloat-Bloat. The program Bloat, version 0.6 [21], analyzing
and optimizing the class files from its own distribution.
- Toba. The Java-bytecode-to-C translator Toba working on
Pizza [22] class files [23].
Our set of Smalltalk programs is as follows:
- StandardNonInteractive. A subset of the standard sequence of
tests as specified in the Smalltalk-80 image [12], comprising
the tests of basic functionality.
- HeapSim. Program to simulate the behavior of a garbage-collected
heap, not unlike the simplest of the tools used in this
study. It is however instructed to simulate a heap in which object
lifetimes follow a synthetic (exponential) distribution, and
consequently the objects of the simulator itself exhibit highly
synthetic behavior.
- Lambda-Fact5 and Lambda-Fact6. An untyped lambda-calculus
interpreter, evaluating the expressions 5! and 6! in the
standard Church numerals encoding [4, p.140]. Previously used
in Ref. [15]. We used both input sizes to explore the effects of
scale.
- Swim. The SPEC95 benchmark, translated into Smalltalk by
the authors: shallow water model with a square grid.
- Tomcatv. The SPEC95 benchmark, translated into Smalltalk by
the authors: a mesh-generation program.
- Tree-Replace-Binary. A synthetic program that builds a large
binary tree, then repeatedly replaces randomly chosen subtrees
at fixed height with newly built subtrees. (This benchmark was
named Destroy in Ref. [15, 14].) Tree-Replace-Random is a
variant which replaces subtrees at randomly chosen heights.
- Richards. The well-known operating-system event-driven simulation
benchmark. Previously used in Ref. [15].
The idea of Older-First collection sufficiently diverges from established
practice that it is instructive first to determine whether it is
feasible in principle, before going into the details of an implemen-
tation. With the understanding that pointer-tracking costs are likely
to be higher in older-first collection than in generational collection,
we sought a quick estimate of copying cost to discover if the promise
of Figure 6 is delivered on actual programs. We built an object-level
simulator that executes the actions of each of the collectors exactly as
depicted in Figures 2-5. The simulator is much simpler than the actual
implementation: objects and collection windows of arbitrary sizes are
allowed, the age order is perfectly preserved on collection, and pointers
are not tracked. This simulator can produce the statistics of the
amount of data copied over the run of a program, which, divided by
the amount allocated, gives the "mark/cons" ratio, traditionally used
as a first-order measure of garbage collector performance.
We now discuss the copying cost estimate results for two Java
benchmarks, JavaBYTEmark and Bloat-Bloat, then summarize and
make some general observations. Figures 7 and 8 each present two
graphs: Graph (a) compares the best performance of each collection
scheme (OO, YO, OF, 2G, 3G), plotting the mark/cons ratio (the copying
cost that we would like to minimize), relative to NG, against heap
size. Performance depends on the heap size available to the collector,
which is laid along the horizontal axis. For each heap size, we simulated
many configurations of each collection scheme. This graph only
includes the best configuration of each collector. Graph (b) provides
details of different configurations of each collector for one representative
heap size, plotting the relative mark/cons ratio against the size
of the collected region or nursery as fraction of the heap size.
JavaBYTEmark. For this program, the OF scheme copies significantly
less data than all other schemes under all configurations. In
fact, it copies over a factor of 10 fewer objects than the 3G collector.
As we see in Figure 7(b), it attains this performance even while keeping
the window of collection small: 20% of total heap size. In smaller
heaps not shown here, the best window size for OF grows up to 40%
of the heap. The generational collectors in Figure 7(b) only approach
their best configurations when the nursery constitutes over 50% of the
heap. Thus, the OF scheme copies much less using a smaller window
size. Small window sizes are desirable because they contribute
to keeping pause times for collection short, which is especially important
in interactive programs.
The reason for this dramatic reduction in copying cost is exactly
the scenario described in Figure 6. Many objects wait until middle
age to die, and the OF collector is able to find them just as they die,
and to stay in a sweet spot for a long time.

Figure 7: Copying cost estimates, JavaBYTEmark. (a) Mark/cons ratio of the best configuration of each collector, relative to NG, versus heap size in words; (b) representative heap size (238885 words): mark/cons ratio relative to NG versus fraction of the heap collected. Collectors shown: OO, YO, OF, 2G, 3G.
Figure 8: Copying cost estimates, Bloat-Bloat. (a) Best configurations versus heap size; (b) representative heap size (446984 words).

The OF collector does
occasionally sweep through the heap and as a result revisits the oldest
objects repeatedly. When we examine the lifetimes of the objects in
this program [25] we find there are a number of long lived objects.
Thus, the OF collector is repeatedly copying these objects (whereas
generational collectors by design rarely copy these objects); nevertheless
it copies a factor of 10 less data.
As is the trend in most of the benchmarks, OF collection outperforms
OO and YO. OF collection achieves similarly low copying
costs that are also integer factors better than the generational collectors
using a small window size on StandardNonInteractive, HeapSim,
Richards, Lambda-Fact6, and Lambda-Fact5.
Bloat-Bloat. Figure 8(a) illustrates that the best configurations of
OF, 2G, and 3G, all exhibit comparable and low copying cost. Fur-
thermore, Figure 8(b) shows that these three collectors reach, or come
close to, their minimums with a window size around 40% of the entire heap.
The OF collector (as simulated for this study) fails with a window
size below 20%, because long-lived data spans more than the collection
window [25]. These results are representative of the remaining 8
programs. Comparing 2G with 3G collection in Figure 8(a) and (b)
reveals no significant differences in the best configurations, but many
configurations of the 3G collector perform worse, sometimes much
worse, than the 2G collector.
4.1 Comparing 2 and 3 Generations
Several of the programs follow the trend we see in Figure 7(a) for
JavaBYTEmark, in which 3G copies fewer objects than 2G. Jav-
aBYTEmark is the program in our suite in which the 3G collector
enjoys the largest advantage over the 2G collector. The more detailed
presentation in Figure 7(b) reveals however that there are many configurations
of the 3G collector that the 2G collector outperforms. This
trend is true for the other programs as well, and demonstrates the difficulty
of configuring generations well. For the remaining 9 programs,
Toba, Bloat-Bloat, Lambda-Fact5, Lambda-Fact6, HeapSim, Swim,
Tomcatv, Tree-Replace-Binary, and Tree-Replace-Random, the 2G
collector copies the same amount or less than a 3G collector.
4.2 Comparing FC Collectors.
As demonstrated by JavaBYTEmark and Bloat-Bloat, the OF collector
usually copies significantly less data than the OO and YO collec-
tors. There are however a few programs for which the OO collector
performs the best: Tree-Replace-Random and Tree-Replace-Binary.
In these programs, there is very little long-lived data [25]. Random
replacement of randomly chosen subtrees, or of the interior nodes
connected to the leaves of the binary tree, does indeed imply that the
longer the collector waits, the more likely an object is to be garbage.
programs are probably not representative of behaviors in users'
programs, and most programs do have some very long-lived data [10].
4.3 Conclusion.
The copying cost estimates show great promise for the Older-First
algorithm on a set of benchmarks. We therefore consider the issues
involved in an actual implementation, and then proceed to the evaluation
of a prototype. To simplify the investigation and the presenta-
tion, we will focus on the two-generation collector 2G (since we have
found that it is usually comparable to the three-generation one) and
the Older-First algorithm OF.
While OF collection reduces copying costs, it may increase write barrier
costs. This potential increase prompted us to consider carefully
which pointer stores need to be remembered in our prototype imple-
mentation. Generational collectors remember pointers from older to
younger generations, but not within generations. Thus, stores into the
youngest generation, including objects just allocated (in the nursery),
never need to be remembered. The corresponding rule for OF collection
is based on the following observation: when a store creates
a reference p → q, then we need to remember it only if q might be
collected before p. Figure 9 shows diagrammatically which pointers
an OF collector must remember, according to their direction between
different regions of the heap. For example, the pointer store that creates
the pointer p → q in the figure need not be remembered, because object p
will necessarily fall into the collected region earlier than q will.
Figure 9: Directional filtering of pointer stores: crossed-out pointers need not be remembered.
Figure 10: Directional filtering with an address-ordered heap.
At first glance, it would appear complex and expensive to do the
filtering suggested by Figure 9, although no more so than in flexible
generational collectors [15]. However, if we reorder the regions of the
heap physically as shown in Figure 10, then the test can be simpler
still: we need only test if the store creates a pointer in a particular direction
and possibly crossing a region boundary. A large zone of the
virtual address space is set aside for allocation from higher addresses
to lower. The collection region also moves from higher addresses to
lower, but lags behind the allocation; the survivors are evacuated into
the next similarly sized zone at lower addresses. If the collection region
catches up with allocation (equivalent to reaching the right end
in the logical layout of Figure 9), the former allocation zone is re-
leased, the former copying zone becomes the allocation zone, and a
new copying zone is acquired. The organization of Figure 10 is especially
attractive with a very large address space and with some co-operation
from the operating system, to acquire and release address
space as the heap progresses from higher to lower addresses.
Our implementation is based on allocating fixed-size blocks to the
various heap regions, with the collector constrained to collect an integral
number of blocks. This structure, with a block table, simply and
quickly maps from addresses to remembered sets.
Since the block size is a power of two, blocks are aligned by block
size, and the collection window moves from higher to lower addresses,
we essentially test if p < q:
    if (p < (q & -mask))        /* is q's block strictly above p? */
        remember p in q's remset;
Adjusting one of the pointers using the mask eliminates stores
within the same block. This test is important, since the vast majority
of stores are to nearby objects, and thus tend not to cross block boundaries
[25]. The directional test (<) also reduces the number of pointers
remembered.
This write barrier, then, filters stores inline so that out-of-line code
to remember a pointer is only executed for those cross-block pointers
where the source block of the pointer may be collected after its target
block. The test above also filters out stores of null pointers. In
essence, it is treating the null pointer value of 0 as referring to an object
that will never be collected, without the need for an additional
explicit test.
Assuming that p and q are in registers and that the mask fits in the
immediate field of an instruction, the above sequence requires only
three instructions: mask, compare, and conditional branch. On the
Alpha processor we indeed obtain such a sequence. The SPARC requires
an additional instruction to construct the mask, since the immediate
fields are too small for reasonable block sizes. One can dedicate
a register to hold the mask, and thereby reduce the sequence to three
instructions.
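Spelled out in C, the fast path might look as follows. This is our rendering of the sequence above, with mask equal to the block size (a power of two, so its negation clears the low bits), and remember_slow standing for the out-of-line slow path; it is a sketch, not code from the actual system.

    #include <stdint.h>

    #define BLOCK_BITS 16                        /* example: 2^16-byte blocks */
    #define BLOCK_SIZE ((uintptr_t)1 << BLOCK_BITS)

    extern void remember_slow(void *p, void *q); /* out-of-line slow path */

    /* Invoked when the program stores a pointer to q into object p. */
    static inline void write_barrier(void *p, void *q)
    {
        /* -BLOCK_SIZE has the low BLOCK_BITS bits clear, so the AND
         * rounds q down to its block base. One unsigned comparison then
         * filters same-block stores, stores in the safe direction, and
         * null q (0 rounds to 0, and no object lies below address 0). */
        if ((uintptr_t)p < ((uintptr_t)q & -BLOCK_SIZE))
            remember_slow(p, q);
    }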
The slow path to remember a pointer at the write barrier consists
of the following: determine the target object's block (shift the address
right), index a block table (the base of which is in a register), load a
pointer into the block's remembered set, decrement the remembered
set pointer and check for underflow (explained in a moment), save the
pointer to be remembered, and store the decremented remembered set
pointer back into the block table. We organize each block's (genera-
tion's, in a generational collector) remembered set as a linked list of
chunks, where each chunk holds 15 remembered pointers in sequential
memory addresses. We allocate these chunks on aligned memory
boundaries, so the underflow test consists of checking if some low bits
of the remembered set pointer are all 0.
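Continuing the same sketch, the slow path and the chunked remembered sets just described might be rendered as follows; block_table, new_chunk, and the per-block initialization are assumed infrastructure, not the actual implementation.

    #define CHUNK_SLOTS 15

    typedef struct Chunk {
        void *slot[CHUNK_SLOTS];    /* filled from slot[14] down to slot[0] */
        struct Chunk *next;         /* 16th word: link to the previous chunk */
    } Chunk;                        /* 16 words, allocated 16-word aligned */

    typedef struct { Chunk *head; void **cursor; } RemSet;

    extern RemSet block_table[];    /* one entry per block; each is assumed
                                       to start with one empty chunk and
                                       cursor = &head->slot[CHUNK_SLOTS] */
    extern Chunk *new_chunk(void);  /* aligned chunk allocator */

    void remember_slow(void *p, void *q)
    {
        RemSet *rs = &block_table[(uintptr_t)q >> BLOCK_BITS];
        *(--rs->cursor) = p;        /* decrement, then save the source pointer;
                                       the next field is never overwritten
                                       because we decrement before storing */
        /* Chunks are aligned, so the cursor's low bits are all zero exactly
         * when slot[0] has just been filled: chain a fresh chunk. */
        if (((uintptr_t)rs->cursor & (sizeof(Chunk) - 1)) == 0) {
            Chunk *c = new_chunk();
            c->next = rs->head;
            rs->head = c;
            rs->cursor = &c->slot[CHUNK_SLOTS];
        }
    }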
Garbage collection requires a space overhead for its auxiliary data
structures for pointer remembering; since our evaluation of the time
overhead is with respect to a given heap size, a fair comparison of
different collectors requires the space allowed each collector for ordinary
data to be diminished by the amount needed for auxiliary data
(which is difficult to do a priori). In our study, OF collectors have
a greater space overhead than 2G because their pointer filtering is less
efficient. However, we measured the space overhead of OF on our
suite of benchmarks to be only 1% of the heap size; therefore the consequent
time overheads are negligible.
6 EVALUATING TOTAL COLLECTION COSTS
We evaluate our proposed collection algorithm and write barrier on
our benchmark suite using a combination of simulation and prototyp-
ing. We obtained heap traces (described in detail below) from program
runs in a Smalltalk and a Java virtual machine. These traces are independent
of the storage management scheme of the system from which
they were collected. For each collection algorithm we study, we process
the traces using a driver routine, which performs relevant actions
(such as object allocation and mutation) on objects in a heap. An
actual implementation of the particular collection algorithm manages
the heap. From this implementation, we obtain exact counts of various
relevant quantities, such as the number of objects copied, number
of bytes copied, and write barrier actions, which we use to estimate
execution times.
6.1 Obtaining Counts and Volumes
We now describe in more detail how we obtained the counts and volumes
we report in our results.
Traces. Our traces indicate each object allocation (with the size
of the object), each update of a pointer field of a heap object, and
each object "death" (an object dies when it ceases to be reachable).
Object death is precise: in the tracing system we perform a complete
garbage collection immediately before each object allocation,
and note in the trace the objects that have died since the previous allo-
cation. While this tracing technique is time-consuming, it does mean
that when we present the traces to any actual collection algorithm, we
will observe exactly the collection behavior we would have obtained
from the corresponding program (but without running the program).
Driver. The driver routine is straightforward in concept: it simply
reads and obeys each trace record, by taking appropriate action on the
prototype heap implementation. A key difference between the driver
and a live program is that, since our traces do not include manipulations
of local and global variables, the driver keeps a table (on the
side) of all live objects. When the driver processes an object death
record, it deletes the corresponding object from the table of live ob-
jects. From the point of view of the collector, the driver thus differs
from a live program only in that more objects are referred to directly
rather than reached only via other objects.
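In outline, the driver loop is as follows (our paraphrase of the description above; the trace record format and the heap and table entry points are assumptions):

    /* Illustrative trace-driver loop. TraceRecord and the heap/table
     * entry points stand in for the real infrastructure. */
    typedef enum { ALLOC, STORE, DEATH } Kind;
    typedef struct {
        Kind kind;
        long id;          /* object named by the record */
        long size;        /* ALLOC: object size in words */
        long field;       /* STORE: which pointer field of object id */
        long target;      /* STORE: id of the object being stored */
    } TraceRecord;

    extern int   next_record(TraceRecord *r);       /* 0 at end of trace */
    extern void *heap_alloc(long size);             /* may trigger collection */
    extern void  heap_store(void *src, long field, void *dst); /* runs barrier */
    extern void *table_lookup(long id);             /* side table of live objects */
    extern void  table_insert(long id, void *obj);
    extern void  table_remove(long id);             /* drop the driver's reference */

    void drive(void)
    {
        TraceRecord r;
        while (next_record(&r)) {
            switch (r.kind) {
            case ALLOC: table_insert(r.id, heap_alloc(r.size));  break;
            case STORE: heap_store(table_lookup(r.id), r.field,
                                   table_lookup(r.target));      break;
            case DEATH: table_remove(r.id);                      break;
            }
        }
    }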
Prototype heap implementations and write barriers. All the
heap implementations share some common infrastructure. Each heap
consists of a collection of blocks, which are aligned, 2 k -byte portions
of memory. We varied the block size in some experiments. Each heap
also has remembered set data structures and write barriers appropriate
to that heap. For example, the generational heap uses a generational
comparison, whereas the OF heap uses the same-block and directional
filtering. We note that these implementations are highly instrumented,
so that we can tell how many pointer stores go down each filtering
path of each write barrier. Likewise, the collector cores are highly
instrumented to obtain accurate counts of copying actions. We do not
obtain wall-clock timings from these prototype heap implementations.
6.2 Estimating Execution Times
Pending a complete implementation, we carefully implemented the
write barriers and other actions and timed them. All code fragments
have the same advantages, i.e., they execute in tight loops with important
quantities in registers, so we argue that the ratio of their timings
gives a reasonable order-of-magnitude estimate of the ratio we would
expect in an actual implementation, even though the absolute values
of the timings are optimistic.
We used a 292 MHz Alpha 21164. We took a cycle count measurement
by running a piece of code, with and without the fragment
we wished to measure, for many iterations of a loop, then taking the
difference in times and dividing by the clock period.
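The measurement procedure amounts to the following differential timing sketch; read_time is a placeholder for a high-resolution timer, and the subtraction cancels the common loop overhead.

    extern double read_time(void);           /* seconds, high resolution */

    #define ITERS 10000000L
    #define CLOCK_PERIOD (1.0 / 292e6)       /* 292 MHz Alpha 21164 */

    double cycles_per_fragment(void (*with_fragment)(long),
                               void (*without_fragment)(long))
    {
        double t0 = read_time(); without_fragment(ITERS);
        double t1 = read_time(); with_fragment(ITERS);
        double t2 = read_time();
        /* extra time due to the fragment, per iteration, in cycles */
        return ((t2 - t1) - (t1 - t0)) / ITERS / CLOCK_PERIOD;
    }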
Write barrier. Depending on the details of the loop in which we
embedded the barrier, the fast path took 1, 2, or 3 cycles, which we
expected since the original sequence is 3 instructions and the Alpha
has an issue width of 4 (i.e., the alignment matters). We use 2 cycles
in our estimates. Remembering a pointer on the slow path of the write
barrier takes an average of 11 cycles (including the original test, and
the time needed for chunk management on overflow). Finally, to fetch
a remembered set entry, examine the target object, and possibly start
to copy the object takes 13 cycles on average. Thus the total cost to
create and process a remembered set entry, exclusive of copying its
target object, is 24 cycles.
Copying timing. Object copying involves more than simply
copying some bytes from one place to another. One must also: decode
the object header, determine which fields of the object contain
pointers, and handle each one of those pointers, thus accomplishing
the transitive closure of the points-to relation in a breadth-first manner
[9]. Since our prototype heaps were slightly simplified from actual
language implementations (i.e., we did not deal with all special cases
that arise in Java, such as finalization and locks), any comparisons
are likely to underestimate copying cost, and thus underestimate the
benefits of OF.
We modelled the total copying and collection processing costs using
this equation:

    cost = a_obj * n_obj + a_w * n_w + a_skp * n_skp + a_dup * n_dup

Here the a_x are the costs per occurrence of each case and the n_x
are the number of times that case occurs. The subscript obj concerns
the number of objects processed, w the number of words copied, skp
the number of pointer fields skipped because they are null or do not
point into the collected region, and dup the number of pointers into
the collected region but to objects already copied. Note that when we
encounter a pointer to an object in the collected region but not yet
copied, we charge our cost of discovery to the copying of that object.
We measured each of a_obj, a_w, a_skp, and a_dup in cycles, for
operation with all data structures in primary cache. As an aside, we
note that these costs indicate that copying the words is not a large
component of the cost of processing pointer-rich objects.
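As code, the model is a straightforward weighted sum over the instrumented counts; a_obj, a_w, a_skp, and a_dup are the measured per-event cycle costs.

    typedef struct { long n_obj, n_w, n_skp, n_dup; } Counts;

    double collection_cost(Counts n, double a_obj, double a_w,
                           double a_skp, double a_dup)
    {
        return a_obj * n.n_obj + a_w * n.n_w
             + a_skp * n.n_skp + a_dup * n.n_dup;
    }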
Given our instrumentation to gather counts (the n_x values as well as the
number of times the different write barrier actions occur) and our careful
estimates of the times for the various collector and write barrier
operations, we can project cycle costs for each collection algorithm.
As previously mentioned, we would not claim that the difference in
predicted cycle counts would exactly match that in practice, but that
ratios of predicted cycle costs would be reliable to an order of magni-
tude. Put another way, if we predict a ratio of collection costs of 2:1
or more, then it would be surprising if an implementation showed an
inversion of costs of the schemes.
6.3 Results
We applied the block-based evaluator to our benchmark suite. We now
examine the resulting evaluation of the older-first and generational
collectors with the detailed cost model just described, which takes into
account both copying and pointer-tracking costs.
Similar to the mark/cons ratio plots we examined in Section 4, the
plots of total cost in Figures 11-22 show the lowest total cost that each
collector can achieve, among all examined configurations for a given
heap size. The minimum heap size equals the maximum amount of
live data, and evaluated heap sizes range from 2 to 6 times that min-
imum. While pointer costs work in favor of the 2G and against the
OF collector, and diminish the advantages that OF enjoyed in the estimate
of copying costs in Section 4, nevertheless they do not succeed in
changing the qualitative relationship that we observed previously. On
one subset of benchmarks (JavaBYTEmark, StandardNonInteractive,
HeapSim, Lambda-Fact5, Lambda-Fact6, Richards) the OF collector
has a clear advantage, except with very small heap sizes. On the remaining
benchmarks, the performance of the two collectors is similar.
Figure 11: Total collection cost, JavaBYTEmark.
Figure 12: Total collection cost, Bloat-Bloat.
Figure 13: Total collection cost, Toba.
Figure 14: Total collection cost, StandardNonInteractive.
Figure 15: Total collection cost, HeapSim.
Figure 16: Total collection cost, Lambda-Fact5.
Figure 17: Total collection cost, Lambda-Fact6.
Figure 18: Total collection cost, Swim.
Figure 19: Total collection cost, Tomcatv.
Figure 20: Total collection cost, Tree-Replace-Binary.
Figure 21: Total collection cost, Tree-Replace-Random.
Figure 22: Total collection cost, Richards.
(Each of Figures 11-22 plots estimated total collection cost in cycles against heap size in words, for the OF and 2G collectors.)
7 DISCUSSION
Comparing collectors. A straightforward comparison between OF
and 2G collectors shows that OF achieves lower total costs in many
cases. The main contributing factor is the reduction of copying cost;
the supporting factor is the containment of the increase of pointer-
tracking cost.
That copying costs can be markedly lower than with generational
collection, in a collector that scavenges areas other than the youngest,
is perplexing in light of the widely recognized good performance of
generational collectors. Nevertheless, it is entirely in accord with the
intuition that the very youngest objects are live, and to collect them
is wasteful. In generational collection there is a tension between the
need to increase the size of the nursery so as to reduce such wasteful
copying of young objects, and the need to increase the size of older
generations so that they are not collected frequently, a tension that
cannot be resolved in a heap of finite size. In contrast, Older-First
collection is able to focus on an age range where wasteful copying
is minimized, which results in good performance on those programs
where such a range prominently exists. Whereas our diagram in Figure
6 shows how this desirable behavior may arise, it is tempting to
consider how a designer could encourage it. For example, further improvements
may be achieved by dynamically (adaptively) choosing
the size of the collection window, and, more ambitiously, looking at
window motion policies more sophisticated than the one we have described.
Pointer tracking. While an ever-increasing latitude in collection
policy may further reduce copying costs below those of generational
collection and the simple Older-First scheme, it will also be necessary
to keep the pointer-tracking costs within reason. The pointer-tracking
costs in OF, albeit high with respect to generational collection, are not
excessive, because its window motion policy allows efficient pointer
filtering. Any block-based collector can apply a filter to ignore pointer
stores that do not cross block boundaries; we found that filter to eliminate
about 60% of stores for reasonable configurations (note that
blocks cannot be arbitrarily large lest the collector degenerate into a
non-generational one). Directional filtering (Figure 9) ignores about
95% of stores: not as many as generational filtering, which ignores
about 99%, but enough that the cost for the remaining, remembered,
stores does not substantially offset the copying cost reduction.
As we developed our directional filtering scheme, we collected
statistics of pointer stores, according to the position, in an age-ordered
heap, of the pointer source and target (i.e., the object containing the
reference, and the referent object), which shed new light on some
long-held beliefs about the pointer structure of heaps. It has been
widely assumed that pointers tend to point from younger objects to
older ones. While this belief is surely justified for functional pro-
grams, it is not generally true of the object-oriented programs we ex-
amined. Both younger-to-older and older-to-younger directions are
well represented, neither dominant, in most of our benchmarks. The
supposed predominance of younger-to-older pointers is often cited as
cause and justification of the efficacy of generational pointer filter-
ing. A more faithful explanation arises from our observations: most
pointer stores are to objects that are very young, and they install
pointers to target objects that are also very young (whether relatively
younger or older than the source), and a generational filter ignores
these stores because they are between objects of the same generation.
Figure 23 provides an example: (a) in Bloat-Bloat, older-to-younger
pointers (negative age distances) account for 40% of the stores; how-
ever, the histogram of source positions (b) as well as that of target
positions (c) show that most stores establish pointers between very
young objects.
Caching and memory effects. Since copying collectors only
touch the live data, and leave untouched newly dead objects, collectors
that copy less should also have good locality. However, OF visits
the entire heap more regularly as compared to generational collectors,
which may decrease its locality in the cache and increase its paging
activity. Clearly, we can only study these effects in the context of a
complete implementation, and we will do so in future work.

Figure 23: Pointer store heap position, Bloat-Bloat: (a) distribution of pointer age distances (cumulative probability versus log2(abs(distance)) * sgn(distance)); (b) distribution of pointer source ages; (c) distribution of pointer target ages.
The overwhelming consensus in the studies on generational garbage
collection has been that a younger-first discipline should be used; i.e.,
that when the collector decides to examine one generation, it must at
the same time examine all younger generations. The scheme that we
introduce may be understood (if we ignore policy details) as similar
to requiring an older generation to be collected apart from younger
ones. This possibility is indeed mentioned, but dismissed both in Wil-
son's survey of garbage collection [32, p. 36] and in Jones and Lins'
monograph [17, p.151], the two most accessible sources on the state
of the art in uniprocessor garbage collection.
Generational garbage collection employs fixed boundaries between
generations, in order to minimize the pointer-tracking effort
needed for each such boundary. Barrett and Zorn explored the possibility
of using flexible generation boundaries (remaining however
within the youngest-first discipline), and found that the increase in
pointer-tracking effort need not be excessive [5]. Our OF scheme uses
flexible collection region boundaries, but we combine it with efficient
mechanisms to keep pointer-tracking costs in check, even without the
youngest-first discipline.
Clinger and Hansen proposed a collector for Scheme that does
not base collection decisions on object age, but rather on the time
elapsed since last collection [11], and focuses on objects for which
that time is longest. (There have been historical precursors to this
idea [2, 18, 6].) Although this algorithm is not age-based, it prompted
us to investigate similarly flexible age-based ones; in the context of
object-oriented languages that we examined, we found the latter to be
superior.
More generally, schemes have been suggested that divide the heap
into regions, not necessarily age-based, that can be collected independently
and/or incrementally. Bishop proposed such segregation
in accordance with the usage of objects [8], while Hudson and Moss's
mature object space algorithm (for managing very-long-lived data) introduced
policies that approximate the age-order criterion [16].
In garbage collection, there is an inherent trade-off between space
and time overheads, and there is a trade-off between reducing the total
time overhead and reducing the time of a single collection (for
incremental operation). Different authors have applied different measures
in their system evaluation. Our focus is on time overhead of
collection within given space constraints. Therefore, without making
specific comparisons, which are difficult when evaluation metrics
as well as underlying languages are widely different, we recognize
that our study draws on previous experience with generational
garbage collection implementations [19, 27, 20, 24, 28, 35], their policies
[29, 30, 31, 34, 1, 13], their write barrier mechanisms [33, 15, 14],
and their evaluation with respect to object allocation and lifetime behavior
[3, 26, 11].
Achieving performance improvements with generational collection
critically depends on setting or adapting the configuration parameters
right; incorrectly chosen generation sizes can cause performance
to degrade severely. We have confirmed this
in our observations of multi-generational collectors on our benchmark
traces. Choosing a good regime of generations is not an easy
task, and it is not yet fully understood despite numerous studies
[29, 36, 34, 1, 5]. However, we can also say that it is a matter of
tuning the performance within the class of youngest-only collection
schemes. Our goal in this study has not been to examine how to tune
a particular scheme, but instead to compare the schemes. Whether
optimal configurations can be chosen a priori, or how a system might
adaptively arrive at them are questions for separate investigation.
9 CONCLUSIONS
Generational collection achieves good performance by considering
only a portion of the heap at each collection. It achieves this good
performance even while imposing additional costs on the mutator,
namely a write barrier to track pointers from older to younger gen-
erations. We found that we can reduce copying costs further, in many
cases dramatically, by not including the youngest objects in each col-
lection, and we call this more general scheme age-based collection
since it still determines which objects to collect based on age. We considered
in detail a particular age-based algorithm that we term older-
first (OF) and found that it never needed to copy substantially more
data than generational collection, and copied up to ten times less for
some programs. OF does require more write barrier work than generational
collection, perhaps ten times more, but the savings in copying
can outweigh the extra pointer tracking costs.
We obtained these results with exact heap contents simulation,
prototype collector implementation, and careful timing of crucial code
fragments. Given the factor by which OF outperforms generational
collection, often a factor of 2 or more, it should also perform well
in actual implementation. Integration with a Java virtual machine is
in progress.
While improved performance is one measure of the significance
of this work, we also feel that it contributes substantially to our understanding
of memory usage and garbage collector behavior. Put
another way, garbage collection has a long tradition of study, yet we
have shown that the widely accepted state of the art, generational col-
lection, leaves considerable room for improvement.
We also question some of the widely held beliefs about generational
collection, offering new intuition. While we clearly agree with
the tenet that one should wait for objects to die before collecting them,
as it has been recognized in the considerable body of work concerning
the avoidance of early "tenuring" of objects, we show that it is practical
to avoid copying the very youngest objects and that doing so saves
much work, even though it imposes a heavier burden on the running
program. In the past the write barrier cost was thought too high to permit
exploring algorithms like OF. Now we have results encouraging
consideration of a wide range of new techniques.
Future work should include considering other window motion al-
gorithms, dynamically changing the window size, using multiple windows
(e.g., one for younger objects and one for mature objects as in
mature object space collection), and more experimentation and mea-
surement, for more programs, platforms, and languages.
Acknowledgements. We acknowledge with gratitude the assistance
of David Detlefs and the Java Topics group at Sun Microsystems
Laboratories, Chelmsford, Massachusetts, in collecting and providing
traces for this work. We thank Margaret Martonosi and the anonymous
referees for valuable comments on drafts of this paper.
--R
Simple generational garbage collection and fast allocation.
List processing in real-time on a serial computer
'Infant Mortality' and generational garbage collection
The Lambda Calculus: Its Syntax and Se- mantics
Garbage collection using a dynamic threatening boundary.
MALI: A memory with a real-time garbage collector for implementing logic programming languages
International Workshop on Memory Management (St. Malo, France)
Computer Systems with a Very Large Address Space and Garbage Collection.
A nonrecursive list compacting algorithm.
Generational stack collection and profile-driven pretenuring
Generational garbage collection and the radioactive decay model.
Key Objects in Garbage Collection.
Remembered sets can also play cards.
A comparative performance evaluation of write barrier implementa- tions
Incremental collection of mature objects.
Garbage Collection: Algorithms for Automatic Dynamic Memory Management.
Incremental incrementally compacting garbage collection.
Garbage collection in a large Lisp system.
Pizza into Java: translating theory into practice.
Java for applications
A lifetime-based garbage collector for LISP systems on general-purpose computers
Properties of Age-Based Automatic Memory Reclamation Algorithms
Characterisation of object behaviour in Standard ML of New Jersey.
Generation scavenging: A non-disruptive high performance storage reclamation algorithm
The Design and Evaluation of a High Performance Smalltalk System.
Tenuring policies for generation-based storage reclamation
An adaptive tenuring policy for generation scavengers.
A simple bucket-brigade advancement mechanism for generation-based garbage collection
Uniprocessor garbage collection techniques.
"card-marking"
Design of the opportunistic garbage collector.
Barrier methods for garbage collection.
Comparative Performance Evaluation of Garbage Collection Algorithms.
--TR
Smalltalk-80: the language and its implementation
Incremental incrementally compacting garbage collection
The design and evaluation of a high performance Smalltalk system
Tenuring policies for generation-based storage reclamation
A simple bucket-brigade advancement mechanism for generation-based garbage collection
A "card-marking" scheme for controlling intergenerational references in generation-based garbage collection on stock hardware
Simple generational garbage collection and fast allocation
Design of the opportunistic garbage collector
An adaptive tenuring policy for generation scavengers
A comparative performance evaluation of write barrier implementation
Infant mortality and generational garbage collection
Key objects in garbage collection
Characterization of object behaviour in Standard ML of New Jersey
Garbage collection using a dynamic threatening boundary
Garbage collection
Generational garbage collection and the radioactive decay model
Pizza into Java
Generational stack collection and profile-driven pretenuring
A real-time garbage collector based on the lifetimes of objects
List processing in real time on a serial computer
A nonrecursive list compacting algorithm
Memory Management
Incremental Collection of Mature Objects
Uniprocessor Garbage Collection Techniques
Garbage collection in a large LISP system
Generation Scavenging
Comparative Performance Evaluation of
Properties of age-based automatic memory reclamation algorithms
object behavior;write barrier;garbage collection;generational and copy collection
320426 | Software-Directed Register Deallocation for Simultaneous Multithreaded Processors. | Abstract: This paper proposes and evaluates software techniques that increase register file utilization for simultaneous multithreading (SMT) processors. SMT processors require large register files to hold multiple thread contexts that can issue instructions out of order every cycle. By supporting better interthread sharing and management of physical registers, an SMT processor can reduce the number of registers required and can improve performance for a given register file size. Our techniques specifically target register deallocation. While out-of-order processors with register renaming are effective at knowing when a new physical register must be allocated, they have limited knowledge of when physical registers can be deallocated. We propose architectural extensions that permit the compiler and operating system to (1) free physical registers immediately upon their last use, and (2) free physical registers allocated to idle thread contexts. Our results, based on detailed instruction-level simulations of an SMT processor, show that these techniques can increase performance significantly for register-intensive, multithreaded programs. | Introduction
Simultaneous multithreading (SMT) is a high-performance architectural technique that
substantially improves processor performance by executing multiple instructions from multiple
threads every cycle. By dynamically sharing processor resources among threads, SMT increases
functional unit utilization, thereby boosting both instruction throughput for multiprogrammed
workloads and application speedup for multithreaded programs [5].
Previous research has looked at the performance potential of SMT [24], as well as several
portions of its design, including instruction fetch mechanisms and cache organization [23][13].
This paper focuses on another specific design area that impacts SMT's cost-effectiveness: the
organization and utilization of its register file. SMT raises a difficult tradeoff for register file
design: while a large register file is required to service the architectural and renaming register
needs of the multiple thread contexts, smaller register files provide faster access times.
Therefore, an SMT processor needs to use its register resources efficiently in order to optimize
both die area and performance.
In this paper, we propose and evaluate software techniques that increase register utilization,
permitting a smaller, faster register file, while still satisfying the processor's need to support
multiple threads. Our techniques involve coordination between the operating system, the
compiler, and the low-level register renaming hardware to provide more effective register use
for both single-threaded and multithreaded programs. The result is improved performance for a
given number of hardware contexts and the ability to handle more contexts with a given number
of registers. For example, our experiments indicate that an 8-context SMT processor with 264
physical registers, managed with the techniques we present, can attain performance comparable
to a processor with 352 physical registers.
Our techniques focus on supporting the effective sharing of registers in an SMT processor,
using register renaming to permit multiple threads to share a single global register file. In this
way, one thread with high register pressure can benefit when other threads have low register
demands. Unfortunately, existing register renaming techniques cannot fully exploit the potential
of a shared register file. In particular, while existing hardware is effective at allocating physical
registers, it has only limited ability to identify register deallocation points; therefore hardware
must free registers conservatively, possibly wasting registers that could be better utilized.
We propose software support to expedite the deallocation of two types of dead registers: (1)
registers allocated to idle hardware contexts, and (2) registers in active contexts whose last use
has already retired. In the first case, when a thread terminates execution on a multithreaded
architecture, its hardware context becomes idle if no threads are waiting to run. While the
registers allocated to the terminated thread are dead, they are not freed in practice, because
hardware register deallocation only occurs when registers in a new, active thread are mapped.
This causes a potentially-shared SMT register file to behave like a partitioned collection of per-thread
registers. Our experiments show that by notifying the hardware of OS scheduling
decisions, performance with a register file of size 264 is boosted by more than 3 times when 2 or
4 threads are running, so that it is comparable to a processor with 352 registers.
To address the second type of dead registers, those in active threads, we investigate five
mechanisms that allow the compiler to communicate last-use information to the processor, so
that the renaming hardware can deallocate registers more aggressively. Without this
information, the hardware must conservatively deallocate registers only after they are redefined.
Simulation results indicate that these mechanisms can reduce register deallocation
inefficiencies; in particular, on small register files, the best of the schemes attains speedups of
up to 2.5 for some applications, and 1.6 on average. All the register deallocation schemes could
benefit any out-of-order processor, not just SMT.
The remainder of this paper is organized as follows. Section 2 briefly summarizes the SMT
architecture and register renaming inefficiencies. Our experimental methodology is described in
Section 3. Section 4 describes the OS and compiler support that we use to improve register
usage. We discuss related work in Section 5 and offer concluding remarks in Section 6.
2 Simultaneous Multithreading
Our SMT processor model is similar to that used in previous studies: an eight-wide, out-of-
order processor with hardware contexts for eight threads. On every cycle four instructions are
fetched from each of two threads. The fetch unit favors high throughput threads, selecting the
two threads that have the fewest instructions waiting to be executed. After fetching, instructions
are decoded, their registers are renamed, and they are inserted into either the integer or floating
point instruction queues. When their operands become available, instructions (from any thread)
are issued to the functional units for execution. Finally, instructions are retired in per-thread
order.
Most components of an SMT processor are an integral part of any dynamically-scheduled,
wide-issue superscalar. Instruction scheduling is an important case in point: instructions are
issued after their operands have been calculated or loaded from memory, without regard to
thread; the register renaming hardware eliminates inter-thread register name conflicts by
mapping thread-specific architectural registers onto the processor's physical registers.
The major additions to a conventional superscalar are the instruction fetch unit mentioned
above and several per-thread mechanisms, such as program counters, return stacks, retirement
and trap logic, and identifiers in the TLB and branch target buffer. The register file contains
register state for all processor-resident threads and consequently requires two additional pipeline
stages for accessing it (one each for reading and writing). (See [23] for more details.)
2.1 Register Renaming and the Register Deallocation Problem
Register renaming eliminates false (output and anti-) dependences that are introduced when
the compiler's register allocator assigns an arbitrary number of pseudo-registers to the limited
number of architectural registers in the instruction set architecture. Dependences are broken by
dynamically aliasing each defined architectural register to a different physical register, enabling
formerly dependent instructions to be executed in parallel.
SMT assumes a register mapping scheme similar to that used in the DEC 21264 [8] and
MIPS R10000 [27]. The register renaming hardware is responsible for three primary functions:
(1) physical register allocation, (2) register operand renaming, and (3) register deallocation.
Physical register allocation occurs on demand. When an instruction defines an architectural
register, a mapping is created from the architectural register to an available physical register and
is entered into the mapping table. If no registers are available, instruction fetching stalls. To
rename a register operand, the renaming hardware locates its architectural-to-physical mapping
in the mapping table and aliases it to its physical number. Register deallocation works in
conjunction with instruction retirement. An active list keeps track of all uncommitted
instructions in per-thread program order. As instructions retire, the physical registers that held
the previous mappings of their destination architectural registers are deallocated and become
available for reallocation.
Renaming hardware handles physical register allocation and renaming rather effectively, but
fails to manage deallocation efficiently. A register is dead and could be deallocated once its last
use commits. The hardware, however, cannot identify the last uses of registers, because it has no
knowledge of register lifetimes. Consequently, hardware can only safely deallocate a physical
register when it commits another instruction that redefines its associated architectural register, as
shown in Figure 1.
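To make the conservative policy concrete, the following minimal sketch (hypothetical Python, not taken from the paper's simulator) models a rename map and free list in which a physical register is recycled only when a redefinition of its architectural register retires:

class Renamer:
    def __init__(self, num_phys):
        self.free_list = list(range(num_phys))   # available physical registers
        self.map_table = {}                      # architectural -> physical

    def rename_dest(self, arch_reg):
        # Allocate a physical register for a newly defined architectural register.
        if not self.free_list:
            raise RuntimeError("fetch stalls: no free physical registers")
        phys = self.free_list.pop()
        old_phys = self.map_table.get(arch_reg)  # previous mapping, if any
        self.map_table[arch_reg] = phys
        return phys, old_phys

    def retire_redefinition(self, old_phys):
        # Only now is the *previous* mapping freed -- not at the last use.
        if old_phys is not None:
            self.free_list.append(old_phys)

r = Renamer(num_phys=4)
p1, old = r.rename_dest("r20")    # instruction 1 defines r20 (maps to, say, P1)
# The last use of r20 may retire long before r20 is redefined; p1 stays
# allocated (dead) until a later rename_dest("r20") retires.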
2.2 Physical Register Organization and the Register Deallocation Problem
In fine-grained multithreaded architectures like the Tera [1], each hardware context includes
a register file for one thread, and a thread only accesses registers from its own context, as shown
in Figure 2a. 1 In contrast, in an SMT processor, a single register file can be shared among all
contexts (Figure 2b). We call this organization FSR (Fully-Shared Registers), because the
register file is structured as a single pool of physical registers and holds the state of all resident
threads. SMT's register renaming hardware is essentially an extension of the register mapping
scheme to multiple contexts. Threads name architectural registers from their own context, and
the renaming hardware maps these thread-private, architectural registers to the pool of thread-
independent physical registers. Register renaming thus provides a transparent mechanism for
sharing the register pool.
Although an SMT processor is best utilized when all hardware contexts are busy, some
contexts may occasionally be idle. To maximize performance, no physical registers should be
allocated to idle contexts; instead, all physical registers should be shared by the active threads.
However, with existing register deallocation schemes, when a thread terminates, its architectural
registers remain allocated in the processor until they are redefined by a new thread executing in
the context. Consequently, the FSR organization behaves more like a partitioned file, as shown
in Figure 2c. (We call this partitioned organization PASR, for Private Architectural and Shared
Renaming registers.) Most ISAs have thirty-two architectural registers; consequently, thirty-two
physical registers must be dedicated to each context in a PASR scheme. So, for example, on an
eight-context SMT with 352 registers, only 96 (352-8*32) physical registers are available for
sharing among the active threads.
1. Note that we are discussing different logical organizations for the register file. How the file is physically
structured is a separate issue.
Figure 1: This example illustrates the inability of the renaming hardware to
efficiently deallocate the physical register for r20 (the destination registers are
italicized; instruction 3 is addl r20,r21,r12). Instruction 1 defines r20, creating a
mapping to a physical register, say P1. Instruction 3 is the last use of r20.
However, P1 cannot be freed until r20 is redefined in instruction n. In the
meantime, several instructions and potentially a large number of cycles can pass
between the last use of P1 (r20) and its deallocation.
Figure 2: Logical register file configurations: (a) a Tera-style, partitioned register file, one per
thread; (b) an SMT register file in which all threads share a common pool of physical registers;
(c) an SMT register file given current register deallocation schemes: each hardware context has
dedicated physical registers for the ISA-defined architectural registers, and only the renaming
registers are shared across all contexts.
3 Methodology for the Experiments
We have defined several register file management techniques devised to compensate for
conservative register deallocation, and evaluated them using instruction-level simulation of
applications from the SPEC 95 [20] and SPLASH-2 [26] benchmark suites (Table 1). The SUIF
compiler [9] automatically parallelized the SPEC benchmarks into multithreaded C code; the
SPLASH-2 programs were already explicitly parallelized by the programmer. All programs
were compiled with the Multiflow trace-scheduling compiler [14] into DEC Alpha object files.
(Multiflow generates high-quality code, using aggressive static scheduling for wide issue, loop
unrolling, and other ILP-exposing optimizations.) The object files were then linked with our
versions of the ANL [2] and SUIF runtime libraries to create executables.
Our SMT simulator processes unmodified Alpha executables and uses emulation-based,
instruction-level simulation to model in detail the processor pipelines, hardware support for out-
of-order execution, and the entire memory hierarchy, including the TLBs (128 entries each for
instruction and data TLBs), cache behavior, and bank and bus contention. The memory
hierarchy in our processor consists of two levels of cache, with sizes, latencies, and bandwidth
characteristics, as shown in Table 2. Because register file management is affected by memory
Application       Data set                                        Instructions simulated
SPEC 95 FP:
  applu           33x33x33 array, 2 iterations                    271.9 M
  -               - iterations                                    473.5 M
  mgrid           64x64x64 grid, 1 iteration                      3.193 B
  su2cor          16x16x16x16, vector length 4096, 2 iterations   5.356 B
  -               - iterations                                    419.1 M
  tomcatv         513x513 array, 5 iterations                     189.1 M
  turb3d          - data points                                   32.0 M
SPLASH-2:
  LU              512x512 matrix                                  431.2 M
  water-nsquared  512 molecules, 3 timesteps                      869.9 M
  water-spatial   512 molecules, 3 timesteps                      783.5 M

Table 1: Benchmarks used in this study. For the SPEC95 applications, our data sets are the same size as the SPEC
reference set, but we have reduced the number of iterations because of the length of simulation time.
                          L1 I-cache   L1 D-cache   L2 cache
Size (bytes)              128 K / -    -            -
Associativity             two-way      two-way      direct-mapped
Line size (bytes)         64           64           64
Banks                     -            -            -
Cache fill time (cycles)  -            -            -
Latency to next level     -            -            -

Table 2: Configuration and latency parameters of the SMT cache hierarchies used in this study.
latencies, 1 we experimented with two different memory hierarchies. The larger memory
configuration represents a probable SMT memory hierarchy for machines in production
approximately 3 years in the future. The smaller configuration serves two purposes: (1) it
models today's memory hierarchies, as well as those of tomorrow's low-cost processors, such as
multimedia co-processors, and (2) it provides a more appropriate ratio between data set and
cache size, modeling programs with larger data sets or data sets with less data locality than those
in our benchmarks [19].
We also examined a variety of register file sizes, ranging between 264 and 352, to gauge the
sensitivity of the register file management techniques to register size. With more than 352
registers, other processor resources, such as the instruction queues, become performance
bottlenecks. At the low end, at least 256 registers are required to hold the architectural registers
for all eight contexts, 2 and we provide an additional 8 renaming registers for a total of 264.
Smaller register files are attractive for several reasons. First, they have a shorter access time; this
advantage could be used either to decrease the cycle time (if register file access is on the critical
path) or to eliminate the extra stages we allow for register reading and writing. Second, they
take up less area. Register files in current processors occupy a negligible portion (roughly 1%)
of the chip area, but a large, multi-ported SMT register file could raise that to around 10%, an
area allocation that might not be acceptable. Third, smaller register files consume less power.
For branch prediction, we used a 256-entry, 4-way set-associative branch target buffer and a
McFarling-style hybrid predictor (8k entries) that selects between a global history predictor
(13 history bits) and a local predictor (a 2k-entry local history table that indexes into a
4k-entry, 2-bit local prediction table) [16].
Because of the length of the simulations, we limited our detailed simulation results to the
parallel computation portion of the applications (the norm for simulating parallel applications).
For the initialization phases of the applications, we used a fast simulation mode that warmed the
caches, and then turned on the detailed simulation mode once the main computation phases were
reached.
4 Techniques for improving register file management
Despite its flexible organization, an SMT register file will be underutilized, because
renaming hardware fails to deallocate dead registers promptly. In this section, we describe
communication mechanisms that allow the operating system and the compiler to assist the
renaming hardware with register deallocation, by identifying dead registers that belong to both
idle and active contexts.
4.1 Operating system support for dead-register deallocation
As explained in Section 2.2, when an executing thread terminates, the thread's physical
registers remain allocated. Consequently, active threads cannot access these registers, causing a
fully-shared register file (FSR) to behave like one in which most of the registers are partitioned
by context (PASR).
After a thread terminates, the operating system decides what to schedule on the newly-available
hardware context. There are three options, each of which has a different implication
1. Smaller caches increase miss rates, and because more latencies have to be hidden, register pressure
increases. The opposite is true for larger caches.
2. in the absence of mechanisms to avoid or detect and recover from deadlock.
for register deallocation:
1. Idle contexts: If there are no new threads to run, the context will be idle. The terminated
thread's physical registers could be deallocated, so that they become available to active
threads.
2. Switching to a new thread: Physical registers for a new thread's architectural registers are
normally allocated when it begins execution. A more efficient scheme would free the
terminated thread's physical registers, allocating physical registers to the new thread on
demand. Unallocated physical registers would then be available to other contexts.
3. Switching to a swapped-out thread: Context switch code loads the register state of the new
thread. As these load instructions retire, physical registers used by the terminated thread are
deallocated.
All three scenarios present an opportunity to deallocate a terminated thread's physical
registers early. We propose a privileged, context deallocation instruction (CDI) that triggers
physical register deallocation for a thread. The operating system scheduler would execute the
instruction in the context of the terminated thread. In response, the renaming hardware would
free the terminating thread's physical registers when the instruction retires.
Three tasks must be performed to handle the context deallocation instruction: creating a new
map table, invalidating the context's register mappings, and returning the registers to the free
list. When a CDI enters the pipeline, the current map table is saved and a new map table with no
valid entries is created; the saved map table identifies the physical registers that should be
deallocated, while the new table will hold subsequent register mappings. Once the CDI retires,
the saved map is traversed, and all mapped physical registers are returned to the free list.
Finally, all entries in the saved map are invalidated. If the CDI is executed on a wrong-path and
consequently gets squashed, both the new and saved map tables are thrown away.
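As a rough sketch of these three tasks (hypothetical Python; the class and function names are invented for illustration, not taken from any real renaming hardware):

class Context:
    def __init__(self, map_table):
        self.map_table = map_table    # architectural -> physical mappings
        self.saved_map = None

def cdi_enter(context):
    # CDI enters the pipeline: checkpoint the map table, install an empty one.
    context.saved_map = dict(context.map_table)  # identifies registers to free
    context.map_table = {}                       # holds subsequent mappings

def cdi_retire(context, free_list):
    # CDI retires: traverse the saved map, return registers to the free list.
    for arch_reg, phys_reg in context.saved_map.items():
        free_list.append(phys_reg)
    context.saved_map = None                     # invalidate the saved entries

def cdi_squash(context):
    # Wrong-path CDI: discard the new table and restore the checkpoint.
    context.map_table = context.saved_map
    context.saved_map = None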
Much of the hardware required for these three tasks already exists in out-of-order processors
with register mapping. When a branch enters the pipeline, a copy of the map table is created;
when the branch is resolved, one of the map tables is invalidated, depending on whether the
speculation was correct. If instructions must be squashed, the renaming hardware traverses the
active list (or some other structure that identifies physical registers) to determine which physical
registers should be returned to the free list. Although the CDI adds a small amount of logic to
existing renaming hardware, it allows the SMT register file to behave as a true FSR register file,
instead of a PASR, by deallocating registers more promptly.
Experimental results
To evaluate the performance of the fully-shared register organization (FSR), we varied the
number of active threads and register set sizes, and compared it to PASR with identical
configurations. We modeled an OS scheduler that frees all physical registers for terminated
threads, by making all physical registers available when a parallel application began execution.
The results of this comparison are shown in Figure 3. With PASR (Figure 3a), only
renaming registers are shared among threads. Execution time therefore was greater for smaller
register files and larger numbers of threads, as more threads competed for fewer registers. FSR,
shown in Figure 3b, was less sensitive to both parameters. In fact, the smaller register files had
the same performance as larger ones when few threads were executing, because registers were
not tied up by idle contexts. Except for the smallest configuration, FSR performance was stable
with varying numbers of threads, because the parallelism provided by additional threads
overcame the increased competition for registers; only the 264-register file had a performance
sweet spot.
The speedups in Figure 4 show that FSR equals or surpasses PASR for all register file sizes
and numbers of threads. FSR provides the greatest benefits when it has more registers to share
(several idle contexts) and PASR has fewer (small register files). For example, with 320
registers and 4 idle contexts (4 threads), FSR outperformed PASR by 8%, averaged over all
applications. With only 288 or 264 registers, FSR's advantage grew to 34% and 205%, and with
6 idle contexts (and 320 registers) to 15%. Taking both factors into account (288/264 registers, 6
idle contexts), FSR outperformed PASR by 51%/232%. Only when all contexts were active
were FSR and PASR comparable; in this case the architectural state for all threads is resident in
both schemes.
FSR has a larger performance edge with smaller cache hierarchies, because hiding the longer
memory latencies requires more in-flight instructions, and therefore more outstanding registers.
This suggests that efficient register management is particularly important on memory-intensive
workloads or applications with relatively poor data locality.
In summary, the results illustrate that partitioning a multithreaded register file (PASR)
restricts its ability to expose parallelism. Operating system support for deallocating registers in
idle contexts, which enables the register file to be fully shared across all threads (FSR), both
improves performance, and makes it less dependent on the size of the register file and the
number of active threads.
4.2 Compiler support for dead-register deallocation
Figure 3: Execution time for FSR and PASR with the larger memory hierarchy, as a function of the number of
threads, for register file sizes from 264 to 352. Each register file organization is normalized to its 352-register,
1-thread execution. Results with the smaller SMT memory hierarchy had identical trends. Panels: (a) PASR, (b) FSR.

Figure 4: FSR speedups over PASR for the larger (a) and smaller (b) memory hierarchies, at different register file
sizes, with 2, 4, and 8 threads.

As previously described, hardware register deallocation is inefficient, because the hardware
has knowledge only of a register's redefinition, not its last use. Although the compiler can
identify the last use of a register, it currently has no means for communicating this information
to the hardware.
In this section, we describe and evaluate several mechanisms that allow the compiler to
convey register last-use information to the hardware, and show that they improve register
utilization on SMT processors with an FSR register file organization. The proposed mechanisms
are either new instructions or fields in existing instructions that direct the renaming hardware to
deallocate physical registers whose last uses have retired.
First, however, we examine three factors that motivate the need for improved register
deallocation: (1) how often physical registers are unavailable, (2) how many registers are dead
each cycle, and (3) how many cycles pass between a register's last use and its redefinition,
which we call the dead-register distance. Register unavailability is the percentage of total
execution cycles in which the processor runs out of physical registers (causing fetch stalls); it is
a measure of the severity of the problem caused by current hardware register-deallocation
mechanisms. The average number of dead registers each cycle indicates how many physical
registers could be reused, and thus the potential for a compiler-based solution. Dead-register
              integer                                               FP
Registers     applu hydro2d  swim  tomcatv  fft    LU   radix water-n   applu hydro2d swim tomcatv  fft   LU  radix water-n
Small Cache Hierarchy
  288          2.6    2.1    5.3     -       -      -     -     -        -      -      -     -       -    -    -     -
  264          2.2    2.5   44.2    0.2    64.9   22.7  93.4   0.7      88.1   87.2   2.7  94.0     8.5  6.7  0.0  74.7

Table 3: Frequency (percentage of total execution cycles) that no registers were available when executing 8 threads,
for the large and small cache hierarchies. Bold entries (frequencies over 10%) represent severe stalling due to
insufficient registers.
Table 4: Average number of dead registers per cycle when executing 8 threads (integer and FP, per application, for
the large and small cache hierarchies). Bold entries are those where no registers were available more than 10% of
execution cycles.
distance measures the average number of cycles between the completion of an instruction that
last uses a register and that register's deallocation; it is a rough estimate of the likely
performance gain of a solution.
The data in Table 3 indicate that, while the projected SMT design (352 registers in an FSR
file) is sufficient for most applications, smaller register files introduce bottlenecks, often severe,
on many applications. (Register pressure was particularly high for integer registers in fft and
radix, and for floating-point registers in applu, hydro2d, tomcatv, and water-n.) Applications
also ran out of registers more frequently with smaller cache hierarchies. A closer examination
reveals that in all cases where stalling due to insufficient registers was a problem (bold entries in
Table
3), a huge number of registers were dead (shown in Table 4). Table 5 shows that if these
dead registers had been freed, they could have been reallocated many instructions/cycles earlier.
All this suggests that, if registers were managed more efficiently, performance could be
recouped and even a 264-register FSR might be sufficient.
Five compiler-based solutions
Using dataflow analysis, the compiler can identify the last use of a register value. In this
section, we evaluate five alternatives for communicating last-use information to the renaming
hardware:
              applu  hydro2d  swim   tomcatv   fft    LU    radix  water-n  average
int instrs     57.6    59.1   32.3     67.2    30.7   56.9    27     32.7     47.2
int cycles    214.6   155.4   27.8    225.7    89.9   85.6    80    215.4    125.5
FP instrs      18.4    30.9   11.7     22.6    20.4    7.1     -     32.7     18.5
FP cycles      97.1   157.4   28.4    120.0    65.7   22.4     -    133.7     81.8

Table 5: Dead-register distance for 264 registers and the smaller cache hierarchy. The data indicate that registers
are frequently not deallocated until many cycles after their last use has retired. Figures for other register sizes
were similar. Bold entries are those where no registers were available more than 10% of execution cycles.
Figure 5: These code fragments illustrate the register-freeing mechanisms: (a) is the original code fragment (a
sequence of ldl, addl, and stl instructions); (b) shows the Free Register instructions necessary to free registers
r12 to r25; (c) is the Free Mask instructions necessary to free the same registers (the mask is loaded with an
lda/ldah pair, e.g. lda r25,0x1000(r31), after which the mask instruction frees the integer registers it identifies).
1. Free Register Bit communicates last-use information to the hardware via dedicated
instruction bits, with the dual benefits of immediately identifying last uses and requiring no
instruction overhead. Although it is unlikely to be implemented, because most instruction
sets do not have two unused bits, it can serve as an upper bound on performance
improvements that can be attained with the compiler's static last-use information. To
simulate Free Register Bit, we modified Multiflow to generate a table, indexed by the PC,
that contains flags indicating whether either of an instruction's register operands were last
uses. On each simulated instruction, the simulator performed a lookup in this table to
determine whether register deallocation should occur when the instruction is retired.
2. Free Register is a more realistic implementation of Free Register Bit. Rather than
specifying last uses in the instruction itself, it uses a separate instruction to specify one or
two registers to be freed. Our compiler generates a Free Register instruction (an unused
opcode in the Alpha ISA) immediately after any instruction containing a last register use (if
the register is not also redefined by the same instruction). Like Free Register Bit, it frees
registers as soon as possible, but with an additional cost in dynamic instruction overhead.
3. Free Mask is an instruction that can free up to 32 registers, and is used to deallocate dead
registers over a large sequence of code, such as a basic block or a set of basic blocks. For our
experiments, we inserted a Free Mask instruction at the end of each Multiflow trace. Rather
than identifying dead registers in operand specifiers, the compiler generates a bit mask. In
our particular implementation, the Free Mask instruction uses the lower 32-bits of a register
as a mask to indicate which registers can be deallocated. The mask is generated and loaded
into the register using a pair of lda and ldah instructions, each of which has a 16-bit
immediate field. (The example in Figure 5 compares Free Register with Free Mask for a
code fragment that frees integer registers 20 through 25.) Free Mask sacrifices the
promptness of Free Register's deallocation for a reduction in instruction overhead.
4. Free Opcode is motivated by our observation that 10 opcodes were responsible for 70% of
the dynamic instructions with last use bits set, indicating that most of the benefit of Free
Register Bit could be obtained by providing special versions of those opcodes. In addition to
executing their normal operation, the new instructions also specify that either the first,
second, or both operands are last uses. In this paper, we use the 15 opcodes listed in Table 6,
obtained by profiling Free Register Bit instruction frequencies on applu, hydro2d and
tomcatv. 1 Retrofitting these 15 instructions into an existing ISA should be feasible; for
example, all can be added to the DEC Alpha ISA, without negatively impacting instruction
decoding.
5. Free Opcode/Mask augments Free Opcode by generating a Free Mask instruction at the end
of each trace. This hybrid scheme addresses register last uses in instructions that are not
covered by our particular choice of instructions for Free Opcode.
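All five schemes depend on the compiler's dataflow analysis to identify last uses. As a rough illustration of that analysis (hypothetical Python, not the Multiflow implementation, which must also track liveness across basic blocks):

def mark_last_uses(block, live_out):
    # Backward scan over one basic block. block: list of (dest, sources);
    # live_out: registers live after the block. Returns (index, register)
    # pairs at which a register is used for the last time.
    live = set(live_out)
    last_uses = set()
    for i in range(len(block) - 1, -1, -1):
        dest, sources = block[i]
        live.discard(dest)              # a definition kills liveness above it
        for r in sources:
            if r not in live:           # latest use seen so far: a last use
                last_uses.add((i, r))
                live.add(r)
    return last_uses

block = [("r20", ["r22"]), ("r12", ["r20", "r21"])]
print(mark_last_uses(block, live_out={"r12"}))
# -> {(1, 'r20'), (1, 'r21'), (0, 'r22')}: the uses in addl r20,r21,r12 are
#    last uses because neither r20 nor r21 is live out of the block.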
Current renaming hardware provides mechanisms for register deallocation (i.e., returning
physical registers to the free register list) and can perform many deallocations each cycle. For
example, the Alpha 21264 deallocates up to 13 registers each cycle to handle multiple
1. We experimented with between 10 and 22 Free Opcode instructions. The additional opcodes after the top
15 tended to occur frequently in only one or two applications, and using them brought limited additional
benefits (exceptions were swim and radix).
instruction retirement or squashing. All five proposed register deallocation techniques use a
similar mechanism. Free Mask is slightly more complex, because it can specify up to 32
registers; in this case deallocation could take multiple cycles if necessary. (In our experiments,
however, only 7.2 registers, on average, were freed by each mask.)
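A sketch of how the renamer might act on a retiring Free Mask (hypothetical Python; in hardware the loop would be a parallel scan, possibly spread over several cycles):

def retire_free_mask(mask, map_table, free_list):
    # Each set bit i frees the physical register currently mapped to
    # architectural register i, and invalidates that mapping.
    for arch_reg in range(32):
        if mask & (1 << arch_reg) and map_table.get(arch_reg) is not None:
            free_list.append(map_table[arch_reg])
            map_table[arch_reg] = None

map_table = {20: 40, 25: 41}          # arch r20 -> P40, arch r25 -> P41
free_list = []
retire_free_mask((1 << 20) | (1 << 25), map_table, free_list)
print(free_list)                       # [40, 41]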
The five register deallocation schemes are compared in Figure 6, which charts their speedup
versus no explicit register deallocation. The Free Register Bit bars show that register
deallocation can (potentially) improve performance significantly for small register files (77% on
average, but ranging as high as 195%). The Free Register Bit results highlight the most
attractive outcome of register deallocation: by improving register utilization, an SMT processor
with small register files can achieve large register file performance, as shown in Figure 7. The
significance of this becomes apparent in the context of conventional register file design. Single-
threaded, out-of-order processors often double their registers to support greater degrees of
parallelism (e.g., the R10000 has 64 physical registers, the 21264 has 80). With multiple register
contexts, an SMT processor need not double its architectural registers if they are effectively
shared. Our results show that an 8-context SMT with an FSR register file (i.e., support for
deallocating registers in idle contexts) needs only 96 additional registers to alleviate physical
register pressure, lowering the renaming register cost to 27% of the ISA-defined registers.
Compiler-directed register deallocation for active contexts drops the overhead even further, to
only 8 registers or 3% of the architectural register state.
The Free Register and Free Mask results highlight the trade-off between these two
alternative schemes. Free Register is more effective at reducing the number of dead registers,
because it deallocates them more promptly, at their last uses. When registers are a severe
bottleneck, as in applu, hydro2d, tomcatv, and radix with small register files, Free Register
outperforms Free Register Mask. Free Register Mask, on the other hand, incurs less instruction
overhead; therefore it is preferable with larger register files and applications with low register
usage.
Free Opcode and its variant, Free Opcode/Mask, 1 are the schemes of choice. They strike a
balance between Free Register and Free Mask by promptly deallocating registers, while
avoiding instruction overhead. When registers were at a premium, Free Opcode(/Mask)
achieved or exceeded the performance of Free Register; with the larger register file and for
applications with low register usage, Free Mask performance was attained or surpassed.
For most programs (all register set sizes and both cache hierarchies) Free Opcode(/Mask)
met or came close to the optimal performance of Free Register Bit. (For example, it was within
4% on average for 264 registers, and 10% for 352, on the small cache hierarchy.) With further
tuning of opcode selection and the use of other hybrid schemes (perhaps judiciously combining
Free Opcode, Free Mask, and Free Register), we expect that the gap between it and Free
Register Bit will be narrowed even further, and that we will achieve the upper bound of
compiler-directed register deallocation performance.

1. We profiled a very small sample of programs to determine the best selection of opcodes for Free Opcode,
and used Free Opcode/Mask to provide more flexibility in opcode choice. The speedups of the two
schemes are very close, and which has the performance edge varies across the applications for 264
registers. Looking at a different or larger set of programs to determine the hot opcodes might tip the
performance balance for these cases. (For example, by adding 6 single-precision floating point Free
Opcodes to the single-precision swim, Free Opcode exceeded both Free Register and Free Mask.)
Therefore we discuss the results for Free Opcode and Free Opcode/Mask together.

Table 6: The opcodes used in Free Opcode (integer and FP opcodes and the operand positions they free). Note
that for mult, stt, and fcmov, two new versions of each must be added. The versions specify whether the first,
second, or both operands are last uses.

Figure 6: A comparison of register deallocation alternatives (Free Register Bit, Free Register, Free Register Mask,
Free Register Opcode, Free Register Opcode/Mask) on applu, hydro2d, swim, tomcatv, fft, LU, radix,
water-nsquared, and their mean, for the large and small cache hierarchies. Each bar is the speedup over no
deallocation with 8 threads.
In summary, by providing the hardware with explicit information about register lifetimes,
compiler-directed register deallocation can significantly improve performance on small SMT
register files, so that they become a viable alternative even with register-intensive applications.
Although particularly well-suited for SMT, register deallocation should benefit any out-of-order
processor with explicit register renaming.
5 Related work
Several researchers have investigated register file issues similar to those discussed in this
paper. Large register files are a concern for both multithreaded architectures and processors with
register windows. Waldspurger and Weihl [25] proposed compiler and runtime support for
managing multiple register sets in a register file. The compiler tries to identify an optimal
number of registers for each thread, and generates code using that number. The runtime system
then tries to dynamically pack the register sets from all active threads into the register file. Nuth
and Dally's [17] named state register file caches register values by dynamically mapping active
registers to a small, fast set of registers, while backing the full register name space in memory.
To reduce the required chip area in processors with register windows, Sun designed 3-D
register files [22]. Because only one register window can be active at any time, the density of the
register file can be increased by overlaying multiple register cells so that they share wires.
Figure 7: Comparison of execution time (in millions of cycles) for FSR with and without Free Register Bit for a
range of register file sizes, on both the larger and smaller cache sizes (applu, hydro2d, swim, and tomcatv shown).
The height of the solid black bar represents the execution time when Free Register Bit is used. The total height of
the bar corresponds to the execution time when no deallocation is performed. The relatively flat height of the
black bars indicates that with Free Register Bit, smaller register files can achieve the performance of larger
register files.

Several papers have investigated register lifetimes and other register issues. Farkas, et al. [6]
compared the register file requirements for precise and imprecise interrupts and their effects on
the number of registers needed to support parallelism in an out-of-order processor. They also
characterized the lifetime of register values, by identifying the number of live register values
present in various stages of the renaming process.
Franklin and Sohi [7] and Lozano and Gao [15] found that register values have short
lifetimes, and often do not need to be committed to the register file. Both proposed compiler
support to identify last uses and architectural mechanisms to allow the hardware to ignore writes
to reduce register file traffic and the number of write ports, but neither applied these concepts to
register deallocation. Pleszkun and Sohi [18] proposed a mechanism for exposing the reorder
buffer to the compiler, so that it could generate better schedules and provide speculative
execution. Sprangle and Patt [21] proposed a statically-defined tag ISA that exposes register
renaming to the compiler and relies on basic blocks as the atomic units of work. Part of the
register file is used for storing basic block effects, and the rest handles values that are live across
basic block boundaries.
Janssen and Corporaal [10], Capitanio, et al. [3], Llosa, et al. [12], Multiflow [4], and
Kiyohara, et al. [11] also investigated techniques for handling large register files, including
partitioning, limited connectivity, replication, and the use of new opcodes to address an
extended register file.
6 Conclusions
Simultaneous multithreading has the potential to significantly increase processor utilization
on wide-issue out-of-order processors, by permitting multiple threads to issue instructions to the
processor's functional units within a single cycle. As a consequence, SMT requires a large
register file to support the multiple thread contexts. This raises a difficult design tradeoff,
because large register files can consume die area and impact performance.
This paper has introduced new software-directed techniques that increase utilization of the
registers in an SMT. Fundamental to these techniques is the global sharing of registers among
threads, both for architectural register and renaming register needs. By introducing new
instructions or additional fields in the ISA, we allow the operating system and compiler to signal
physical register deallocation to the processor, thereby greatly decreasing register waste. The
result is more effective register use, permitting either a reduction in register file size or an
increase in performance for a given file size.
We have introduced explicit software-directed deallocation in two situations. First, when a
context becomes idle, the operating system can indicate that the idle context's physical registers
can be deallocated. This permits those registers to be freed in order to serve the renaming needs
of other executing threads. Our results show that such notification can significantly boost
performance for the remaining threads, e.g., a register file with 264 registers demonstrates
performance equivalent to a 352-register file when only 4 threads are running. Second, by
allowing the compiler to signal the last use of a register, the processor need not wait for a
redefinition of that register in order to reuse it. We proposed several mechanisms for signalling
last register use, and showed that on small register files, average speedups of 1.6 can be obtained
by using the most efficient of these mechanisms. While our results are shown in the context of
an SMT processor, these mechanisms would be appropriate for any processor using register
renaming for out-of-order instruction issue.
--R
The Tera computer system
Portable Programs for Parallel Processors.
Partitioned register files for VLIWs: A preliminary analysis of tradeoffs.
A VLIW architecture for a trace scheduling compiler.
Simultaneous multithreading: A platform for next-generation processors
Register file design considerations in dynamically scheduled processors
Register traffic analysis for streamlining inter-operation communication in fine-grained parallel processors
Digital 21264 sets new standard.
Maximizing multiprocessor performance with the SUIF compiler.
Partitioned register files for TTAs.
Register connection: A new approach to adding registers into instruction set architectures.
Converting thread-level parallelism to instruction-level parallelism via simultaneous multithreading
Exploiting short-lived variables in superscalar processors
Combining branch predictors.
The named-state register file: Implementation and performance
The performance potential of multiple functional unit processors.
Scaling parallel programs for multiprocessors: Methodology and examples.
Standard Performance Evaluation Council.
Facilitating superscalar processing via a combined static/dynamic register renaming scheme.
A three dimensional register file for superscalar processors.
Exploiting choice: Instruction fetch and issue on an implementable simultaneous multithreading processor.
Simultaneous multithreading: Maximizing on-chip parallelism
Register relocation: Flexible contexts for multithreading.
The SPLASH-2 programs: Characterization and methodological considerations
The MIPS R10000 superscalar microprocessor.
--TR
--CTR
Hua Yang , Gang Cui , Hong-Wei Liu , Xiao-Zong Yang, Compacting register file via 2-level renaming and bit-partitioning, Microprocessors & Microsystems, v.31 n.3, p.178-187, May, 2007
Matthew Postiff , David Greene , Steven Raasch , Trevor Mudge, Integrating superscalar processor components to implement register caching, Proceedings of the 15th international conference on Supercomputing, p.348-357, June 2001, Sorrento, Italy
Dean M. Tullsen , John S. Seng, Storageless value prediction using prior register values, ACM SIGARCH Computer Architecture News, v.27 n.2, p.270-279, May 1999
José F. Martínez , Jose Renau , Michael C. Huang , Milos Prvulovic , Josep Torrellas, Cherry: checkpointed early resource recycling in out-of-order microprocessors, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey
David W. Oehmke , Nathan L. Binkert , Trevor Mudge , Steven K. Reinhardt, How to Fake 1000 Registers, Proceedings of the 38th annual IEEE/ACM International Symposium on Microarchitecture, p.7-18, November 12-16, 2005, Barcelona, Spain
Mikko H. Lipasti , Brian R. Mestan , Erika Gunadi, Physical Register Inlining, ACM SIGARCH Computer Architecture News, v.32 n.2, p.325, March 2004
Chulho Shin , Seong-Won Lee , Jean-Luc Gaudiot, Adaptive dynamic thread scheduling for simultaneous multithreaded architectures with a detector thread, Journal of Parallel and Distributed Computing, v.66 n.10, p.1304-1321, October 2006
Eric Tune , Rakesh Kumar , Dean M. Tullsen , Brad Calder, Balanced Multithreading: Increasing Throughput via a Low Cost Multithreading Hierarchy, Proceedings of the 37th annual IEEE/ACM International Symposium on Microarchitecture, p.183-194, December 04-08, 2004, Portland, Oregon
James Burns , Jean-Luc Gaudiot, SMT Layout Overhead and Scalability, IEEE Transactions on Parallel and Distributed Systems, v.13 n.2, p.142-155, February 2002
Monreal , Victor Vinals , Jose Gonzalez , Antonio Gonzalez , Mateo Valero, Late Allocation and Early Release of Physical Registers, IEEE Transactions on Computers, v.53 n.10, p.1244-1259, October 2004
Joshua A. Redstone , Susan J. Eggers , Henry M. Levy, An analysis of operating system behavior on a simultaneous multithreaded architecture, ACM SIGPLAN Notices, v.35 n.11, p.245-256, Nov. 2000
Joshua A. Redstone , Susan J. Eggers , Henry M. Levy, An analysis of operating system behavior on a simultaneous multithreaded architecture, ACM SIGARCH Computer Architecture News, v.28 n.5, p.245-256, Dec. 2000 | simultaneous multithreading;multithreaded architecture;architecture;register file |
322530 | Inductive analysis of the Internet protocol TLS. | Internet browsers use security protocols to protect sensitive messages. An inductive analysis of TLS (a descendant of SSL 3.0) has been performed using the theorem prover Isabelle. Proofs are based on higher-order logic and make no assumptions concerning beliefs or finiteness. All the obvious security goals can be proved; session resumption appears to be secure even if old session keys are compromised. The proofs suggest minor changes to simplify the analysis. TLS, even at an abstract level, is much more complicated than most protocols verified by researchers. Session keys are negotiated rather than distributed, and the protocol has many optional parts. Nevertheless, the resources needed to verify TLS are modest: six man-weeks of effort and three minutes of processor time. | INTRODUCTION
Internet commerce requires secure communications. To order goods, a customer
typically sends credit card details. To order life insurance, the customer might
have to supply condential personal data. Internet users would like to know that
such information is safe from eavesdropping or alteration.
Many Web browsers protect transmissions using the protocol SSL (Secure Sockets
Layer). The client and server machines exchange nonces and compute session keys
from them. Version 3.0 of SSL has been designed to correct a flaw of previous
versions, the cipher-suite rollback attack, whereby an intruder could get the parties
to adopt a weak cryptosystem [Wagner and Schneier 1996]. The latest version of
the protocol is called TLS (Transport Layer Security) [Dierks and Allen 1999]; it
closely resembles SSL 3.0.
Is TLS really secure? My proofs suggest that it is, but one should draw no
conclusions without reading the rest of this paper, which describes how the protocol
was modelled and what properties were proved. I have analyzed a much simplified
form of TLS; I assume hashing and encryption to be secure.
My abstract version of TLS is simpler than the concrete protocol, but it is still
more complex than the protocols typically veried. We have not reached the limit
of what can be analyzed formally.
The proofs were conducted using Isabelle/HOL [Paulson 1994], an interactive theorem
prover for higher-order logic. They use the inductive method [Paulson 1998],
which has a simple semantics and treats infinite-state systems. Model-checking is
not used, so there are no restrictions on the agent population, numbers of concurrent
runs, etc.
The paper gives an overview of TLS (§2) and of the inductive method for verifying
protocols (§3). It continues by presenting the Isabelle formalization of TLS (§4) and
outlining some of the properties proved (§5). Finally, the paper discusses related
work (§6) and concludes (§7).
2. OVERVIEW OF TLS
A TLS handshake involves a client, such as a World Wide Web browser, and a
Web server. Below, I refer to the client as A ('Alice') and the server as B (`Bob'),
as is customary for authentication protocols, especially since C and S often have
dedicated meanings in the literature.
At the start of a handshake, A contacts B, supplying a session identifier and
nonce. In response, B sends another nonce and his public-key certificate (my model
omits other possibilities). Then A generates a pre-master-secret, a 48-byte random
string, and sends it to B encrypted with his public key. A optionally sends a signed
message to authenticate herself. Now, both parties calculate the master-secret M
from the nonces and the pre-master-secret, using a secure pseudo-random-number
function (PRF). They calculate session keys from the nonces and master-secret.
Each session involves a pair of symmetric keys; A encrypts using one and B encrypts
using the other. Before sending application data, both parties exchange finished
messages to confirm all details of the handshake and to check that cleartext parts
of messages have not been altered.
A full handshake is not always necessary. At some later time, A can resume a
session by quoting an old session identifier along with a fresh nonce. If B is willing
to resume the designated session, then he replies with a fresh nonce. Both parties
compute fresh session keys from these nonces and the stored master-secret, M .
Both sides confirm this shorter run using finished messages.
TLS is highly complex. My version leaves out many details for the sake of
simplicity:
|Record formats, field widths, cryptographic algorithms, etc. are irrelevant in an
abstract analysis.
|Alert and failure messages are unnecessary because bad sessions can simply be
abandoned.
|The server key exchange message allows anonymous sessions among other
things, but it is not an essential part of the protocol.
Fig. 1. The TLS Handshake Protocol as Modelled. (Message flow between client and server:
client hello carrying A, Na, Sid, Pa; server hello; server certificate cert(B,Kb); optional client
certificate cert(A,Ka); client key exchange; optional certificate verify; client finished, i.e.
{Finished} under clientK(Na,Nb,M); server finished.)
Here are the handshake messages in detail, as I model them, along with comments
about their relation to full TLS. Section numbers, such as §7.3, refer to the TLS
specification [Dierks and Allen 1999]. In Fig. 1, dashed lines indicate optional parts.
client hello    A -> B : A, Na, Sid, Pa

The items in this message include the nonce Na, called client random, and the
session identifier Sid. Item Pa is A's set of preferences for encryption and com-
pression; due to export controls, for example, some clients cannot support certain
encryption methods. For our purposes, all that matters is that both parties can
detect if Pa has been altered during transmission (§7.4.1.2).
server hello    B -> A : Nb, Sid, Pb

Agent B, in his turn, replies with his nonce Nb (server random). He repeats the
session identifier and returns as Pb his cryptographic preferences, selected from Pa.
server certificate    B -> A : cert(B, Kb)

The server's public key, Kb, is delivered in a certificate signed by a trusted third
party. (The TLS proposal (§7.4.2) says it is 'generally an X.509v3 certificate.' I
assume a single certification authority and omit lifetimes and similar details.) Making
the certificate mandatory and eliminating the server key exchange message
simplifies server hello. I leave certificate request (§7.4.4) implicit:
A herself decides whether or not to send the optional messages client certificate
and certificate verify.
client certificate*    A -> B : cert(A, Ka)
client key exchange    A -> B : Crypt Kb PMS
certificate verify*    A -> B : Crypt (priK A) (Hash{Nb, B, PMS})

For simplicity, I do not model the possibility of arriving at the pre-master-secret via
a Diffie-Hellman exchange (§7.4.7.2). Optional messages are starred (*) above;
in certificate verify, A authenticates herself to B by signing the hash of some
items relevant to the current session. The specification states that all handshake
messages should be hashed, but my proofs suggest that only Nb, B and PMS are
essential.
client finished    A -> B : Crypt (clientK(Na,Nb,M)) Finished
server finished    B -> A : Crypt (serverK(Na,Nb,M)) Finished

Both parties compute the master-secret M from PMS, Na and Nb and compute
Finished as the hash of Sid, M, Na, Pa, A, Nb, Pb, B. According to the spec-
ification (§7.4.9), M should be hashed with all previous handshake messages
using PRF. My formalization hashes message components rather than messages in
order to simplify the inductive definition. It is vulnerable to an attack in which
the spy intercepts certificate verify, downgrading the session so that the client
appears to be unauthenticated.
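The abstract computations can be prototyped directly; in the sketch below (hypothetical Python), SHA-256 merely stands in for the injective PRF and collision-free hash that the model assumes:

import hashlib

def h(*items):
    # Stand-in for a collision-free hash over message tuples.
    return hashlib.sha256(b"|".join(repr(i).encode() for i in items)).hexdigest()

def prf(pms, na, nb):                  # master-secret from PMS and both nonces
    return h("PRF", pms, na, nb)

def finished(sid, m, na, pa, a, nb, pb, b):
    return h("Finished", sid, m, na, pa, a, nb, pb, b)

def session_key(role, na, nb, m):      # the role tag keeps the client and
    return h("sessionK", role, na, nb, m)  # server keys from ever colliding

M = prf("PMS", "Na", "Nb")
client_key = session_key("client", "Na", "Nb", M)
print(finished("Sid", M, "Na", "Pa", "A", "Nb", "Pb", "B")[:16])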
The symmetric key clientK(Na,Nb,M) is intended for client encryption, while
serverK(Na,Nb,M) is intended for server encryption; each party decrypts using
the other's key (§6.3). The corresponding MAC secrets are implicit because my
model assumes strong encryption.
Once a party has received the other's finished message and compared it with
her own, she is assured that both sides agree on all critical parameters, including
M and the preferences Pa and Pb. Now she may begin sending confidential data.
The SSL specification [Freier et al. 1996] erroneously states that she can send data
immediately after sending her own finished message, before confirming these pa-
rameters; there she takes a needless risk, since an attacker may have changed the
preferences to request weak encryption. This is the cipher-suite rollback attack,
precisely the one that the finished messages are intended to prevent.
For session resumption, the hello messages are the same. After checking that
the session identifier is recent enough, the parties exchange finished messages and
start sending application data. On paper, then, session resumption does not involve
any new message types. But in the model, four further events are involved. Each
party stores the session parameters after a successful handshake and looks them up
when resuming a session.
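The stored parameters behave like a simple cache keyed by session identifier; a toy model (hypothetical Python, not part of the Isabelle formalization) is:

sessions = {}                          # Sid -> stored master-secret M

def store_session(sid, master_secret):
    sessions[sid] = master_secret

def resume_session(sid, na2, nb2, session_key_fn):
    # Fresh nonces plus the stored M yield fresh session keys with no
    # new certificates or key exchange; None forces a full handshake.
    m = sessions.get(sid)
    if m is None:
        return None
    return session_key_fn(na2, nb2, m)

store_session("Sid", "M")
print(resume_session("Sid", "Na'", "Nb'", lambda na, nb, m: (na, nb, m)))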
3. PROVING PROTOCOLS USING ISABELLE
Isabelle [Paulson 1994] is an interactive theorem prover supporting several for-
malisms, one of which is higher-order logic (HOL). Protocols can be modelled in
Isabelle/HOL as inductive definitions. Isabelle's simplifier and classical reasoner
automate large parts of the proofs. A security protocol is modelled as the set of
traces that could arise when a population of agents run it. Among the agents is a
spy who controls some subset of them as well as the network itself. The population
is infinite, and the number of interleaved sessions is unlimited. This section
summarizes the approach, described in detail elsewhere [Paulson 1998].
3.1 Messages
Messages are composed of agent names, nonces, keys, etc.:
Agent A          identity of an agent
Number N         guessable number
Nonce N          non-guessable number
Key K            cryptographic key
Hash X           hash of message X
Crypt K X        encryption of X with key K
{|X1, ..., Xn|}  concatenation of messages
Attributes such as non-guessable concern the spy. The protocol's client random
and server random are modelled using Nonce because they are 28-byte random
values, while session identifiers are modelled using Number because they may
be any strings. TLS sends these items in clear, so whether they are guessable or
not makes little difference to what can be proved. The pre-master-secret must be
modelled as a nonce; we shall prove no security properties by assuming it can be
guessed.
The model assumes strong encryption. Hashing is collision-free, and nobody
can recover a message from its hash. Encrypted messages can neither be read
nor changed without using the corresponding key. The protocol verifier makes
such assumptions not because they are true but because making them true is the
responsibility of the cryptographer. Moreover, reasoning about a cryptosystem
such as DES down to the bit level is infeasible. However, this is a weakness of
the method: certain combinations of protocols and encryption methods can be
vulnerable [Ryan and Schneider 1998].
Three operators are used to express security properties. Each maps a set H of
messages to another such set.
|parts H is the set of message components potentially recoverable from H (assum-
ing all ciphers could be broken).
|analz H is the set of message components recoverable from H by means of decryption
using keys available (recursively) in analz H .
|synth H is the set of messages that could be expressed, starting from H and
guessable items, using hashing, encryption and concatenation.
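These operators are easy to prototype. The toy model below (hypothetical Python, not the Isabelle theory) computes analz as a fixpoint that splits concatenations and decrypts with whatever keys have already been recovered:

def analz(h):
    # Messages are tuples: ("Key", k), ("Nonce", n), ("Pair", x, y),
    # ("Crypt", k, body). Close h under projection and decryption.
    closure, changed = set(h), True
    while changed:
        changed = False
        for msg in list(closure):
            new = []
            if msg[0] == "Pair":
                new = [msg[1], msg[2]]                 # both halves exposed
            elif msg[0] == "Crypt" and ("Key", msg[1]) in closure:
                new = [msg[2]]                         # decrypt with known key
            for m in new:
                if m not in closure:
                    closure.add(m)
                    changed = True
    return closure

secret = ("Crypt", "Kb", ("Nonce", "PMS"))
print(("Nonce", "PMS") in analz({secret}))                  # False: no key
print(("Nonce", "PMS") in analz({secret, ("Key", "Kb")}))   # True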
3.2 Traces
A trace is a list of events such as Says A B X, meaning 'A sends message X to B,' or
Notes A X, meaning 'A stores X internally.' Each trace is built in reverse order by
prefixing ('consing') events to the front of the list, where # is the `cons' operator.
The set bad comprises those agents who are under the spy's control.
The function spies yields the set of messages the spy can see in a trace: all
messages sent across the network and the internal notes and private keys of the bad
agents.
spies ((Says A B X) # evs) = insert X (spies evs)
spies ((Notes A X) # evs) = if A ∈ bad then insert X (spies evs)
                            else spies evs
The set used evs includes the parts of all messages in the trace, whether they are
visible to other agents or not. Now Na ∉ used evs expresses that Na is fresh with
respect to the trace evs.
used ((Says A B X) # evs) = parts{X} ∪ used evs
used ((Notes A X) # evs) = parts{X} ∪ used evs
4. FORMALIZING THE PROTOCOL IN ISABELLE
With the inductive method, each protocol step is translated into a rule of an inductive
definition. A rule's premises describe the conditions under which the rule
may apply, while its conclusion adds new events to the trace. Each rule allows a
protocol step to occur but does not force it to occur, just as real-world machines
crash and messages get intercepted. The inductive definition has further rules to
model intruder actions, etc.
For TLS, the inductive definition comprises fifteen rules, compared with the usual
six or seven for simpler protocols. The computational cost of proving theorems
seems to be only linear in the number of rules, but it can be exponential in the
complexity of a rule, for example if there is multiple encryption. Combining rules
in order to reduce their number is therefore counterproductive.
4.1 Basic Constants
TLS uses both public-key and shared-key encryption. Each agent A has a private
key priK A and a public key pubK A. The operators clientK and serverK create
symmetric keys from a triple of nonces. Modelling the underlying pseudo-random-number
generator causes some complications compared with the treatment of simple
public-key protocols such as Needham-Schroeder [Paulson 1998].
The common properties of clientK and serverK are captured in the function
sessionK, which is assumed to be an injective (collision-free) source of session keys.
In an Isabelle theory file, functions are declared as constants that have a function
type. Axioms about them can be given using a rules section.
datatype role = ClientRole | ServerRole

consts
  sessionK :: "(nat*nat*nat) * role => key"
  clientK, serverK :: "nat*nat*nat => key"

rules
  inj_sessionK    "inj sessionK"
  isSym_sessionK  "isSymKey (sessionK nonces)"
The enumeration type, role, indicates the use of the session key. We ensure that
clientK and serverK have disjoint ranges (no collisions between the two) by defining
them as applications of sessionK with distinct roles:

clientK X == sessionK(X, ClientRole)
serverK X == sessionK(X, ServerRole)
We must also declare the pseudo-random function PRF. In the real protocol, PRF
has an elaborate definition in terms of the hash functions MD5 and SHA-1 (see
tls §5). At the abstract level, we simply assume PRF to be injective.
consts
PRF :: "nat*nat*nat => nat"
tls :: "event list set"
rules
inj_PRF "inj PRF"
We have also declared the constant tls to be the set of possible traces in a system
running the protocol. The inductive definition of tls specifies it to be the least set
of traces that is closed under the rules supplied below. A trace belongs to tls only
if it can be generated by finitely many applications of the rules. Induction over tls
amounts to considering every possible way that a trace could have been extended.
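The least-set reading has a simple operational counterpart: every trace arises
from the empty trace by finitely many rule applications. The Python miniature
below (with two hypothetical toy rules standing in for the actual fifteen)
enumerates traces up to a bound, which is essentially the space that induction
over tls quantifies over:

def rule_client_hello(evs):
    na = len(evs)              # toy freshness stand-in for Nonce NA ∉ used evs
    return [[('Says', 'A', 'B', ('Nonce', na))] + evs]

def rule_fake(evs):
    # The spy replays any message previously sent on the network.
    return [[('Says', 'Spy', 'B', ev[3])] + evs
            for ev in evs if ev[0] == 'Says']

RULES = [rule_client_hello, rule_fake]

def traces(depth):
    """All traces generated by at most `depth` rule applications."""
    level, seen = [[]], {()}
    for _ in range(depth):
        nxt = []
        for evs in level:
            for rule in RULES:
                for ext in rule(evs):
                    key = tuple(ext)
                    if key not in seen:
                        seen.add(key)
                        nxt.append(ext)
        level = nxt
        yield from level

for t in traces(2):
    print(t)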
4.2 The Spy
Figure 2 presents the first three rules, two of which are standard. Rule Nil allows
the empty trace. Rule Fake says that the spy may invent messages using past traffic
and send them to any other agent. A third rule, SpyKeys, augments Fake by letting
the spy use the TLS-specific functions sessionK and PRF. In conjunction with the
spy's other powers, it allows him to apply sessionK and PRF to any three nonces
previously available to him. It does not let him invert these functions, which we
assume to be one-way. We could replace SpyKeys by defining a TLS version of the
function synth; however, we should then have to rework the underlying theory of
messages, which is common to all protocols.
Nil   [] ∈ tls

Fake  [| evsF ∈ tls; X ∈ synth (analz (spies evsF)) |]
      ⟹ Says Spy B X # evsF ∈ tls

SpyKeys
      [| evsSK ∈ tls;
         {|Nonce NA, Nonce NB, Nonce M|} ∈ analz (spies evsSK) |]
      ⟹ Notes Spy {| Nonce (PRF(M,NA,NB)),
                      Key (sessionK((NA,NB,M),role)) |} # evsSK ∈ tls

Fig. 2. Specifying TLS: Basic Rules
4.3 Hello Messages
Figure 3 presents three rules for the hello messages. Client hello lets any agent A
send the nonce Na, session identifier Sid and preferences Pa to any other agent, B.
Server hello is modelled similarly. Its precondition is that B has received a
suitable instance of Client hello.
ClientHello
  [| evsCH ∈ tls; Nonce NA ∉ used evsCH; NA ∉ range PRF |]
  ⟹ Says A B {|Agent A, Nonce NA, Number SID, Number PA|}
     # evsCH ∈ tls

ServerHello
  [| evsSH ∈ tls; Nonce NB ∉ used evsSH; NB ∉ range PRF;
     Says A' B {|Agent A, Nonce NA, Number SID, Number PA|}
       ∈ set evsSH |]
  ⟹ Says B A {|Nonce NB, Number SID, Number PB|} # evsSH ∈ tls

Certificate
  evsC ∈ tls ⟹ Says B A (certificate B (pubK B)) # evsC ∈ tls

Fig. 3. Specifying TLS: Hello Messages
In Client hello, the assumptions Na ∉ used evsCH and Na ∉ range PRF state
that Na is fresh and distinct from all possible master-secrets. The latter assumption
precludes the possibility that A might choose a nonce identical to some master-secret.
(The standard function used does not cope with master-secrets because
they never appear in traffic.) Both assumptions are reasonable because a 28-byte
random string is highly unlikely to clash with any existing nonce or future master-secret.
Still, the condition seems stronger than necessary. It refers to all conceivable
master-secrets because there is no way of referring to one single future master-secret.
As an alternative, a 'no coincidences' condition might be imposed later in the protocol,
but the form it should take is not obvious; if it is wrong, it might exclude realistic
attacks.
The Certificate rule handles both server certificate and client certificate. It
is more liberal than real TLS, for any agent may send his public-key certificate to
any other agent. A certificate is represented by an (agent, key) pair signed by the
authentication server. Freshness of certificates and other details are not modelled.
constdefs certificate :: "[agent,key] => msg"
  "certificate A KA == Crypt (priK Server) {|Agent A, Key KA|}"
4.4 Client Messages
The next two rules concern client key exchange and certificate verify (Fig. 4).
Rule ClientKeyExch chooses a PMS that is fresh and differs from all master-secrets,
like the nonces in the hello messages. It requires server certificate to have been
received. No agent is allowed to know the true sender of a message, so ClientKeyExch
might deliver the PMS to the wrong agent. Similarly, CertVerify might use
the Nb value from the wrong instance of server hello. Security is not compromised
because the run will fail in the finished messages.
ClientKeyExch
  [| evsCX ∈ tls; Nonce PMS ∉ used evsCX; PMS ∉ range PRF;
     Says B' A (certificate B KB) ∈ set evsCX |]
  ⟹ Says A B (Crypt KB (Nonce PMS))
     # Notes A {|Agent B, Nonce PMS|} # evsCX ∈ tls

CertVerify
  [| evsCV ∈ tls;
     Says B' A {|Nonce NB, Number SID, Number PB|} ∈ set evsCV;
     Notes A {|Agent B, Nonce PMS|} ∈ set evsCV |]
  ⟹ Says A B (Crypt (priK A)
       (Hash{|Nonce NB, Agent B, Nonce PMS|})) # evsCV ∈ tls

Fig. 4. Client key exchange and certificate verify
ClientKeyExch not only sends the encrypted PMS to B but also stores it internally
using the event Notes A {|Agent B, Nonce PMS|}. Other rules model A's referring to this
note. For instance, CertVerify states that if A chose PMS for B and has received
a server hello message, then she may send certificate verify.
In my initial work on TLS, I modelled A's knowledge by referring to the event
of her sending {|PMS|}_Kb to B. However, this approach did not correctly model
the sender's knowledge: the spy can intercept and send the ciphertext {|PMS|}_Kb
without knowing PMS. (The approach does work for shared-key encryption. A
ciphertext such as {|PMS|}_Kab identifies the agents who know the plaintext, namely
A and B.) I discovered this anomaly when a proof failed. The final proof state
indicated that the spy could gain the ability to send client finished merely by
replaying A's message {|PMS|}_Kb.
Anomalies like this one can creep into any formalization. The worst are those
that make a theorem hold vacuously, for example by mis-stating a precondition.
There is no remedy but constant vigilance, noticing when a result is too good to
be true or is proved too easily. We must also check that the assumptions built into
the model, such as strong encryption, reasonably match the protocol's operating
environment.
4.5 Finished Messages
Next come the finished messages (Fig. 5). ClientFinished states that if A has sent
client hello and has received a plausible instance of server hello and has chosen
a PMS for B, then she can calculate the master-secret and send a finished message
using her client write key. ServerFinished is analogous and may occur if B has
received a client hello, sent a server hello, and received a client key exchange
message.
ClientFinished
  [| evsCF ∈ tls; M = PRF(PMS,NA,NB);
     Says A B {|Agent A, Nonce NA, Number SID, Number PA|} ∈ set evsCF;
     Says B' A {|Nonce NB, Number SID, Number PB|} ∈ set evsCF;
     Notes A {|Agent B, Nonce PMS|} ∈ set evsCF |]
  ⟹ Says A B (Crypt (clientK(NA,NB,M))
       (Hash{|Number SID, Nonce M,
              Nonce NA, Number PA, Agent A,
              Nonce NB, Number PB, Agent B|})) # evsCF ∈ tls

ServerFinished
  [| evsSF ∈ tls; M = PRF(PMS,NA,NB);
     Says A' B {|Agent A, Nonce NA, Number SID, Number PA|} ∈ set evsSF;
     Says B A {|Nonce NB, Number SID, Number PB|} ∈ set evsSF;
     Says A'' B (Crypt (pubK B) (Nonce PMS)) ∈ set evsSF |]
  ⟹ Says B A (Crypt (serverK(NA,NB,M))
       (Hash{|Number SID, Nonce M,
              Nonce NA, Number PA, Agent A,
              Nonce NB, Number PB, Agent B|})) # evsSF ∈ tls

Fig. 5. Finished messages
4.6 Session Resumption
That covers all the protocol messages, but the specification is not complete. Next
come two rules to model agents' confirmation of a session (Fig. 6). Each agent, after
sending its finished message and receiving a matching finished message apparently
from its peer, records the session parameters to allow resumption. Next come
two rules for session resumption (Fig. 7). Like ClientFinished and ServerFinished,
they refer to two previous hello messages. But instead of calculating the master-secret
from a PMS just sent, they use the master-secret stored by ClientAccepts
or ServerAccepts with the same session identifier. They calculate new session keys
using the fresh nonces.
The references to PMS in the Accepts rules appear to contradict the protocol
specification (tls §8.1): 'the pre-master-secret should be deleted from memory once
ClientAccepts
  [| evsCA ∈ tls; Notes A {|Agent B, Nonce PMS|} ∈ set evsCA;
     M = PRF(PMS,NA,NB);
     X = Hash{|Number SID, Nonce M,
               Nonce NA, Number PA, Agent A,
               Nonce NB, Number PB, Agent B|};
     Says A B (Crypt (clientK(NA,NB,M)) X) ∈ set evsCA;
     Says B' A (Crypt (serverK(NA,NB,M)) X) ∈ set evsCA |]
  ⟹ Notes A {|Number SID, Agent A, Agent B, Nonce M|} # evsCA ∈ tls

ServerAccepts
  [| evsSA ∈ tls;
     Says A'' B (Crypt (pubK B) (Nonce PMS)) ∈ set evsSA;
     M = PRF(PMS,NA,NB);
     X = Hash{|Number SID, Nonce M,
               Nonce NA, Number PA, Agent A,
               Nonce NB, Number PB, Agent B|};
     Says B A (Crypt (serverK(NA,NB,M)) X) ∈ set evsSA;
     Says A' B (Crypt (clientK(NA,NB,M)) X) ∈ set evsSA |]
  ⟹ Notes B {|Number SID, Agent A, Agent B, Nonce M|} # evsSA ∈ tls

Fig. 6. Agent acceptance events
the master-secret has been computed.' The purpose of those references is to restrict
the rules to agents who actually know the secrets, as opposed to a spy who merely
has replayed messages (recall the comment at the end of §4.4). They can probably
be replaced by references to the master-secret, which the agents keep in memory.
We would have to add further events to the inductive definition. Complicating the
model in this way brings no benefits: the loss of either secret is equally catastrophic.
Four further rules (omitted here) model agents' confirmation of a session and a
subsequent session resumption.
4.7 Security Breaches
The final rule, Oops, models security breaches. Any session key, if used, may end
up in the hands of the spy. Session resumption turns out to be safe even if the spy
has obtained session keys from earlier sessions.
Oops
  [| evsO ∈ tls; A ≠ Spy;
     Says A B (Crypt (sessionK((NA,NB,M),role)) X) ∈ set evsO |]
  ⟹ Says A Spy (Key (sessionK((NA,NB,M),role))) # evsO ∈ tls
Other security breaches could be modelled. The pre-master-secret might be lost
to a cryptanalytic attack against the client key exchange message, and Wagner
and Schneier [1996, §4.7] suggest a strategy for discovering the master-secret. Loss
of the PMS would compromise the entire session; it is hard to see what security
goal could still be proved (in contrast, loss of a session key compromises that key
alone). Recall that the spy already controls the network and an unknown number
of agents.
The protocol, as modelled, is too liberal and is highly nondeterministic. As in
TLS itself, some messages are optional (client certificate, certificate verify).
ClientResume
  [| evsCR ∈ tls;
     Says A B {|Agent A, Nonce NA, Number SID, Number PA|} ∈ set evsCR;
     Says B' A {|Nonce NB, Number SID, Number PB|} ∈ set evsCR;
     Notes A {|Number SID, Agent A, Agent B, Nonce M|} ∈ set evsCR |]
  ⟹ Says A B (Crypt (clientK(NA,NB,M))
       (Hash{|Number SID, Nonce M,
              Nonce NA, Number PA, Agent A,
              Nonce NB, Number PB, Agent B|})) # evsCR ∈ tls

ServerResume
  [| evsSR ∈ tls;
     Says A' B {|Agent A, Nonce NA, Number SID, Number PA|} ∈ set evsSR;
     Says B A {|Nonce NB, Number SID, Number PB|} ∈ set evsSR;
     Notes B {|Number SID, Agent A, Agent B, Nonce M|} ∈ set evsSR |]
  ⟹ Says B A (Crypt (serverK(NA,NB,M))
       (Hash{|Number SID, Nonce M,
              Nonce NA, Number PA, Agent A,
              Nonce NB, Number PB, Agent B|})) # evsSR ∈ tls

Fig. 7. Agent resumption events
Either client or server may be the first to commit to a session or to send a finished
message. One party might attempt session resumption while the other runs the full
protocol. Nothing in the rules above stops anyone from responding to any message
repeatedly. Anybody can send a certificate to anyone else at any time.
Such nondeterminism is unacceptable in a real protocol, but it simplifies the
model. Constraining a rule to follow some other rule or to apply at most once
requires additional preconditions. A simpler model generally allows simpler proofs.
Safety theorems proved under a permissive regime will continue to hold under a
strict one.
5. PROPERTIES PROVED OF TLS
One difficulty in protocol verification is knowing what to prove. Protocol goals are
usually stated informally. The TLS memo states 'three basic properties':
(1) 'The peer's identity can be authenticated using ... public key cryptography'
(2) 'The negotiated secret is unavailable to eavesdroppers, and for any authenticated
connection the secret cannot be obtained, even by an attacker who can
place himself in the middle of the connection'
(3) 'no attacker can modify the negotiation communication without being detected
by the parties'
Authentication can mean many things [Gollmann 1996]; it is a pity that the
memo does not go into more detail. I have taken 'authenticated connection' to
mean one in which both parties use their private keys. My model allows A to
be unauthenticated, since certificate verify is optional. However, B must be
authenticated: the model does not support Diffie-Hellman, so Kb⁻¹ must be used
to decrypt client key exchange. Against an active intruder, an unauthenticated
connection is vulnerable to the usual man-in-the-middle attack. Since the model
does not support unauthenticated connections, I cannot investigate whether they
are secure against passive eavesdroppers.
Some of the results discussed below relate to authentication. A pair of honest
agents can establish the master-secret securely and use it to generate uncompromised
session keys. Session resumption is secure even if previous session keys from
that session have been compromised.
5.1 Basic Lemmas
In the inductive method, results are of three sorts: possibility properties, regularity
lemmas and secrecy theorems. Possibility properties merely exercise all the rules
to check that the model protocol can run. For a simple protocol, one possibility
property suffices to show that message formats are compatible. For TLS, I proved
four properties to check various paths through the main protocol, the client verify
message, and session resumption.
Regularity lemmas assert properties that hold of all traffic. For example, no
protocol step compromises a private key. From our specification of TLS, it is easy
to prove that all certificates are valid. (This property is overly strong, but adding
false certificates seems pointless: B might be under the spy's control anyway.) If
certificate(B, K) appears in traffic, then K really is B's public key:

[| certificate B K ∈ parts(spies evs); evs ∈ tls |] ⟹ pubK B = K
The set parts(spies evs) includes the components of all messages that have been sent;
in the inductive method, regularity lemmas often mention this set. Sometimes the
lemmas merely say that events of a particular form never occur.
Many regularity lemmas are technical. Here are two typical ones. If a master-secret
has appeared in traffic, then so has the underlying pre-master-secret. Only
the spy might send such a message.
[| Nonce (PRF (PMS,NA,NB)) ∈ parts(spies evs); evs ∈ tls |]
⟹ Nonce PMS ∈ parts(spies evs)
If a pre-master-secret is fresh, then no session key derived from it can either have
been transmitted or used to encrypt.¹
[| Nonce PMS ∉ parts(spies evs);
   K = sessionK((NA,NB,PRF(PMS,NA,NB)),role); evs ∈ tls |]
⟹ Key K ∉ parts(spies evs) &
   (∀Y. Crypt K Y ∉ parts(spies evs))
Client authentication, one of the protocol's goals, is easily proved. If certificate
verify has been sent, apparently by A, then it really has been sent by A provided
A is uncompromised (not controlled by the spy). Moreover, A has chosen the
pre-master-secret that is hashed in certificate verify.
[| Crypt (priK A) (Hash{|nb, Agent B, pms|}) ∈ parts(spies evs);
   certificate A KA ∈ parts(spies evs);
   A ∉ bad; evs ∈ tls |]
⟹ Says A B (Crypt (priK A) (Hash{|nb, Agent B, pms|})) ∈ set evs
¹The two properties must be proved in mutual induction because of interactions between the Fake
and Oops rules.
5.2 Secrecy Goals
Other goals of the protocol relate to secrecy: certain items are available to some
agents but not to others. They are usually the hardest properties to establish.
With the inductive method, they seem always to require, as a lemma, some form
of session key compromise theorem. This theorem imposes limits on the message
components that can become compromised by the loss of a session key. Typically
we require that these components contain no session keys, but for TLS, they must
contain no nonces. Nonces are of critical importance because one of them is the
pre-master-secret.
The theorem seems obvious. No honest agent encrypts nonces using session keys,
and the spy can only send nonces that have already been compromised. However,
its proof takes over seven seconds to run. Like other secrecy proofs, it involves a
large, though automatic, case analysis.
evs ∈ tls ⟹
  (Nonce N ∈ analz (insert (Key (sessionK z)) (spies evs))) =
  (Nonce N ∈ analz (spies evs))
Note that insert x A denotes {x} ∪ A. The set analz(spies evs) includes all message
components available to the spy, and likewise analz({K} ∪ spies evs) includes all
message components that the spy could get with the help of key K. The theorem
states that session keys do not help the spy to learn new nonces.
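The flavour of the theorem can be checked in a toy model. In the self-contained
Python sketch below (the message encodings are our assumption, and analz is the
simplified closure from the sketch in Section 3.1), adding a session key to a set
of spied messages reveals no new nonces, because honest agents only ever place
hashes, not bare nonces, under session keys:

def analz(H):
    result, changed = set(H), True
    while changed:
        changed = False
        for m in list(result):
            new = []
            if m[0] == 'MPair':
                new = [m[1], m[2]]
            elif m[0] == 'Crypt' and ('Key', m[1]) in result:
                new = [m[2]]              # symmetric keys, for simplicity
            for x in new:
                if x not in result:
                    result.add(x); changed = True
    return result

# Spied traffic: a finished message under a session key, and the
# pre-master-secret encrypted with B's public key.
H = {('Crypt', 'serverK1', ('Hash', ('Nonce', 'M'))),
     ('Crypt', 'pubKB', ('Nonce', 'PMS'))}

extra = analz(H | {('Key', 'serverK1')}) - analz(H)
assert not any(m[0] == 'Nonce' for m in extra)   # no new nonces leak

Decrypting the finished message yields only a hash, which cannot be opened, so
the spy's stock of nonces is unchanged, exactly as the theorem asserts.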
Other secrecy proofs follow easily from the session key compromise theorem, using
induction and simplification. Provided A and B are honest, the client's session key
will be secure unless A herself gives it to the spy, using Oops.
[| Notes A {|Agent B, Nonce PMS|} ∈ set evs;
   Says A Spy (Key (clientK(NA,NB,PRF(PMS,NA,NB)))) ∉ set evs;
   A ∉ bad; B ∉ bad; evs ∈ tls |]
⟹ Key (clientK(NA,NB,PRF(PMS,NA,NB))) ∉ parts(spies evs)
An analogous theorem holds for the server's session key. However, the server cannot
check the Notes assumption; see §5.3.2.
[| Notes A {|Agent B, Nonce PMS|} ∈ set evs;
   Says B Spy (Key (serverK(NA,NB,PRF(PMS,NA,NB)))) ∉ set evs;
   A ∉ bad; B ∉ bad; evs ∈ tls |]
⟹ Key (serverK(NA,NB,PRF(PMS,NA,NB))) ∉ parts(spies evs)
If A sends the client key exchange message to B, and both agents are uncompromised,
then the pre-master-secret and master-secret will stay secret.

[| Notes A {|Agent B, Nonce PMS|} ∈ set evs;
   A ∉ bad; B ∉ bad; evs ∈ tls |]
⟹ Nonce PMS ∉ analz(spies evs)

[| Notes A {|Agent B, Nonce PMS|} ∈ set evs;
   A ∉ bad; B ∉ bad; evs ∈ tls |]
⟹ Nonce (PRF(PMS,NA,NB)) ∉ analz(spies evs)
5.3 Finished Messages
Other important protocol goals concern authenticity of the finished message. If
each party can know that the finished message just received indeed came from
the expected agent, then they can compare the message components to confirm
that no tampering has occurred. These components include the cryptographic
preferences, which an intruder might like to downgrade. Naturally, the guarantees
are conditional on both agents' being uncompromised.
5.3.1 Client's guarantee. The client's guarantee has several preconditions. The
client, A, has chosen a pre-master-secret PMS for B. The traffic contains a finished
message encrypted with a server write key derived from PMS. The server, B,
has not given that session key to the spy (via Oops). The guarantee then states
that B himself has sent that message, and to A.
[| X = Crypt (serverK(Na,Nb,M))
       (Hash{|Number SID, Nonce M,
              Nonce Na, Number PA, Agent A,
              Nonce Nb, Number PB, Agent B|});
   M = PRF(PMS,Na,Nb); X ∈ parts(spies evs);
   Notes A {|Agent B, Nonce PMS|} ∈ set evs;
   Says B Spy (Key(serverK(Na,Nb,M))) ∉ set evs;
   A ∉ bad; B ∉ bad; evs ∈ tls |]
⟹ Says B A X ∈ set evs
One of the preconditions may seem to be too liberal. The guarantee applies to any
occurrence of the finished message in traffic, but it is needed only when A has
received that message. The form shown, expressed using parts(spies evs), streamlines
the proof; in particular, it copes with the spy's replaying a finished message
concatenated with other material. It is well known that proof by induction can
require generalizing the theorem statement.
5.3.2 Server's guarantee. The server's guarantee is slightly different. If any message
has been encrypted with a client write key derived from a given PMS (which
we assume to have come from A), and if A has not given that session key to the
spy, then A herself sent that message, and to B.
[| M = PRF(PMS,Na,Nb);
   Crypt (clientK(Na,Nb,M)) Y ∈ parts(spies evs);
   Notes A {|Agent B, Nonce PMS|} ∈ set evs;
   Says A Spy (Key(clientK(Na,Nb,M))) ∉ set evs;
   A ∉ bad; B ∉ bad; evs ∈ tls |]
⟹ Says A B (Crypt (clientK(Na,Nb,M)) Y) ∈ set evs
The assumption (involving Notes) that A chose the PMS is essential. If the client
has not authenticated herself, then B knows nothing about her true identity and
must trust that she is indeed A. By sending certificate verify, the client can
discharge the Notes assumption:
[| Crypt KA⁻¹ (Hash{|nb, Agent B, Nonce PMS|}) ∈ parts(spies evs);
   certificate A KA ∈ parts(spies evs);
   A ∉ bad; evs ∈ tls |]
⟹ Notes A {|Agent B, Nonce PMS|} ∈ set evs
B's guarantee does not even require his inspecting the finished message. The very
use of clientK(Na,Nb,M) is proof that the communication is from A to B. If we
consider the analogous property for A, we find that using serverK(Na,Nb,M) only
guarantees that the sender is B; in the absence of certificate verify, B has no
evidence that the PMS came from A. If he sends server finished to somebody
else then the session will fail, so there is no security breach.
Still, changing client key exchange to include A's identity, say as
Crypt (pubK B) {|Agent A, Nonce PMS|},
would slightly strengthen the protocol and simplify the analysis. At present, the
proof scripts include theorems for A's association of PMS with B, and weaker theorems
for B's knowledge of PMS. With the suggested change, the weaker theorems
could probably be discarded.
The guarantees for finished messages apply to session resumption as well as to
full handshakes. The inductive proofs cover all the rules that make up the definition
of the constant tls, including those that model resumption.
5.4 Security Breaches
The Oops rule makes the model much more realistic. It allows session keys to be
lost, in order to determine whether the protocol is robust: one security breach should not
lead to a cascade of others. Sometimes a theorem holds only if certain Oops events
are excluded, but Oops conditions should be weak. For the finished guarantees,
the conditions they impose on Oops events are as weak as could be hoped for: that
the very session key in question has not been lost by the only agent expected to
use that key for encryption.
6. RELATED WORK
Wagner and Schneier [1996] analyze SSL 3.0 in detail. Much of their discussion
concerns cryptanalytic attacks. Attempting repeated session resumptions causes
the hashing of large amounts of known plaintext with the master-secret, which
could lead to a way of revealing it (§4.7). They also report an attack against
the Diffie-Hellman key-exchange messages, which my model omits (§4.4). Another
attack involves deleting the change cipher spec message that (in a draft version of
SSL 3.0) may optionally be sent before the finished message. TLS makes change
cipher spec mandatory, and my model regards it as implicit in the finished
exchange.
Wagner and Schneier's analysis appears not to use any formal tools. Their form
of scrutiny, particularly concerning attacks against the underlying cryptosystems,
will remain an essential complement to proving protocols at the abstract level.
In his PhD thesis, Dietrich [1997] analyses SSL 3.0 using the belief logic NCP
(Non-monotonic Cryptographic Protocols). NCP allows beliefs to be deleted; in the
case of SSL, a session identifier is forgotten if the session fails. (In my formalization,
session identifiers are not recorded until the initial session reaches a successful
exchange of finished messages. Once recorded, they persist forever.) Recall that
SSL allows both authenticated and unauthenticated sessions; Dietrich considers the
latter and shows them to be secure against a passive eavesdropper. Although NCP
is a formal logic, Dietrich appears to have generated his lengthy derivations by
hand.
Mitchell, Shmatikov, and Stern [1997] apply model checking to a number of simple
protocols derived from SSL 3.0. Most of the protocols are badly flawed (no nonces,
for example) and the model checker finds many attacks. The final protocol still
omits much of the detail of TLS, such as the distinction between the pre-master-secret
and the other secrets computed from it. An eight-hour model-checking run
found no attacks against the protocol in a system comprising two clients and one
server.
7. CONCLUSIONS
The inductive method has many advantages. Its semantic framework, based on
the actions agents can perform, has few of the peculiarities of belief logics. Proofs
impose no limits on the number of simultaneous or resumed sessions. Isabelle's
automatic tools allow the proofs to be generated with a moderate effort, and they
run fast. The full TLS proof script runs in 150 seconds on a 300 MHz Pentium.
I obtained the abstract message exchange given in §2 by reverse engineering the
TLS specification. This process took about two weeks, one-third of the time spent
on this verification. SSL must have originated in such a message exchange, but I
could not find one in the literature. If security protocols are to be trusted, their
design process must be transparent. The underlying abstract protocol should be
exposed to public scrutiny. The concrete protocol should be presented as a faithful
realization of the abstract one. Designers should distinguish between attacks
against the abstract message exchange and those against the concrete protocol.
All the expected security goals were proved: no attacks were found. This unexciting
outcome might be expected in a protocol already so thoroughly examined.
No unusual lines of reasoning were required, unlike the proofs of the Yahalom protocol
[Paulson 1997] and Kerberos IV [Bella and Paulson 1998]; we may infer that
TLS is well-designed. The proofs did yield some insights into TLS, such as the
possibility of strengthening client key exchange by including A's identity (§5).
The main interest of this work lies in the modelling of TLS, especially its use of
pseudo-random number generators.
The protocol takes the explicitness principle of Abadi and Needham [1996] to
an extreme. In several places, it requires computing the hash of 'all preceding
handshake messages.' There is obviously much redundancy, and the requirement
is ambiguous too; the specification is sprinkled with remarks that certain routine
messages or components should not be hashed. One such message, change cipher
spec, was thereby omitted and later was found to be essential [Wagner and Schneier
1996]. I suggest, then, that hashes should be computed not over everything but over
selected items that the protocol designer requires to be confirmed. An inductive
analysis can help in selecting the critical message components. The TLS security
analysis (tls §F.1.1.2) states that the critical components of the hash in certificate
verify are the server's name and nonce, but my proofs suggest that the pre-master-secret
is also necessary.
Once session keys have been established, the parties have a secure channel upon
which they must run a reliable communication protocol. Abadi tells me that the
TLS application data protocol should also be examined, since this part of SSL once
contained errors. I have considered only the TLS handshake protocol, where session
keys are negotiated. Ideally, the application data protocol should be verified separately,
assuming an unreliable medium rather than an enemy. My proofs assume
that application data does not contain secrets associated with TLS sessions, such
as keys and master-secrets; if it does, then one security breach could lead to many
others.
Previous verification efforts have largely focussed on small protocols of academic
interest. It is now clear that realistic protocols can be analyzed too, almost as a
matter of routine. For protocols intended for critical applications, such an analysis
should be required as part of the certification process.
ACKNOWLEDGMENTS
Martín Abadi introduced me to TLS and identified related work. James Margetson
pointed out simplifications to the model. The referees and Clemens Ballarin made
useful comments.
--R
M. Abadi and R. Needham. Prudent engineering practice for cryptographic protocols. IEEE Transactions on Software Engineering, 22(1):6-15, 1996.
G. Bella and L. C. Paulson. Kerberos version IV: inductive analysis of the secrecy goals. In Computer Security - ESORICS 98, LNCS 1485. Springer, 1998.
T. Dierks and C. Allen. The TLS protocol: version 1.0. RFC 2246, 1999.
S. Dietrich. A Formal Analysis of the Secure Sockets Layer Protocol. PhD thesis, 1997.
A. Freier, P. Karlton, and P. Kocher. The SSL protocol version 3.0. Internet draft, 1996.
D. Gollmann. What do we mean by entity authentication? In IEEE Symposium on Security and Privacy, 1996.
L. C. Paulson. Isabelle: A Generic Theorem Prover. LNCS 828. Springer, 1994.
L. C. Paulson. On two formal analyses of the Yahalom protocol. 1997.
L. C. Paulson. The inductive approach to verifying cryptographic protocols. Journal of Computer Security, 6:85-128, 1998.
P. Ryan and S. Schneider. An attack on a recursive authentication protocol: a cautionary tale. Information Processing Letters, 65(1):7-10, 1998.
D. Wagner and B. Schneier. Analysis of the SSL 3.0 protocol. In 2nd USENIX Workshop on Electronic Commerce, 1996.
| TLS;inductive method;proof tools;isabelle;authentication |
322588 | Constraint cascading style sheets for the Web. | Cascading Style Sheets have been introduced by the W3C as a mechanism for controlling the appearance of HTML documents. In this paper, we demonstrate how constraints provide a powerful unifying formalism for declaratively understanding and specifying style sheets for web documents. With constraints we can naturally and declaratively specify complex behavior such as inheritance of properties and cascading of conflicting style rules. We give a detailed description of a constraint-based style sheet model, CCSS, which is compatible with virtually all of the CSS 2.0 specification. It allows more flexible specification of layout, and also allows the designer to provide multiple layouts that better meet the desires of the user and environmental restrictions. We also describe a prototype extension of the Amaya browser that demonstrates the feasibility of CCSS. |
INTRODUCTION
Since the inception of the Web there has been tension between
the "structuralists" and the "designers." On one hand,
structuralists believe that a Web document should consist
only of the content itself and tags indicating the logical structure
of the document, with the browser free to determine the
document's appearance. On the other hand, designers (understandably)
want to determine the exact appearance of the
document rather than leaving it to the browser.
With W3C's recent championing of style sheets, this debate
has resulted in a compromise. The web document proper
should contain the content and structural tags, together with
a link to one or more style sheets that determine how the
document will be displayed. Thus, there is a clean separation
between document structure and appearance, yet the designer
has considerable control over the final appearance of
the document. W3C has introduced Cascading Style Sheets,
first CSS 1.0 and now CSS 2.0, for use with HTML documents.
Despite the clear benefits of cascading style sheets, there are
a number of problems with the CSS 2.0 standard.

- The designer lacks control over the document's appearance
  in environments different from that of the designer. For
  example, if the document is displayed on a monochrome
  display, if fonts are not available, or if the browser window
  is sized differently, then the document's appearance will
  often be less than satisfactory.
- The CSS 2.0 specification has seemingly ad hoc restrictions
  on layout specification. For example, a document
  element's appearance can often be specified relative to the
  parent of the element, but generally not relative to other
  elements in the document.
- The CSS 2.0 specification is complex and sometimes vague.
  It relies on procedural descriptions to understand the effect
  of complex language features, such as table layout. This
  makes it difficult to understand how features interact.
- Browser support for CSS 2.0 is still limited. We conjecture
  that this is due in part to the complexity of the specification,
  but also because the specification does not suggest a
  unifying implementation mechanism.
We argue that constraint-based layout provides a solution to
all of these issues, because constraints can be used to specify
declaratively the desired layout of a web document. They
allow partial specification of the layout, which can be combined
with other partial specifications in a predictable way.
They also provide a uniform mechanism for understanding
layout and cascading. Finally, constraint solving technology
provides a unifying implementation technique.
We describe a constraint-based extension to CSS 2.0, called
Constraint Cascading Style Sheets (CCSS). The extension
allows the designer to add arbitrary linear arithmetic constraints
to the style sheet to control features such as object
placement, and finite-domain constraints to control features
such as font properties. Constraints may be given a strength,
reflecting their relative importance. They may be used in
style rules in which case rewritings of the constraint are created
for each applicable element. Multiple style sheets are
available for the same media with preconditions on the style
sheets determining which is appropriate for a particular environment
and viewer requirements.
Our main technical contributions are:

- A demonstration that constraints provide a powerful unifying
  formalism for declaratively understanding and specifying
  CSS 2.0. The most challenging aspects were how
  to handle inheritance of properties such as font-size (we
  use read-only variables) and cascading (we use constraint
  hierarchies).
- A detailed description of a constraint-based style sheet
  model, CCSS, which is compatible with virtually all of
  the CSS 2.0 specification. CCSS is a true extension of
  CSS 2.0. It allows more flexible specification of layout,
  and also allows the designer to provide multiple layouts
  that better meet the desires of the viewer and environmental
  restrictions.
- A prototype extension of the Amaya browser that demonstrates
  the feasibility of CCSS. The prototype makes use
  of the sophisticated constraint solving algorithm Cassowary
  [4] and a simple one-way binary acyclic finite-domain
  solver based on BAFSS [12].
BACKGROUND
Cascading style sheets (CSS 1.0 in 1997 and CSS 2.0 in 1998)
were introduced by W3C in association with the HTML
4.0 standard. In this section we review relevant aspects of
CSS 2.0 [6] and HTML 4.0 [9].
CSS 2.0 and HTML 4.0 provide a comprehensive set of
"style" properties for each type of HTML tag. By setting
the value of these properties the document author can specify
how the browser will display that element. Broadly speaking,
properties either specify how to position the element relative
to other elements, e.g. text-indent, margin, or float, or
how to display the element itself, e.g. font-size or color.
Although the author can directly annotate elements in the
document with style properties, the intent is that the author
places this information in a separate style sheet, and the original
document either links to or imports the style sheet. Thus,
the same document may be displayed using different style
sheets and the same style sheet may be used for multiple doc-
uments, easing maintenance of a uniform look for a web site.
A style sheet consists of rules. A rule has a selector that
specifies the document elements to which the rule applies,
<HTML>
<HEAD>
<TITLE>Simple Example</TITLE>
<LINK REL=stylesheet HREF="simple.css" TYPE="text/css">
</HEAD>
<BODY>
<H1 ID=h>Boring Quotes</H1>
<P ID=p> Stop reading this because
<BLOCKQUOTE ID=q1> She said that
<BLOCKQUOTE ID=q2> He said that
</BLOCKQUOTE> </BLOCKQUOTE> </P>
</BODY>
</HTML>
Figure 1: Example HTML Document
and declarations that specify the stylistic effect of the rule.
The declaration is a set of property/value pairs. Values may
be either absolute or relative to the parent element's value.
For instance, the style sheet
H1 { font-size: 13pt }
P { font-size: 11pt }
BLOCKQUOTE { font-size: 90% }
Figure 2: simple.css
has three rules. The first uses the selector H1 to indicate that
it applies to all elements with tag H1 and specifies that they
should be displayed using a 13 pt font. The second rule specifies
that paragraph elements should use an 11 pt font. The
third rule specifies the appearance of text in a BLOCKQUOTE,
specifying that the font-size should be 90% of that of the surrounding
element.
We can use this style sheet to specify the appearance of the
HTML document shown in Figure 1. Notice the link to the
style sheet and that we have included an ID attribute for all
elements since we will refer to them later. 1
Selectors come in three main flavors: type, attribute, and
contextual. These may be combined to give more complex
selectors. We have already seen examples of a type selector
in which the elements are selected by giving the "type," i.e.
name, of their tag. The type "*" matches any tag.
Attribute selectors choose elements based on the values of
two attributes: CLASS and ID. Multiple elements may share
the same CLASS value, while the ID value should be unique.
Selectors may also refer to a number of pseudo-classes, including
:first-child and :link.

¹Marking all elements with ID attributes defeats the modularity and re-use
benefits of CSS; we over-use ID tags here strictly as an aid to discussing
our examples.

[document tree: HTML has children HEAD and BODY; BODY contains H1 #h
and P #p; P contains BLOCKQUOTE #q1, which contains BLOCKQUOTE #q2]

Figure 3: Document tree for the HTML of Figure 1.
Selection based on the CLASS and ID attributes provides considerable
power. Using the tags DIV and SPAN the author can,
respectively, group arbitrary elements into block-level and
in-line elements. By setting the CLASS and ID attribute of
the DIV and SPAN and providing the appropriate style rules,
they can precisely control the appearance of these elements.
Contextual selectors allow the author to take into account
where the element occurs in the document, i.e. its context.
They are based on the document's document tree, which captures
the recursive structure of the tagged elements. A context
selector allows selection based on the element's ancestors
in the document tree.
For instance, the preceding document has the document tree
shown in Figure 3. If we want to ensure the innermost block
quote does not have its size reduced relative to its parent, we
could use
BLOCKQUOTE BLOCKQUOTE { font-size: 100% }
Less generally, we could individually override the font size
for the second BLOCKQUOTE using an ID selector:
#q2 { font-size: 100% }
Many style properties are inherited by default from the el-
ement's parent in the document tree. Generally speaking,
properties that control the appearance of the element itself,
such as font-size, are inherited, while those that control
its positioning are not.
As another example, consider the HTML document shown
in Figure 4. We can use a style sheet to control the width
of the columns in the table. For example, table.css (Figure
5) contains rules specifying that the classes medcol and
thincol have widths 30% and 20% of their parent table respectively.
One of the key features of CSS is that it allows multiple style
sheets for the same document. Thus a document might be
displayed in the context of the author's special style sheet for
that document, a default company style sheet, the viewer's
<HTML>
<HEAD>
<TITLE>Table Example</TITLE>
<LINK REL=stylesheet HREF="table.css" TYPE="text/css">
</HEAD>
<BODY>
<TABLE ID=t>
<COL ID=c1 CLASS=medcol>
<COL ID=c2 CLASS=medcol>
<COL ID=c3 CLASS=thincol>
<TR> <TD>...</TD> <TD>...</TD> <TD>...</TD> </TR>
<TR> <TD COLSPAN=2>...</TD> <TD>...</TD> </TR>
</TABLE>
</BODY>
</HTML>

Figure 4: Example HTML Document

.medcol { width: 30% }
.thincol { width: 20% }

Figure 5: table.css
style sheet and the browser's default style sheet. This is handled
in CSS by cascading the style sheets.
Cascading, inheritance, and multiple style sheet rules matching
the same element may mean that there are conflicts among
the rules as to what value a particular style property for that
element should take. The exact details of which value is chosen
are complex. Within the same style sheet, inheritance is
weakest, and rules with more specific selectors are preferred
to those with less specific selectors. For instance, both of the
rules
BLOCKQUOTE BLOCKQUOTE { font-size: 100% }
#q2 { font-size: 100% }
are more specific than
BLOCKQUOTE { font-size: 90% }
Between style sheets, the values set by the designer are preferred
to those of the viewer and browser, and for otherwise
equal conflicting rules, those in a style sheet that is imported
or linked first have priority over those subsequently imported
or linked. However, the style sheet author may also annotate
rules with the strength !important, which will override this
behavior. In this case, the viewer's style rule has precedence
over the designer's style rule.
Despite its power, CSS 2.0 still has a number of limitations.
One limitation is that a style property may only be relative to
the element's parent, not to other elements in the document.
This can result in clumsy specifications, and makes some reasonable
layout constraints impossible to express. For exam-
ple, it is not possible to require that all tables in a document
have the same width, and that this should be the smallest
width that allows all tables to have reasonable layout. With
CSS 2.0, one can only give the tables the same fixed size
or the same fixed percentage width of their parent element.
Similarly, it is not possible to specify that two columns in a
table have the same width, and that this should be the smallest
width that allows both columns to have reasonable layout.
The other main limitation is that it is difficult for the designer
to write style sheets that degrade gracefully in the presence
of unexpected browser and viewer limitations and desires.
For instance, the author has little control over what happens
if the desired fonts sizes are not available. Consider the style
sheet simple.css again. Imagine that only 10 pt, 12 pt and
14 pt fonts are available. The browser is free to use 12 pt
and 10 pt for headings and paragraphs respectively, or 14 pt
and 12 pt or even 12 pt and 12 pt. Part of the problem is that
rules always give definite values to style properties. When
different style sheets are combined only one rule can be used
to compute the value. Thus a rule is either on or off, leading
to discontinuous behavior when style sheets from the author
and viewer are combined. For instance, a sight-impaired
viewer might specify that all font sizes must be greater than
11 pt. However, if the designer has chosen sufficiently large
fonts, the viewer wishes to use the designer's size. This is
impossible in CSS 2.0.
CONSTRAINT CASCADING STYLE SHEETS
Our solution to these problems is to use constraints for specifying
layout. A constraint is simply a statement of a relation
(in the mathematical sense) that we would like to have
hold. Constraints have been used for many years in interactive
graphical applications for such things as specifying
window and page layout. They allow the designer to specify
what are the desired properties of the system, rather than how
these properties are to be maintained. The major advantage
of using constraints is that they allow partial specification of
the layout, which can be combined with other partial specifications
in a predictable way. In this section, we describe
our constraint-based extension to CSS 2.0, called Constraint
Cascading Style Sheets (CCSS).
One complication is that constraints may conflict. To allow
for this we use the constraint hierarchy formalism [3]. A
constraint hierarchy consists of a collection of constraints,
each labeled with a strength. There is a distinguished strength
labelled REQUIRED: such constraints must be satisfied. The
other strengths denote preferences. There can be an arbitrary
number of such strengths, and constraints with stronger
strengths are satisfied in preference to ones with weaker
strengths. Given a system of constraints, the constraint solver
must find a solution to the variables that satisfies the required
constraints exactly, and that satisfies the preferred constraints
as well as possible, giving priority to those more strongly
preferred. The choice of solution depends on the comparator
function used to measure how well a constraint is satisfied.
In our examples we shall assume weighted-sum-better. By
using an appropriate set of strength labels we can model the
behavior of CSS 2.0.
A Constraint View of CSS 2.0
Hierarchical constraints provide a simple, unifying way of
understanding much of the CSS 2.0 specification. This viewpoint
also suggests that constraint solvers provide a natural
implementation technique. Each style property and the
placement of each element in the document can be modeled
by a variable. Constraints on these variables arise from
browser capabilities, default layout behavior arising from the
type of the element, from the document tree structure, and
from the application of style rules. The final appearance of
the document is determined by finding a solution to these
constraints.
The first aspect of CSS 2.0 we consider is the placement of
the document elements (i.e., page layout). This can be modeled
using linear arithmetic constraints. To illustrate this, we
examine table layout-one of the most complex parts of CSS
2.0. The key difficulty in table layout is that it involves information
flowing bottom-up (e.g. from elements to columns)
and top-down (e.g. from table to columns). The CSS 2.0
specification is procedural in nature, detailing how this oc-
curs. By using constraints, we can declaratively specify what
the browser should do, rather than how to do it. Furthermore,
the constraint viewpoint allows a modular specification. For
example, to understand how a complex nested table should
be laid out, we simply collect the constraints for each com-
ponent, and the solution to these is the answer. With a procedural
specification it is much harder to understand the interaction.
Consider the style sheet table.css (Figure 5) and the associated
HTML document (Figure 4). The associated layout
constraints are shown in Figure 6. The notation #id[prop]
refers to the (variable) value of the property prop attached to
document element with ID id. Since we are dealing with a ta-
ble, the system automatically creates a constraint (1) relating
the column widths and table width. 2 Similarly there are automatically
created constraints (2-6) that each column is wide
enough to hold its content, and (7) that the table has minimal
width. Constraints (8) and (9) are generated from the
style sheet. Notice the different constraint strengths: from
weakest to strongest they are WEAK, DESIGNER and RE-
QUIRED. Since REQUIRED is stronger than DESIGNER, the
column will always be big enough to hold its contents. The
WEAK constraint, that the table width be zero, cannot be satisfied completely;
the effect of minimizing its error will be to minimize
the width of the table but not at the expense of any of the
other constraints.

²For simplicity, we ignore margins, borders and padding in this example.

(1) #c1[width] + #c2[width] + #c3[width] = #t[width]   REQUIRED
(2-6) the columns are wide enough to hold their contents   REQUIRED
(7) #t[width] = 0   WEAK
(8) #c1[width] = 0.3 * #t[width]   DESIGNER
(9) #c3[width] = 0.2 * #t[width]   DESIGNER

Figure 6: Example layout constraints

(1-4) domain constraints giving the available font sizes of #h, #p, #q1, #q2   REQUIRED
(5) #h[font-size] = 13pt   DESIGNER
(6) #p[font-size] = 11pt   DESIGNER
(7) #q1[font-size] = 0.9 * #p[font-size]   DESIGNER
(8) #q2[font-size] = 0.9 * #q1[font-size]   DESIGNER

Figure 7: Example finite domain constraints
These constraints provide a declarative specification of what
the browser should do. This approach also suggests an implementation
strategy: to lay out the table, we simply use
a linear arithmetic constraint solver to find a solution to the
constraints. The solver implicitly takes care of the flow of information
in both directions, from the fixed widths of the images
upward, and from the fixed width of the browser frame
downward.
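As a concrete illustration of this strategy, the following sketch feeds the table
constraints to an off-the-shelf linear programming solver. This is only an
approximation of CCSS solving: the prototype described later uses the Cassowary
algorithm, which handles constraint strengths directly, whereas here the DESIGNER
constraints are treated as required, the WEAK width-minimization becomes the
objective, and the minimum content widths are invented for the example.

# Variables: x = [t, c1, c2, c3] (table and column widths, in px).
from scipy.optimize import linprog

content_min = [40.0, 120.0, 30.0]   # assumed minimum content widths

# Equalities: c1 + c2 + c3 - t = 0;  c1 - 0.3 t = 0;  c3 - 0.2 t = 0
A_eq = [[-1.0, 1.0, 1.0, 1.0],
        [-0.3, 1.0, 0.0, 0.0],
        [-0.2, 0.0, 0.0, 1.0]]
b_eq = [0.0, 0.0, 0.0]

bounds = [(0, None)] + [(w, None) for w in content_min]
res = linprog(c=[1.0, 0.0, 0.0, 0.0],        # minimize table width
              A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(dict(zip(['t', 'c1', 'c2', 'c3'], res.x)))

With these numbers the wide middle column forces the table to 240 px, and the
percentage rules then fix the remaining columns, exactly the two-way flow of
information described above.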
Linear arithmetic constraints are not the only type of constraints
implicit in the CSS 2.0 specification. There are also
constraints over properties that can take only a finite number
of different values, including font size, font type, font
weight, and color. Such constraints are called finite domain
constraints and have been widely studied by the constraint
programming community. Typically, they consist of a domain
constraint for each variable giving the set of values the
variable can take (e.g., the set of font sizes available) and
required arithmetic constraints over the variables.
As an example, consider the constraints arising from the document
in Figure 1 and style sheet simple.css (Figure 2).
The corresponding constraints are shown in Figure 7. The
domain constraints (1-4) reflect the browser's available fonts.
The remaining constraints result from the style sheet rules.
Note that the third rule generates two constraints (7) and (8),
one for each block quote element.
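Because every variable ranges over a small finite domain, weighted-sum-better can
be checked by brute force. In this Python sketch the available sizes follow the
earlier 10 pt/12 pt/14 pt example; equal weights suffice because all four style
constraints carry DESIGNER strength:

from itertools import product

FONTS = [10, 12, 14]                  # the browser's available sizes

def error(h, p, q1, q2):
    # summed error of the four DESIGNER constraints of Figure 7
    return (abs(h - 13) + abs(p - 11)
            + abs(q1 - 0.9 * p) + abs(q2 - 0.9 * q1))

best = min(product(FONTS, repeat=4), key=lambda v: error(*v))
print(best)                            # -> (12, 12, 10, 10)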
Both of the preceding examples have carefully avoided one
of the most complex parts of the CSS 2.0 specification: what
to do when multiple rules assign conflicting values to an el-
ement's style property. As discussed earlier, there are two
main aspects to this: cascading several style sheets, and conflicting
rules within the same style sheet.
We can model both aspects by means of hierarchical con-
straints. To do so we need to refine the constraint strengths
we have been using. Apart from REQUIRED, each strength is
a lexicographically-ordered tuple (cs, i, c, t).
The first component in the tuple, cs, is the constraint importance
and captures the strength of the constraint and its
position in the cascade. The constraint importance is one of
WEAK, BROWSER, VIEWER, DESIGNER, DESIGNER-IMPORT-
ANT, VIEWER-IMPORTANT (ordered from weakest to strong-
est). The importance WEAK is used for automatically generated
constraints only. The last three components in the tuple
capture the specificity of the rule which generated the con-
straint: i is the number of ID attributes, c is the number of
CLASS attributes, and t is the number of tag names in the
rule.
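Since these strengths compare lexicographically, they map directly onto tuple
comparison. In the Python fragment below (the numeric encodings of the importance
levels are ours), a single-ID selector outranks a two-tag contextual selector
of equal importance:

IMPORTANCE = {'WEAK': 0, 'BROWSER': 1, 'VIEWER': 2, 'DESIGNER': 3,
              'DESIGNER-IMPORTANT': 4, 'VIEWER-IMPORTANT': 5}

def strength(importance, ids, classes, tags):
    # (cs, i, c, t); Python tuples compare lexicographically
    return (IMPORTANCE[importance], ids, classes, tags)

# "#q2" (one ID) beats "BLOCKQUOTE BLOCKQUOTE" (two tags):
assert strength('DESIGNER', 1, 0, 0) > strength('DESIGNER', 0, 0, 2)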
As an example, consider the constraints arising from the document
in Figure 1 with the style sheet
H1 { font-size: 13pt }
P { font-size: 11pt }
BLOCKQUOTE { font-size: 90% }
#q2 { font-size: 100% }

#h[font-size] = 13pt                    strength (DESIGNER,0,0,1)
#p[font-size] = 11pt                    strength (DESIGNER,0,0,1)
#q1[font-size] = 0.9 * #p[font-size]    strength (DESIGNER,0,0,1)
#q2[font-size] = 0.9 * #q1[font-size]   strength (DESIGNER,0,0,1)
#q2[font-size] = #q1[font-size]         strength (DESIGNER,1,0,0)

Figure 8: Example of overlapping rules
The constraints and their strengths for those directly generated
from the style sheet rules are shown in Figure 8. Because
of its greater weight, the last constraint listed will dominate
the second to last one, giving rise to the expected behavior.
The remaining issue we must deal with is inheritance of style
properties such as font size, and the expression of this inheritance
within our constraint formalism. For each inherited
property, we need to automatically create an appropriate constraint
between each element and its parent. At first glance,
these should simply be WEAK equality constraints. Unfor-
tunately, this does not model the inherent directionality of
inheritance.
For instance, imagine displaying the document in Figure 1
with the style sheet
BLOCKQUOTE { font-size: 8pt }

where the default font size is 12 pt. The scheme outlined
above gives rise to the constraints shown in Figure 9.

BODY[font-size] = 12pt              strength BROWSER
#q1[font-size] = 8pt                strength (DESIGNER,0,0,1)
#q2[font-size] = 8pt                strength (DESIGNER,0,0,1)
#h[font-size] = BODY[font-size]     strength WEAK
#p[font-size] = BODY[font-size]     strength WEAK
#q1[font-size] = #p[font-size]      strength WEAK
#q2[font-size] = #q1[font-size]     strength WEAK

Figure 9: Example of inheritance rules

One
possible weighted-sum-better solution to these constraints is
that the heading is in 12 pt and the rest of the document
(including the paragraph) is in 8 pt. The problem is that
the paragraph element #p has "inherited" its value from its
child, the BLOCKQUOTE element #q1.
To capture the directionality of inheritance we use read-only
annotations [3] on variables. The intuitive understanding of
a read-only variable v in a constraint c is that c should not
be considered until the constraints involving v as an ordinary
variable (i.e., not read-only) have been used to compute v's
value.
To model inheritance, we need to add the inheritance equalities
with constraint importance of WEAK, and mark the variable
corresponding to the parent's property as read-only. The
read-only annotation ensures that the constraints are solved
in an order corresponding to a top-down traversal of the document
tree. Thus, the above example modifies the constraints
in Figure 9 so that each font size variable on the right hand
side has a read-only annotation.
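Operationally, the read-only annotations mean the WEAK inheritance equalities
can be resolved in a single top-down pass over the document tree. A minimal
Python sketch (the tree and rule encodings are ours) makes the directionality
explicit:

def solve_font_sizes(tree, rules, inherited=12.0):
    """tree: (id, children); rules: id -> absolute pt or ('rel', factor)."""
    node_id, children = tree
    rule = rules.get(node_id)
    if rule is None:
        size = inherited                  # WEAK equality with the parent
    elif isinstance(rule, tuple):
        size = rule[1] * inherited        # e.g. 90% of the parent's size
    else:
        size = rule                       # absolute designer value
    result = {node_id: size}
    for child in children:
        result.update(solve_font_sizes(child, rules, size))
    return result

doc = ('body', [('h', []), ('p', [('q1', [('q2', [])])])])
print(solve_font_sizes(doc, {'q1': ('rel', 0.9)}))

Here the paragraph keeps the 12 pt default; it can no longer "inherit" 10.8 pt
from its child.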
Extending CSS 2.0
We have seen how we can use hierarchical constraints to provide
a declarative specification for CSS 2.0. There is, how-
ever, another advantage in viewing CSS 2.0 in this light. The
constraint viewpoint suggests a number of natural extensions
which overcome the expressiveness limitations of CSS 2.0
discussed previously. We call this extension CCSS.
As the above examples indicate, virtually all author and
viewer constraints generated from CSS 2.0 either constrain
a style property to take a fixed value, or relate it to the par-
ent's style property value. One natural generalization is to allow
more general constraints, such as inequalities. Another
natural generalization is to allow the constraint to refer to
other variables, both variables corresponding to non-parent
elements and to "global" variables.
CCSS allows constraints in the declaration of a style sheet
rule. The CSS-style attribute:value pair is re-interpreted
in this context as a constraint equation, attribute = value.
We prepend all constraints with the constraint pseudo-
property so that CCSS is backwards compatible with browsers
supporting only CSS. In a style sheet rule the constraint can
refer to attributes of self, parent, and left-sibling.
For example:
BLOCKQUOTE { constraint: font-size <= parent[font-size] }
CCSS style sheets also allow the author to introduce global
constraint variables using a new @variable directive. A
variable identifier is lexically the same as a CSS ID attribute.
It has a type indicating its base unit, with automatic coercion
between units when required. The author can express constraints
among global constraint variables and element style
properties using a new @constraint directive. There are
also some global built-in objects available with their own attributes
(e.g., Browser) that can be used.
These extensions add considerable expressive power. For instance
it is now simple to specify that all tables in the document
have the same width, and that this is the smallest width
that allows all tables to have a reasonable layout:
@variable table-width:pt
TABLE { constraint: width = table-width }
Similarly we can specify two columns c1 and c2 in the same
(or different) tables have the same width (the smallest for
reasonably laying out both):
@constraint #c1[width] = #c2[width]
It also allows the designer to express preferences in case the
desired font is not available. For example adding
H1 { constraint: font-size >= 13pt }
P { constraint: font-size >= 11pt }
to simple.css (Figure 2) will ensure that larger fonts are
used if 13 pt and 11 pt fonts are not available.
Finally, a sight-impaired viewer can express the strong desire
to have all font sizes greater than 12 pt:
* {constraint: font-size >= 12pt !important}
As long as the font size of an element is 12 pt or larger it will
not be changed, but smaller fonts will be set to 12 pt.
Providing inequality constraints allows the author to control
the document appearance more precisely in the context
of browser capabilities and viewer preferences. Addition-
ally, CCSS allows the author to give alternate style sheets
for the same media. Each style sheet can list preconditions
for their applicability using a new @precondition directive.
For efficiency, the precondition can only refer to various
pre-defined variables. The values of these variables will
be known (i.e. they will have specific values) at the time the
precondition is tested. For example:
@precondition Browser[frame-width] >= 800px
@precondition ...
Figure 10: Screen shots of our prototype browser. In the view on the left, a narrow style sheet is in effect because
the browser width is less than 800 pixels, while on the right a wide style sheet is used. Interactively changing the browser width
dynamically switches between these two presentations. In both figures, the first column is half the width of the second
column, which is twice the width of the last column. On the left, the table consumes 100% of the frame width, but on the
right, the table width is the browser width minus 200 pixels. Also notice the changes in font size and text alignment.
We extend the style sheet @import directive to permit listing
multiple style sheets per line, and the first applicable sheet is
used (the others are ignored). If no style sheet's preconditions
hold, none are imported. Consider the example directive:
@import wide.css, tall.css, small.css
If wide.css's preconditions fail, but tall.css's succeed,
the layout uses tall.css. If, through the course of, e.g., the
user resizing the top-level browser frame, wide.css's preconditions
later become satisfied, the layout does not switch
to that style sheet unless tall.css's preconditions are no
longer satisfied. That is, the choice among style sheets listed
with one directive is only revisited when a currently-used
style sheet is no longer applicable.
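The selection rule just described can be sketched in Python as
follows (the function and environment names are ours, not
Amaya's):

def select_sheet(sheets, env, current=None):
    # sheets: (name, precondition) pairs in @import order;
    # env: the pre-defined variables, e.g., the browser frame size;
    # current: the style sheet currently in use, if any.
    if current is not None and current[1](env):
        return current              # keep the current sheet while it applies
    for sheet in sheets:
        if sheet[1](env):           # otherwise the first applicable sheet wins
            return sheet
    return None                     # no precondition holds: import nothing

wide  = ("wide.css",  lambda e: e["frame-width"] > 550)
tall  = ("tall.css",  lambda e: e["frame-width"] <= 600 and e["frame-height"] > 550)
small = ("small.css", lambda e: True)
chosen = select_sheet([wide, tall, small], {"frame-width": 560, "frame-height": 700})

At a frame width of 560 pixels both wide.css and tall.css are
applicable; called with current set to tall.css's entry, the
function keeps tall.css, matching the revisit rule above.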
As an example consider a style sheet for text with pictures.
If the page is wide, the images should appear to the right of
the text; if it is narrow, they should appear without text to the
left; and if it is too small, the images should not appear at all.
This can be encoded as:
/* wide.css */
@precondition Browser[frame-width] > 550px
IMG { float: right}
/* tall.css */
@precondition Browser[frame-width] <= 600px
@precondition Browser[frame-height] > 550px
IMG { clear: both; float: none}
/* small.css */
IMG { display: none }
Preconditions become even more expressive in the presence
of support for CSS positioning [10] and a generalized flow
property [7].
IMPLEMENTATION
Prototype Web Browser
We have implemented a representative subset of our CCSS
proposal to demonstrate the additional expressiveness it provides
to web designers. Our prototype is based on version
1.4a of Amaya [8], the W3 Consortium's browser. Amaya
is built on top of Thot, a structured document editor, and
has partial support for CSS1. Amaya is exceptionally easy
to extend in some ways (e.g., adding new HTML tags), and
provides a stable base to build from.
Our support for constraints in Amaya covers the two main
domains for constraints that we have discussed: table widths
(for illustrating page layout relationships) and font sizes (for
illustrating the solving of systems involving inherited
attributes). In our prototype, HTML and CSS code can contain
constraints and declare constraint variables. In HTML
code, constraint variables, instead of specific values, can be
attached by name to element attributes (e.g., to the "width"
of a table column). [3] When the constraints of the document
force values assigned to variables to change, the browser updates
its rendering of the current page, much as it does when
the browser window is resized (which often caused the re-solve
in the first place).
We have also extended Amaya to support preconditions on
style sheets and the generalized "@import" CCSS rule. The
performance of switching among style sheets is similar to a
reload, and when the style sheets are cached on disk, is sufficiently
fast even for switching style sheets during an interactive
resize. [4] See Figure 10 for screen shots of an example
using our prototype's support for both table layout and
preconditions.
[3] Due to limitations in Amaya's support for style sheets, the variables
must be attached to column width attributes in the HTML source instead of
being specified in the style sheet.
[4] It may be useful to provide background pre-fetching of alternate style
sheets to avoid latency when they are first needed.
As the support for CSS improves in browsers,
more significant variations will be possible through the use
of our @precondition and extended @import directives.
We compared the performance of our prototype browser to
an unmodified version of Amaya 1.4a, both fully optimized,
running on a PII/400 displaying across a 10Mbit network to
a Tektronix X11 server on the same subnet. Our test case
was a small example on local disk using seven style sheets.
We executed 100 re-loads, and measured the total wall time
consumed. The unmodified browser did each re-load and re-render
in 190 ms, while our prototype took only 250 ms even
when sized to select the last alternative style sheet in each
of three @import directives. This performance penalty is
reasonable given the added expressiveness and features the
prototype provides.
One of the most important benefits of re-framing CSS as constraints
is that it provides an implementation approach for
even the standard CSS features. To simplify our prototype
and ensure it remains a superset of CSS functionality, we
currently do not treat old-style declarations as constraints,
but instead rely on the existing implementation's handling of
those rules. However, if designed into a browser from the
beginning, treating all CSS rules as syntactic sugar for underlying
constraints will result in large savings in code and
complexity. The cascading rules would be completely replaced
by the constraint solver's more principled assignment
of values to variables, and the display engine need only use
those provided values, and redraw when the solver changes
the current solution.
Constraint Solving Algorithms
The semantics of the declarative specification of the constraints
are independent of the algorithms used to satisfy
them. However, especially for interactive applications such
as a web browser, it is essential that algorithms exist that
are able to solve the constraint systems efficiently. Our implementation
uses two algorithms: Cassowary [4] and a restricted
version of BAFSS [12].
The Cassowary algorithm handles linear arithmetic equality
and inequality constraints. The collection of constraints may
include cycles (i.e. simultaneous equalities and inequalities
or redundant constraints) and conflicting preferences. Cassowary
is an incremental version of the simplex algorithm, a
well-known and heavily studied technique for finding a solution
to a collection of linear equality and inequality constraints
that minimizes the value of a linear expression called
the objective function. However, commonly available implementations
of the simplex algorithm are not really suitable
for interactive applications such as the browser described
above. In particular, Cassowary supports the weighted-sum-better
comparator for choosing a single solution from among
those that satisfy all the required constraints.
A weighted-sum-better comparator computes the error for a
solution by summing the product of the strength tuple and
the error for each constraint that is unsatisfied. To model the
CSS importance rules in a hierarchy of constraint strengths,
we encode the symbolic levels of importance as tuples as
well; for example, VIEWER-IMPORTANT is <1, 0, 0, 0, 0, 0>
and BROWSER is <0, 0, 0, 0, 1, 0>. Hence no matter what
scalar error a BROWSER constraint has, it will never be satisfied
if doing so would force a VIEWER-IMPORTANT constraint
to not be satisfied. Similarly, the last three components
of the strength tuple encode the remaining, weaker levels of
importance.
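This comparator can be sketched in Python as follows (the
tuple layout follows the two examples above; the
per-constraint error functions are an assumption):

VIEWER_IMPORTANT = (1, 0, 0, 0, 0, 0)
BROWSER          = (0, 0, 0, 0, 1, 0)

def solution_error(constraints, solution):
    # Sum strength * error component-wise over all constraints;
    # a satisfied constraint contributes an error of 0.
    total = [0.0] * 6
    for strength, error_of in constraints:
        e = error_of(solution)
        for i, s in enumerate(strength):
            total[i] += s * e
    return tuple(total)

Candidate solutions are compared by their error tuples, and
Python tuples compare lexicographically, so any nonzero
VIEWER-IMPORTANT error dominates an arbitrarily large BROWSER
error, exactly as required.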
BAFSS handles binary acyclic font constraints using a dynamic
programming approach. For the font constraints implied
by CSS, we are able to simplify the algorithm because
all of the constraints relate a read-only size attribute in the
parent element to the size attribute of a child element. Given
this additional restriction that all constraints are one-way, the
algorithm is simple: visit the variable nodes in topological
order and assign each a value that greedily minimizes the error
contribution from that variable.
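A minimal sketch of this restricted solver follows (the node
interface, in which an element's desired size may depend on
its parent's already-solved size, is our assumption):

def solve_font_sizes(root, available_sizes):
    # A top-down traversal is a topological order of the one-way
    # constraints: a parent's size is fixed before its children's.
    pending = [root]
    while pending:
        node = pending.pop()
        desired = node.desired_size()    # may read the parent's solved size
        node.size = min(available_sizes, key=lambda s: abs(s - desired))
        pending.extend(node.children)
    return root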
Both constraint solvers are implemented within the Cassowary
Constraint Solving library [1].
RELATED WORK
The most closely related research is our earlier work on the
use of constraints for web page layout [5]. This system allowed
the web page author to construct a document composed
of graphic objects and text. The layout of these objects
and the text font size were described in a separate "lay-
out sheet" using linear arithmetic constraints and finite domain
constraints. Like CCSS, layout sheets had precondi-
tions, controlling their applicability.
The work reported here, which focuses on how to combine
constraint-based layout with CSS, is complementary to our
previous research. One of the major technical contributions
here is to provide a declarative semantics for CSS based on
hierarchical constraints; this issue was not addressed in our
prior work [5]. There are two fundamental differences between
layout sheets and CCSS. The first is that layout sheets
are not style sheets in the sense of CSS since they can only
be used with a single document. Constraints only apply to
named elements, and there is no concept of a style rule which
applies to multiple elements. The second is that in the system
of [5] there is no analogue of the document tree. The document
is modeled as a flat collection of objects, which means
that there is no notion of inheritance and almost all layout
must be explicitly detailed in the layout sheet.
[5] This does not exactly match the CSS specificity rules. For example, if
the error in a constraint with strength <WEAK, 0, 0, 1> is 10 times greater
than the error in a conflicting constraint with strength <WEAK, 0, 0, 2>, the
first constraint will affect the final solution. By choosing appropriate error
functions we can make this unlikely to occur in practice. However, the more
general constraint hierarchy support may actually permit more desirable
interactions than the strict strength ordering imposed by CSS.
Cascading style sheets are not the only kind of style sheet.
The Document Style Semantics and Specification Language
(DSSSL) is an ISO standard for specifying the format of
documents. DSSSL is based on Scheme, and provides
both a transformation language and a style language.
It is very powerful but complex to use. More recently, W3C
has begun designing the XSL style sheet language for use with XML
documents. XSL is similar in spirit to DSSSL. PSL [13] is
another style sheet language; its expressiveness lies midway
between that of CSS and XSL. The underlying application
model for all three is the same: take the document tree of
the original document and apply transformation rules from
the style sheet in order to obtain the presentation view of the
document, which is then displayable by the viewing device.
In the case of XSL, the usual presentation view is an HTML
document whose elements are annotated with style properties.
None of these other style sheet languages allow true
constraints. Extending any of them to incorporate constraints
would offer many of the same benefits as it does for CSS,
namely, the ability to flexibly combine viewer, browser, and
designer desires and requirements, and a simple powerful
model for layout of complex objects, such as tables. The
simplest extension is to allow constraints in the presentation
view of the document. Providing constraints in the transformation
rules would seem to offer little advantage. In the case
of DSSSL a natural way to do this is to embed a constraint
solver into Scheme (as in SCWM [2]). In the case of XSL,
the simplest change would be to extend the presentation language
from HTML to HTML with CCSS style properties.
Regarding other user interface applications of constraints,
there is a long history of using constraints in interfaces and
interactive systems, beginning with Ivan Sutherland's pioneering
Sketchpad system [17]. Constraints have also been
used in several other layout applications. IDEAL [18] is an
early system specifically designed for page layout applica-
tions. Harada, Witkin, and Baraff [11] describe the use of
physically-based modeling for a variety of interactive modeling
tasks, including page layout. There are numerous systems
that use constraints for widget layout [14, 15], while
Badros [2] uses constraints for window layout.
CONCLUSIONS AND FUTURE WORK
We have demonstrated that hierarchical constraints provide
a unifying, declarative semantics for CSS 2.0 and also suggest
a simplifying implementation strategy. Furthermore,
viewing CSS from the constraint perspective suggests several
natural extensions. We call the resulting extension CCSS.
By allowing true constraints and style sheet preconditions,
CCSS increases the expressiveness of CSS 2.0 and, impor-
tantly, allows the designer to write style sheets that combine
more flexibly and predictably with viewer preferences and
browser restrictions. We have demonstrated the feasibility of
CCSS by modifying the Amaya browser. However, substantial
work remains to develop an industrial-strength browser
supporting full CCSS, in part because of Amaya's lack of
support for CSS 2.0. It seems likely that the Mozilla [16]
browser, with its substantial support for CSS 2.0 features,
will be an excellent implementation vehicle to test our CCSS
extensions once it is sufficiently stable.
Apart from improving the current implementation, we have
two principal directions for further extensions to CCSS. The
first is to increase the generality and solving capabilities
of the underlying solver. For example, style sheet authors
should be able to arbitrarily annotate variables as read-only
so that they have greater control over the interactions of
global variables. Additionally, virtually all CSS properties,
such as color and font weight, could be exposed to the constraint
solver once we integrate other algorithms into our
solving toolkit.
The second extension is to allow "predicate" selectors in
style sheet rules. These selectors would permit an arbitrary
predicate to be tested in determining the applicability of a
rule to an element in the document structure tree. Predicate
selectors can be viewed as a generalization of the existing
selectors; an H1 P selector, for instance, is applied only to
nodes n satisfying the predicate that n is a P element with an
H1 ancestor. These predicate selectors would
allow the designer to take into account the attributes of the
selected element's parents and children, thus, for instance,
allowing the number of items in a list to affect the appearance
of the list (as in an example used to motivate PSL [13]).
A final important area for future work is the design,
implementation, and user testing of graphical interfaces for writing
and debugging constraint cascading style sheets and web
pages that use them.
ACKNOWLEDGMENTS
We thank Bert Bos and Håkon Lie of the CSS group of the
W3 Consortium for early feedback on these ideas and our
proposed extensions to CSS. This research has been funded
in part by a US National Science Foundation Graduate Fellowship
for Greg Badros, and in part by a grant from the
Australian Research Council.
--R
The Cassowary linear arithmetic constraint solving algorithm: Interface and implementation.
Constraint hierarchies.
Solving linear arithmetic constraints for user interface applications.
Constraints for the web.
Amaya web browser software.
HTML 4.0 specification.
Positioning HTML elements with cascading style sheets.
Interactive physically-based manipulation of discrete/continuous models.
Flexible font-size specification in web documents
Comprehensive support for graphical highly interactive user interfaces.
The Amulet environment: New models for effective user interface software development.
The Mozilla Organization.
--TR
Garnet
Constraint hierarchies
Interactive physically-based manipulation of discrete/continuous models
The Amulet Environment
Solving linear arithmetic constraints for user interface applications
Constraints for the web
A High-Level Language for Specifying Pictures
--CTR
Nathan Hurst , Kim Marriott , Peter Moulder, Cobweb: a constraint-based WEB browser, Proceedings of the twenty-sixth Australasian conference on Computer science: research and practice in information technology, p.247-254, February 01, 2003, Adelaide, Australia
Thomas A. Phelps , Robert Wilensky, The multivalent browser: a platform for new ideas, Proceedings of the 2001 ACM Symposium on Document engineering, November 09-10, 2001, Atlanta, Georgia, USA
Hiroshi Hosobe, Solving linear and one-way constraints for web document layout, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Greg J. Badros , Jeffrey Nichols , Alan Borning, Scwm: An Extensible Constraint-Enabled Window Manager, Proceedings of the FREENIX Track: 2001 USENIX Annual Technical Conference, p.225-234, June 25-30, 2001
Mira Dontcheva , Steven M. Drucker , Geraldine Wade , David Salesin , Michael F. Cohen, Summarizing personal web browsing sessions, Proceedings of the 19th annual ACM symposium on User interface software and technology, October 15-18, 2006, Montreux, Switzerland
Kim Marriott , Bernd Meyer , Laurent Tardif, Fast and efficient client-side adaptivity for SVG, Proceedings of the 11th international conference on World Wide Web, May 07-11, 2002, Honolulu, Hawaii, USA
Nathan Hurst , Kim Marriott , Peter Moulder, Toward tighter tables, Proceedings of the 2005 ACM symposium on Document engineering, November 02-04, 2005, Bristol, United Kingdom
Greg J. Badros , Alan Borning , Peter J. Stuckey, The Cassowary linear arithmetic constraint solving algorithm, ACM Transactions on Computer-Human Interaction (TOCHI), v.8 n.4, p.267-306, December 2001
Gil Loureiro , Francisco Azevedo, Constrained XSL formatting objects for adaptive documents, Proceedings of the 2005 ACM symposium on Document engineering, November 02-04, 2005, Bristol, United Kingdom
Greg J. Badros , Jojada J. Tirtowidjojo , Kim Marriott , Bernd Meyer , Will Portnoy , Alan Borning, A constraint extension to scalable vector graphics, Proceedings of the 10th international conference on World Wide Web, p.489-498, May 01-05, 2001, Hong Kong, Hong Kong
Charles Jacobs , Wilmot Li , Evan Schrier , David Bargeron , David Salesin, Adaptive grid-based document layout, ACM Transactions on Graphics (TOG), v.22 n.3, July 2003
Frédéric Bes , Cécile Roisin, A presentation language for controlling the formatting process in multimedia presentations, Proceedings of the 2002 ACM symposium on Document engineering, November 08-09, 2002, McLean, Virginia, USA
Fateh Boulmaiz , Cécile Roisin , Frédéric Bes, Improving formatting documents by coupling formatting systems, Proceedings of the ACM symposium on Document engineering, November 20-22, 2003, Grenoble, France
John Stamey , Bryan Saunders , Simon Blanchard, The aspect-oriented web, Proceedings of the 23rd annual international conference on Design of communication: documenting & designing for pervasive information, September 21-23, 2005, Coventry, United Kingdom | constraints;style sheets;CSS;page layout;CCSS;cascading style sheets;world wide web;HTML;cassowary |
322863 | An Efficient Fault-Tolerant Multicast Routing Protocol with Core-Based Tree Techniques. | In this paper, we design and analyze an efficient fault-tolerant multicast routing protocol. Reliable multicast communication is critical for the success of many Internet applications. Multicast routing protocols with core-based tree techniques (CBT) have been widely used because of their scalability and simplicity. We enhance the CBT protocol with fault tolerance capability and improve its efficiency and effectiveness. With our strategy, when a faulty component is detected, some pre-defined backup path(s) is (are) used to bypass the faulty component and enable the multicast communication to continue. Our protocol only requires that routers near the faulty component be reconfigured, thus reducing the runtime overhead without compromising much of the performance. Our approach is in contrast to other approaches that often require relatively large tree reformation when faults occur. These global methods are usually costly and complicated in their attempt to achieve theoretically optimal performance. Our performance evaluation shows that our new protocol performs nearly as well as the best possible global method while utilizing much less runtime overhead and implementation cost. | Introduction
Reliable multicast communication is critical for the
success of many applications such as video/audio-
broadcasting, resource discovery, server location, etc.
Recently, these applications have become increasingly
popular due to the availability of the Internet.
In general, there are two approaches for multicast
routing protocols over the Internet: source-based-tree
routing and shared-tree routing. Many protocols have been
developed including Distance-Vector Multicast Routing
Protocol (DVMRP)[WPD88], Multicast Extensions to
Open Shortest-Path First (MOSPF) [M94b], Protocol
Independent Multicast (PIM) [DEFJLW96], and Core
Based Tree Multicast (CBT) [BFC93] etc.
A problem associated with source-based-tree routing is
that a router has to keep the pair information (source,
group) and it is a ONE tree per source. In reality the
Internet is a complex, heterogeneous environment, which
potentially has to support many thousands of active
groups, each of which may be sparsely distributed, this
technique clearly does not scale.
Shared tree based multicast routing is more scalable
than source-based-tree routing. For example, in
comparison with the source-based-tree approach, a shared
tree architecture associated with the CBT method offers an
improvement in scalability by a factor of the number of
active sources. Because of its scalability and simplicity,
core-based tree multicast protocols have been widely used
in many multicast systems.
However, the core-based-tree method may have a
reliability problem. Without any enhancement, a single
point of failure on the tree will partition the tree and hence
make it difficult, if not impossible, to fulfill the
requirement of multicasting [B97]. While various
solutions have been proposed to address the problem, they
usually require relatively large tree reformation when
faults occur [BFC93 and B97]. A global strategy of this
type can be costly and complicated.
In this paper, we aim at enhancing the CBT protocol
with fault-tolerant capability and improving its
performance in terms of packet delay and resource
consumption. Our approach can be briefly summarized as
follows:
A localized configuration methodology is used. When
a faulty component in the network is detected, some pre-defined
backup path(s) is (are) used to bypass the faulty
component and thus enable the multicast communication
to continue. Backup paths can be identified off-line. At
runtime, our protocol will only require that routers on the
backup path(s) be reconfigured. In this way, we are able to
reduce the runtime overhead of the protocol without
compromising a significant amount of the performance.
The traditional CBT method routes a multicast packet
from its source towards the core of the shared tree. In
many situations, this could create traffic congestion (near
the core). We propose to route a multicast packet from its
source to the nearest node on the tree. This eliminates
potential congestion problem and improves the network
performance such as packet delay and resource
consumption.
Faults in the network occur randomly at runtime. In a
faulty situation, several routers may detect the fault and
initiate reconfiguration processes. A protocol should be
consistent in the sense that no matter how faults occur and
are detected, routers co-operate and restore the network in
an agreeable and operational state. In our protocol, the
functionality of routers during the fault management
process is clearly specified in accordance with the status
information available to the router. As a result, while
routers act asynchronously in a distributed environment,
the consistency requirement is met.
We evaluate the performance of our new fault-tolerant
multicast protocol. Performance data in terms of packet
delay and resource consumption are collected. They
indicate that in normal (i.e., non-faulty) situations, our
protocol outperforms the traditional core-based tree
protocol due to the fact that our protocol is capable of
eliminating potential traffic congestion. In the case when
faults do occur, our protocol performs very closely to the
best global reconfiguration method that provides the
theoretical performance bound.
2. Models and Notations
The network we consider consists of a number of nodes
(e.g., routers and hosts). Nodes are connected by physical
(dual directional) links along which packets can be
transmitted. Each link has an attribute called delay. A
network is modeled as a graph N(V, E) where V is a finite
set of vertices in N, representing nodes in the network
concerned; E is a finite set of edges, representing the links
between nodes.
A node (say R) is a next hop of another node (say R')
if R can receive a packet directly from R' without going
through any other router. The key data structure which a
router uses for routing is routing table. An entry in a
routing table usually consists of fields for destination
address, next hop, distance, etc. For an incoming packet,
the router locates an entry in a routing table such that the
destination address of the packet matches the destination
address of the entry. In a router, once the next hop of a
packet is determined, the packet will be transported to a
proper output interface where the packet will be
transmitted into the associated output link, which, in turn,
connects to the next hop.
Obviously, if the network status is changed (e.g., some
link fails, some router joins, etc), the routing tables of
routers in the network may need to be updated. We say
that a router is reconfigured if its routing table is updated
(in accordance with some protocol).
Routers in the network cooperatively decide a path for
a packet and transmit the packet along the path. Formally,
P(X, Y) denotes a path from X to Y where X and Y are
nodes. Sometimes, we would like to list explicitly the
sequence of nodes in a path. We use terms "route" and
"path" interchangeably. d(P(X, Y)) denotes the total
distance of links on path P(X, Y). It is usually defined by a
numeric sum of the individual link distances.
A shortest path from X to Y is usually denoted as SP(X, Y).
That is, among all the paths between X and Y, SP(X, Y) is one
that minimizes d(P(X, Y)).
In this paper, we assume that for given X and Y, the
shortest path between them is unique. This assumption
simplifies our analysis but can be easily removed.
A packet is specified by addresses of its source and
destination. The source of a packet is usually a host. The
destination for a multicast packet is denoted as G that
represents a group of designated recipient hosts. That is, a
packet with multicast address G should be sent to all the
hosts in the recipient group.
At runtime, network components (e.g., links and
routers) can fail. We assume that the faulty state of a
component can be detected by (some of) its neighboring
routers. This can be achieved by a "keep-alive"
mechanism operating between adjacent (directly linked)
routers. A keep-alive mechanism may be implemented by
means of ICMP echo request/reply messages [D91].
3. Fault-Tolerant Multicast Protocol
3.1. Overview
As stated earlier, our strategy is to enhance the existing
CBT protocol so that it will have fault tolerance capability
and at the same time its effectiveness and efficiency are
improved. Design and implementation of such a protocol
is, nevertheless, a challenging task. There are three
primary objectives:
Network performance. One of the protocol objectives
is to optimize various network performance metrics such
as the message delay, resource usage, etc.
Runtime overhead. To make the protocol fault-tolerant,
fault management functions may be invoked at
runtime. Thus, the overhead of this function should be
minimized in order for the network to provide the best
possible services to the payload applications.
Consistency. The protocol should ensure that no
matter how a fault occurs and is detected, routers co-operate
and restore the network in an agreeable and
operational state. The consistency issue will be addressed
in Section 3.4.
While all these objectives are important, they may
conflict with each other. For example, reducing the
runtime overhead may compromise the network
performance. In our design of the protocol, we take a
balanced approach: we aim for near-optimal performance, and
at the same time we take measures to reduce runtime
overhead and to guarantee consistency.
Our fault-tolerant multicast routing protocol can be
divided into two parts:
The Packet Transmission Sub-Protocol, which is responsible
for delivering multicast packets;
The Fault Management Sub-Protocol, which will detect
faults, reconfigure the network, and hence provide
necessary infrastructure for the packet transmission sub-protocol
to achieve its mission.
3.2. Packet Transmission Sub-Protocol
Many protocols have been proposed and analyzed for
transmitting multicast packets. Proposed in [BFC93], the
Core-Based Tree Protocol (CBT) is a multicast routing
protocol that builds a single delivery tree per group that is
shared by all of the group's sources and receivers. An
advantage of the shared-tree approach is that it typically
offers more favorable scaling characteristics than all other
multicast algorithms [M94a, M94b, WPD88]. Because of
this, we choose CBT as our baseline protocol and intend to
enhance it with fault-tolerance capability and improve its
efficiency and effectiveness.
A core-based tree for a multicast group is constructed in three steps:
Step 1. Selecting a core for a given multicast group;
Step 2. For each member in the multicast group, locating
the shortest path from the member to the core;
Step 3. Merging the shortest paths identified in Step 2.
At runtime, when a source generates a multicast packet,
the packet is first transmitted from the source to
(somewhere of) the tree. Once on the tree, the packet is
dispatched to all the branches of the tree and delivered to
all the receivers.
An interesting problem is what path to use for
transmitting a packet from its source to the tree. In
[BFC93], it is recommended that the shortest path from the
source to the core of the tree should be used. We call it the
"SP-To-Core" method. This method is simple, but it may
cause traffic congestion on the links close to the core
because the traffic from different sources is concentrated
there.
To improve network performance, we propose a new
method: For an off-tree router, we first find the shortest
paths from the router to all the nodes on the multicast tree.
Then, we select the path that is the shortest among these
shortest paths and use it to route a multicast packet from
this router to the tree. Because our method uses the
shortest of the shortest paths to the tree, we call it "SSP-to-
Tree".
Our method may appear to be more complex. But it
merely will take more off-line time to collect locations of
nodes and compute the shortest paths. Once the route is
determined, the runtime overhead is the same as that of the
SP-to-Core method.
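The off-line route computation can be sketched as follows (a
single-source Dijkstra run from the off-tree router; the names
are ours):

import heapq

def ssp_to_tree(graph, source, tree_nodes):
    # graph: adjacency dict mapping a node to (neighbor, link delay) pairs.
    dist, prev = {source: 0}, {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    entry = min((n for n in tree_nodes if n in dist), key=dist.get)
    path = [entry]                        # walk back from the nearest tree node
    while path[-1] != source:
        path.append(prev[path[-1]])
    return list(reversed(path))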
Nevertheless, our new method may eliminate the
potential problem of traffic congestion. Figures 3-1 and 3-
2 show the traffic flow in a network with these two
methods. It is clear that with SSP-To-Tree method, the
bandwidth usage is better balanced and the traffic
congestion is removed.
3.3. Fault Management Sub-Protocol
Recall that Fault Management Sub-Protocol (FMSP) is
responsible for detecting faults and reconfiguring the
network once faults are detected. Thus, it provides
necessary infrastructure for the Packet Transmission Sub-Protocol
to deliver multicast packets. In this sub-section,
we will focus on the technique for handling a single fault
that occurs on the Core-Based Tree.
3.3.1. General Approaches
Consider that at runtime a component (link or router) on
the Core-Based Tree becomes faulty. To continue
multicast communication, alternative routes for the
multicast packets, that used to be transmitted through the
faulty component, must be utilized. Two approaches can
be taken. With a global approach, the faulty status will be
reported to all the routers in the network. Consequently,
based on the faulty status the core-base tree may be rebuilt
and (potentially all) the routers may be reconfigured. Note
that all these operations have to be performed on-line.
Thus, while this may help to achieve theoretically optimal
performance, the runtime overhead (including the
notification of the faulty state and reconfiguration of
routers) may be too large to make this approach practical.
We take a local approach. Rather than rebuilding the
core-base tree and reconfiguring all the routers, we will
use pre-defined backup paths to bypass the faulty
component. We then just reconfigure the routers that are
on the involved backup paths. All the packets that were
supposed to be transmitted over the faulty link will be
routed via the backup path(s).
Obviously, our local approach is simple, and involves
very small runtime overhead in comparison with the global
approach. The performance evaluation in Section 4 will
show that our local reconfiguration approach performs
closely to the best possible global approach in most cases.
The fault management sub-protocol involves the following
tasks:
Initialization. The task here is to select backup paths.
Fault detection. Assume that each router continuously
monitors the status of its upstream link and router and
hence is able to determine if they are in a faulty state.
Backup path invocation. Upon detecting a fault of its
upstream link, the router starts notifying all the routers
on its backup path of this state so that they are ready to
be used.
Router configuration. After all the routers on a backup
path confirm their readiness, they will be configured in
order to let traffic re-route via the backup path.
Before we describe each of the above tasks in detail, we
will first discuss the methods used for router
configuration. As we will see, these reconfiguration
methods have an impact on the functions of the other tasks.
3.3.2. Configuration Methods
We consider two methods to configure the routers on a
backup path. They differ in overhead and potential
performance.
Figure 3-1. A Traffic Flow with the SP-To-Core Method. R_1, R_2, and R_3 are source routers, each transmitting 1 Mb/s of multicast traffic. As a result, the bandwidth usage on the link from R_5 to R_7 is 3 Mb/s, significantly higher than on the other links.
Figure 3-2. Traffic Flow with the SSP-To-Tree Method. The same source routers each transmit 1 Mb/s of multicast traffic; with the SSP-To-Tree method, the bandwidth usage on the links is better balanced.
The first method is the virtual repair
method. With this method, no routing table is to be
changed on the invoked backup path. Instead, a pre-programmed
agent will be installed at the two end routers.
The agent will encapsulate a multicast packet, which was
supposed to be transmitted via the faulty component. The
encapsulated packet will be source-routed (via the backup
path) to the other end of the backup path. The agent at the
other end of the backup path, once receiving the
encapsulated packet, will de-encapsulate it and transmit
along the normal path(s) where the packet should be
dispatched. Thus, the topology of the tree is virtually
unchanged, except that the faulty component is bypassed.
The second method is called the real repair method.
With this method, all the routing tables on the backup path
will be changed to reflect the new topology of the tree.
Packets are routed according to new routing table.
Figure 3-3 shows an example of using these two methods.
Figure 3-3 (a) shows a portion of the network
with the original core-based tree. Assume that there is a
fault on the link between R 4 and R 6 . Let the backup path
that is used to reconnect the disjoint tree be <R 6 , R 5 , R 3 >.
Figure 3-3 (b) shows the situation after virtual repair. In
this case, the agent on R 3 will encapsulate the multicast
packets and source-route encapsulated packets to R 6 via
R 5 . Vice versa for the packets from R 6 to R 3 . However, R 3
still has to send multicast packets to R 5 . Hence, the load
between R 3 and R 5 is doubled because the tree is virtually
repaired. The situation will improve when the real repair
method is used as shown in Figure 3-3 (c). In this case, R 5
and R 6 will be reconfigured to recognize that while R 5
continues to be a son of R 3 , R 6 is now a new son of R 5 .
Hence, the packets between R 3 and R 5 will not be
transmitted twice.
Clearly, the virtual repair method is simple and can
quickly restore a path. But, it may utilize extra bandwidth
and cause longer delay because the routers on the path are
not configured to take the advantage of the new topology.
On the other hand, the real repair method may produce
better performance (in terms of packet delay, for example).
But this method is complicated and takes more runtime
overhead during the reconfiguration process. In either case,
each router must know which backup path it can use. We
discuss this topic next.
3.3.3. Selection of Backup Paths
3.3.3.1. Backup Paths with the Virtual Repair Method
For the sake of simplicity, we will first consider the
situation where a fault occurs only on the core-based tree
and there is at most one fault at a time.
First, we need to introduce some notations. For any two
routers (R and R') on the core-base tree, R is a son of R'
and R' is the father of R if there is a link between R and R'
and R' is (in terms of distance) closer to the core than R.
R" is the grandfather of R if R" is the father of R'and R' is
the father of R. Note that the core router has neither father
nor grandfather. The sons of the core router have father
(which is the core), but have no grandfather. All other
routers have both father and grandfather.
One of the sons of the core is selected to be the backup-core,
which will become the core if the core fails. How to select
the backup-core is irrelevant to the function of the fault
management. In practice it may be selected from network
administrative point of view, as suggested in selecting the
core [BFC93].
With this method, every router on the tree, except the
core, owns a pre-defined backup path. For a router that has
grandfather (i.e., the one that is not a son of the core), its
backup path is a path that connects itself to its grandfather.
A constraint on the backup path of a router is that the path
does not contain the father of the owner. In Figure 3-4, for
example, <R 7 , R 4, R 2 , R 3 > cannot be the backup path of R 7
because it contains R 4 which is the father of R 7 . But <R 7 ,
can be the backup path of R 7 .
For a router that has no grandfather, its backup path is a
path that connects this router to the backup-core. In Figure
3-4, if the backup core is R 3 , then the backup path of R 2
could be <R 2 , R 3 >. For the backup-core router, its backup
path is a path that connects itself to the core, but bypasses
the link between itself and the core. In Figure 3-4, if the
backup-core is R_3, then the backup path of R_3 could be any
path connecting R_3 to the core that avoids the direct link
between them.
We assume that for each router on the tree (except the
core), at least one backup path exists. It is easy to verify that
if a non-core router has no backup path, then the network
is not single-fault tolerable. For a router, if multiple
backup paths exist, we select the one with the shortest
distance.
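The off-line selection for a router r can be sketched as
follows (shortest_path stands for any standard shortest-path
routine, such as the Dijkstra sketch above; the names are
ours):

def backup_path(graph, r, father, grandfather, shortest_path):
    # Shortest path from r to its grandfather that avoids r's father,
    # so that a failure of the father, or of the link to it, is bypassed.
    pruned = {u: [(v, w) for v, w in nbrs if v != father]
              for u, nbrs in graph.items() if u != father}
    return shortest_path(pruned, r, grandfather)

Because the search runs on the pruned graph, the returned path
is automatically the shortest among all backup paths satisfying
the constraint.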
Figure 3-3. Reconfiguration Methods: (a) the original CBT; (b) the CBT after virtual repair; (c) the CBT after real repair.
Figure 3-4. Backup Path with the Virtual Repair Method.
The routers on a backup path can be divided into three
kinds, namely owner, terminator, and on-path routers,
depending on their function in the fault management. The
first router on the backup path is the owner of the backup path.
The router at the other end of a backup path is called the
terminator. Other routers on the backup path, excluding the
owner and terminator routers are called on-path routers.
3.3.3.2. Backup Paths with the Real Repair Method
As discussed above, with the virtual repair method the
shortest path from a router to its grandfather is used as its
backup path. One would think that we could define the
backup path in the same way for the real repair method.
Unfortunately, this idea does not work as shown by the
examples illustrated in Figure 3-5.
Figure 3-5 shows a portion of the network with the
core-based tree. Assume that there is a faulty router, R 4 .
Because of this, sub-trees T 1 and T 2 are disconnected from
the original core-based tree. In Figure 3-5, let the shortest
path from R 7 to its grandfather (R 3 ) be <R 7
and the shortest path from R 8 to its grandfather be
It is easy to see in Figure 3-5 that if these
two shortest paths were used as backup paths, a loop <R 3 ,
would occur. The example show that
if the backup path of a router transverses another
partitioned sub-tree, a loop may occur. Thus, the selection
of backup path with the real repair method is not a trivial
task. Before stating our selection method, we need to
establish some properties of the core-based tree.
Assume that there is a faulty router, R, on the core-based
tree. Because of this, the core-based tree is split into
m+1 sub-trees, namely T 0 , is the sub-tree
that contains the core, and T i m) is the sub-tree
whose root is a son of the faulty router. Let the
routers on T i be indexed R i,j . In particular, R i,0 is the root
of T i .
Let P_i be the shortest path from R_i,0 to the father of R.
Define a relation -> as follows: T_i -> T_j if and only if
P_i contains a router (say, R_j,k) that belongs to T_j and,
for any other router R_j',k' that is on some sub-tree T_j' and
is contained in P_i, R_j,k is closer to R_i,0 than R_j',k' is.
In this case, when T_i -> T_j, T_j is the first sub-tree which
P_i traverses, except T_i itself.
For this relation, we have the following results.
Lemma 3-1. The relation -> has the following properties:
Property A. The relation -> is not cyclic. That is, there is
no subset of sub-trees (say, T_k1, T_k2, ..., T_kh) such that
T_k1 -> T_k2 -> T_k3 -> ... -> T_kh -> T_k1.
Property B. Sub-tree T_0 does not relate itself to any other
sub-tree.
Property C. Every other sub-tree T_i (i > 0) uniquely relates
itself to some other sub-tree. That is, there is a unique T_j
(j != i) such that T_i -> T_j.
With this lemma, we can get the following theorem:
Theorem 3-1. For every sub-tree T i (i > 0), either
or there is a unique non-empty sequence of sub-trees <T i1 ,
such that
Proof. If then the theorem is proved. Assume (3-
1) is not true. By Property C, T i has to relate itself to some
sub-tree, say (T are
done. Otherwise, T i1 has to relate to another sub-tree, say
. We can keep doing this until we
reach T ih that does not relate to any other sub-tree. The
termination of this process is guaranteed because is not
cyclic and there are finite number of sub-trees. Now, T ih
must be T 0 . Otherwise we violate Property C. The
sequence <T i1 , T i2 , ., T ih-1 > satisfies (3-2). Again by
Property C, this sequence has to be unique.
By Theorem 3-1, we have the following algorithm to select
backup paths with the real repair method.
Step 1. Let P be the shortest path from R_i,0 to its grandfather.
Step 2. If T_i -> T_0, trim the tail part of P so that it
terminates when it first reaches a router on T_0. Else, if
T_i -> T_i1, trim the tail part of P so that it terminates when
it first reaches a router on T_i1.
Step 3. The remaining part of P is the backup path for R_i,0.
Once again, the first router of the backup path is called
the owner, the last one is the terminator, and the others
between them are on-path routers. Consider the example in
Figure 3-5. Using the above algorithm, we will select
<R_7, R_10, R_11> as the backup path of R_7. R_7 is the owner, R_10 is
an on-path router, and R 11 is the terminator. Note that if
the virtual repair method is used, the backup path will be
much longer because over there we do not trim the path.
It is obvious that Theorem 3-1 guarantees that for every
son of the faulty router R, a backup path can be identified
with the above procedure. Trimming is necessary to avoid
the loops as shown in Figure 3-5.
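Step 2 of the algorithm can be sketched as follows, assuming
membership of a router in the target sub-tree can be tested
(the names are ours):

def real_repair_backup_path(path_to_grandfather, target_tree):
    # Trim the tail of the shortest path to the grandfather: stop at the
    # first router belonging to the target sub-tree (T_0, or T_i1 when
    # T_i -> T_i1), avoiding loops such as the one in Figure 3-5.
    trimmed = []
    for router in path_to_grandfather:
        trimmed.append(router)
        if router in target_tree:
            break                # first router on the target sub-tree
    return trimmed               # owner, on-path routers, then terminator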
3.3.4. Backup Path Invocation
Backup path invocation is the key part of the fault
management algorithm. For the sake of completeness,
Figure 3-6 shows the entire set of fault management
algorithms executed by the different kinds of routers. These
algorithms are executed in the routers concurrently with
other tasks the routers have.
Figure 3-5. Backup Path with the Real Repair Method.
In case a router plays multiple roles, these algorithms will be
executed simultaneously in the router.
It is clear from the above discussion that only the
routers on a backup path need to be re-configured in order
to repair a fault that occurs on the tree. This will result in
very small runtime overhead and can be scaled to large
networks. This is the advantage of our local configuration
approach.
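In outline, a successful invocation proceeds as follows (a
runnable sketch of the message flow of Figure 3-6; the
function names are ours):

def invoke_backup_path(path):
    # path: the routers listed from the owner to the terminator.
    for router in path[1:]:
        deliver(router, "msg(backup_path)")   # forwarded hop by hop upstream
    for router in reversed(path):
        reconfigure(router)                   # msg(positive_conf) flows back:
                                              # the terminator reconfigures
                                              # first and the owner last

def deliver(router, message):
    print(message, "received at", router)

def reconfigure(router):
    print("reconfiguring", router)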
3.4. Discussion
Our fault-tolerant multicast protocol has the following
properties:
In a normal situation (e.g., without any fault), our
protocol operates as a CBT protocol.
After the backup path is established, a tree is formed.
It consists of the backup path and all the links and routers
on the original tree except the faulty link or router.
For the newly formed tree, if the original core is not
faulty, it will still be the core for the new tree. Otherwise,
the core of the new tree will be the backup-core.
Nevertheless, the above properties imply that while the
routers act asynchronously, our fault management sub-protocol
guarantees to bring the system into an agreeable
and operational state after a fault is detected. Thus, the
consistency requirement is met.
We would also like to argue that in addition to the
consistency requirement, our other design objectives
(stated in Section 3.1) are also well satisfied. Our local
approach obviously reduces the runtime overhead without
compromising much of the performance. Our SSP-To-
Tree method used in the packet transmission sub-protocol
eliminates the problem of potential traffic congestion and
will improve the delay performance. In Section 4, we will
show performance data that will quantitatively justify the
above claims.
Finally, we would like to say that our protocol can be
easily extended to deal with the case of multiple faults,
which will impact not only paths on the CBT but also off-tree
ones. Due to space limitations, we cannot discuss the
extension in detail; the interested reader can refer to our
paper [XXJZ99].
4. Performance Evaluation
4.1. Simulation Model
In this section, we will report performance results of
the new protocol introduced in this paper. To obtain the
performance data, we use a discrete event simulation
model to simulate data communication networks. The
simulation program is written in the C programming language
and runs on a Sun SPARCstation 20. The network
simulated is the ARPA network [PG97]. During the
simulation, the multicast packets are randomly generated
as a Poisson process. Faults are also randomly generated
with X being the average life-time of a fault and Y being
the average inter-arrival time of faults. Thus, X and Y
together determine Pf = Prob(the system is in a faulty state),
the probability that the system is in a faulty state. We will
measure the network performance as a function of Pf. We are
interested in the following metrics:
Average end-to-end delay (or average delay for short):
The end-to-end delay of a packet is the sum of the delays
at all the routers through which the packet passes.
Network resource usage: This is defined as the total
number of hops that (copies of) a multicast packet travel in
order to reach all the members in the multicast group.
Four systems are simulated:
SSP-To-Tree/V.R. In this system, our newly proposed
fault-tolerant multicast communication protocol is
simulated. For the router configuration method, the virtual
repair method (V.R.) is used.
SSP-To-Tree/R.R. This system is the same as SSP-To-Tree/V.R.
except that the real repair method (R.R.) is used.
SSP-To-Tree/N.F. In this system, our newly proposed
fault-tolerant multicast communication protocol is
simulated. But no fault is generated.
SP-To-Core/N.F. In this system, the original CBT
protocol is used. No fault is generated in the simulation.
We are interested in SP-To-Core/N.F. because it uses
the original CBT protocol. We take it as a baseline system.
All the performance measures of SSP-To-Tree/V.R.,
SSP-To-Tree/R.R., and SSP-To-Tree/N.F. will be normalized
by the corresponding data of SP-To-Core/N.F. Thus, the
data we report are relative, that is, relative to SP-To-Core/N.F.
4.2. Performance Observations
The results of the average delay metric are shown in
Figure 4-1, while the results of the network resource usage
metric are shown in Figure 4-2. In Figure 4-2, the
performance curve of SSP-to-Tree/N.F is virtually covered
by that of SSP-to-Tree/R.R, and is not easily visible.
From these data, we can make the following observations:
As expected, the SSP-to-Tree/N.F system achieves
better performance than the SP-to-Core/N.F. For example,
in Figure 4-1, the average relative delay of SSP-To-Tree/N.F
is 0.899. That is, on average the delay of SSP-To-Tree is only
89.9% of that of SP-To-Core. Similarly,
in Figure 4-2, the network resource usage of SSP-To-Tree/N.F
is 0.928, meaning the resource usage of SSP-To-Tree is, on
average, 92.8% of that of SP-To-Core/N.F.
Figure 3-6. Fault Management Algorithms: (a) for the backup path owner; (b) for an on-path router; (c) for the backup path terminator. When the owner detects that its upstream link/router is no longer alive, it sends msg(backup_path) to the upstream router on the backup path; each on-path router forwards the message upstream; the terminator reconfigures itself and sends msg(positive_conf) back downstream; and each router that receives the confirmation forwards it and reconfigures itself, ending with the owner.
In the case of low probability of fault (say, Pf < 10%),
both SSP-to-Tree/V.R and SSP-to-Tree/R.R perform
almost identically to SSP-to-Tree/N.F. As we mentioned
earlier, SSP-To-Tree/N.F. provides a lower bound that any
(global) fault management algorithm can achieve. Hence,
we claim that when the fault probability is not too high,
our Fault Management Sub-Protocol with a localized
approach performs almost identically to the best possible
global one. Meanwhile, our localized approach involves low
runtime overhead.
When the probability of fault becomes very large (i.e.,
greater than 10%), the performance of both the SSP-To-
Tree/V.R and SSP-To-Tree/R.R is clearly impacted. The
greater the Pf value, the worse the end-to-end delay and
resource usage are. Specifically, the delay increases
much more rapidly than the network resource
usage as Pf increases. This is because as more faults occur,
fewer functional links and routers are available. Hence,
some functional links and routers may become congested.
We note that 10% fault probability is really high and is
unlikely to happen in reality.
The system that uses the real repair method (SSP-To-
Tree/R.R.) always performs better than that with the
virtual repair method (SSP-To-Tree/V.R.). This coincides
with our intuition because the real repair method explicitly
takes into account the new topology after a fault occurs,
and hence better utilizes the system resources.
5. Final Remarks
We have proposed and analyzed a new fault-tolerant
multicast communication protocol. Our protocol consists
of two sub-protocols: the Packet Transmission Sub-Protocol
and Fault Management Sub-Protocol. The Packet
Transmission Sub-Protocol uses an improved version of
the original CBT protocol. While maintaining the same
level of scalability, our improved CBT protocol has much
better performance because of the SSP-To-Tree
technology. For the Fault Management Sub-Protocol, we
take a localized approach that has a relatively low runtime
overhead. Our performance evaluation indicates that it
performs very closely to the possible theoretical bound.
Several extensions to our work are possible. We may
apply the technology we developed for the fault-tolerant
CBT protocol to anycast messages, and consequently
develop an integrated protocol for both multicast and
anycast messages. This protocol should be useful in
practice. For example, in a group of replicated database
servers, the multicast packets must be sent to all the
members in order to maintain information consistency. A
request from a client can be taken as an anycast message and
can be delivered to any of the server members.
Our protocol can also be extended to the applications
where the messages have both fault-tolerant and real-time
requirements. The key issue here is to model the traffic on
the shared multicast tree so that a delay bound can be
derived [MZ94, XJZ98].
Appendices
A-1. Proof of Lemma 3-1
Proof. By definition of T 0 , Property B is evident. Because of the
uniqueness of the shortest path, Property C is also obvious. Here,
we focus on the proof of Property A using contradiction.
Assume Property A is not true. That is, there is a sequence of
sub-trees <T_k1, T_k2, ..., T_k(n-1), T_kn, T_k(n+1), ..., T_kh> such that
T_k1 -> T_k2 -> ... -> T_k(n-1) -> T_kn -> ... -> T_kh -> T_k1. See Figure A-1.
Note that T_0 is the sub-tree that contains the core.
For a sub-tree (say, T_kn, 1 <= n <= h) in this sequence, let R_kn,0 be
its root and P_kn be the shortest path from R_kn,0 to the father of the
faulty router (in Figure A-1, R_0,1 is the father of R_f). Because
T_kn -> T_k(n+1), [1] P_kn enters T_k(n+1); assume that R_k(n+1),1 is the
first router in T_k(n+1) encountered by P_kn. Similarly, we assume that
[1] For the convenience of discussion, n+1 denotes an addition
operation modulo h; that is, index h+1 is taken to be 1.
Figure A-1. Sub-Trees and Backup Paths with a Faulty Router (R_f denotes the faulty router; T_k1, ..., T_kh are the sub-trees, with roots R_k1,0, ..., R_kh,0; R_0,1, on the sub-tree containing the core, is the father of R_f).
Figure 4-1. Average Delay Relative to SP-to-Core/N.F (x-axis: probability of fault, from 0.01% to 56.23%; curves: SSP-to-Tree/V.R., SSP-to-Tree/R.R., and SSP-to-Tree/N.F.).
Figure 4-2. Network Resource Usage Relative to SP-to-Core/N.F (x-axis: probability of fault, from 0.01% to 56.23%; curves: SSP-to-Tree/V.R., SSP-to-Tree/R.R., and SSP-to-Tree/N.F.).
R_kn,1 is the first router in T_kn encountered by the shortest path
from R_k(n-1),0 to R_0,1.
Denote by SP_f(X, Y) the shortest path from X to Y conditioned on
the fault (R_f) having occurred. Recall that SP(X, Y) represents the
shortest path from X to Y in the normal case (where there is no
fault). Obviously, SP_f(X, Y) varies depending on the location of
the fault. Nevertheless, SP_f(X, Y) will be different from SP(X, Y)
if SP(X, Y) involves the faulty component.
Since P_kn is the shortest path from R_kn,0 to R_0,1 when R_f is
faulty, it is denoted as SP_f(R_kn,0, R_0,1). Similarly,
SP_f(R_kn,0, R_k(n+1),1) denotes the portion of P_kn from R_kn,0 to
R_k(n+1),1, and SP_f(R_k(n+1),1, R_0,1) the portion of P_kn from
R_k(n+1),1 to R_0,1.
Using the above notations related path P kn and sub-tree T kn
h, we can derive some inequalities. Because
d(SP(R kn ,1, R f ) is the shortest path from R kn,1 to R f under normal
situation and
we have:
(R kn,1 , R k(n-1),0
Furthermore, because when R f is faulty
(R kn,0 , R 0,1
(R kn,0 , R k(n+1),1 (R k(n+1),1 , R 0,1
we have
(R kn,0 , R k(n+1),1 (R k(n+1),1 , R 0,1
(R kn,1 , R 0,1 )). (A-4)
Summing up both sides of (A-2) and (A-4), we have:
(R kn,0 , R k(n+1),1
(R k(n+1),1 , R 0,1 (R kn,1 , R k(n-1),0
(R kn,1 , R 0,1
Further, if we sum up both sides of (A-5) for n from 1 to h,
we have

LHS < RHS,  (A-6)

where

LHS = Σ_{n=1..h} [ d(SP_f(R_kn,0, R_k(n+1),1)) + d(SP_f(R_k(n+1),1, R_0,1)) + d(SP_f(R_kn,1, R_k(n-1),0)) + d(SP_f(R_kn,1, R_0,1)) ]  (A-7)

and

RHS = Σ_{n=1..h} [ d(SP_f(R_kn,0, R_kn,1)) + d(SP_f(R_kn,1, R_0,1)) + d(SP_f(R_k(n-1),0, R_0,1)) ].  (A-8)
Since we assume that the links are dual directional, for any
routers X and Y we have d(SP(X, Y)) = d(SP(Y, X)) and, likewise,
d(SP_f(X, Y)) = d(SP_f(Y, X)). Because of this, (A-8) can be
reorganized as follows:
RHS = Σ_{n=1..h} [ d(SP_f(R_kn,1, R_kn,0)) + d(SP_f(R_0,1, R_kn,1)) + d(SP_f(R_0,1, R_k(n-1),0)) ].  (A-9)
Exchanging some items in (A-9) and re-indexing the cyclic sums, we
have RHS = LHS,
where LHS is given in (A-7). This contradicts (A-6), which completes the proof. □
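To make the notation above concrete, the following small sketch (our own
illustration, not part of the protocol specification) computes d(SP(X, Y)) and
d(SP_f(X, Y)) with a standard Dijkstra search, modeling the fault by removing
the faulty router from the graph; the topology, router names, and function
names are hypothetical.

import heapq

def dijkstra(adj, src, dst, removed=frozenset()):
    # Returns d(SP(src, dst)); with removed={R_f} it returns d(SP_f(src, dst)).
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if v in removed:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

# Dual-directional links: each edge is inserted in both directions with the
# same weight, so d(SP(X, Y)) = d(SP(Y, X)), as assumed in the proof.
edges = [("core", "R0", 1), ("R0", "Rf", 1), ("Rf", "A", 1), ("R0", "B", 2), ("B", "A", 2)]
adj = {}
for x, y, w in edges:
    adj.setdefault(x, []).append((y, w))
    adj.setdefault(y, []).append((x, w))

print(dijkstra(adj, "A", "R0"))                  # normal case: 2
print(dijkstra(adj, "A", "R0", removed={"Rf"}))  # R_f faulty: 4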
Acknowledgement
This work was partially sponsored by the Air Force Office of Scientific
Research, Air Force Materiel Command, USAF, under grant F49620-96-1-1076,
by City University of Hong Kong under grant 7000765, and by CERG Hong Kong
under grant 9040352. The U.S. Government is authorized to reproduce and
distribute reprints for governmental purposes notwithstanding any copyright
notation thereon. The views and conclusions contained herein are those of
the authors and should not be interpreted as necessarily representing the
official policies or endorsements, either express or implied, of the Air
Force Office of Scientific Research, the U.S. Government, Texas A&M
University, City University of Hong Kong, or CERG Hong Kong.
--R
Core Based Trees (CBT).
ICMP Router Discovery Messages.
The PIM Architecture for Wide-Area Multicast Routing
Hard Real-Time Communication in Multiple-Access Networks
OSPF Version 2.
Multicast Extensions to OSPF.
Distance Vector Multicast Routing Protocol
Routing Algorithms for Anycast Messages
An Efficient Fault-Tolerant Multicast Routing Protocol with Core-Based Tree Techniques
--TR
--CTR
Zongming Fei , Mengkun Yang, A proactive tree recovery mechanism for resilient overlay multicast, IEEE/ACM Transactions on Networking (TON), v.15 n.1, p.173-186, February 2007
Weijia Jia , Gaochao Xu , Wei Zhao , Pui-On Au, Efficient Internet Multicast Routing Using Anycast Path Selection, Journal of Network and Systems Management, v.10 n.4, p.417-438, 2002 | multicast routing;core-based trees;fault tolerance |
323001 | Matching Hierarchical Structures Using Association Graphs. | It is well-known that the problem of matching two relational structures can be posed as an equivalent problem of finding a maximal clique in a (derived) association graph. However, it is not clear how to apply this approach to computer vision problems where the graphs are hierarchically organized, i.e., are trees, since maximal cliques are not constrained to preserve the partial order. Here, we provide a solution to the problem of matching two trees by constructing the association graph using the graph-theoretic concept of connectivity. We prove that, in the new formulation, there is a one-to-one correspondence between maximal cliques and maximal subtree isomorphisms. This allows us to cast the tree matching problem as an indefinite quadratic program using the Motzkin-Straus theorem, and we use replicator dynamical systems developed in theoretical biology to solve it. Such continuous solutions to discrete problems are attractive because they can motivate analog and biological implementations. The framework is also extended to the matching of attributed trees by using weighted association graphs. We illustrate the power of the approach by matching articulated and deformed shapes described by shock trees. | where |C| denotes the cardinality of C.
Now, consider the following quadratic function

f(x) = x′Ax,  (1)

where A = (a_ij) is the adjacency matrix of G, i.e., the n × n
symmetric matrix defined as a_ij = 1 if (i, j) ∈ E, and a_ij = 0 otherwise.
A point x* ∈ S_n is said to be a global maximizer of f in S_n if
f(x*) ≥ f(x) for all x ∈ S_n. It is said to be a local maximizer
if there exists an ε > 0 such that f(x*) ≥ f(x) for all x ∈ S_n
whose distance from x* is less than ε; and if f(x*) = f(x)
implies x* = x, then x* is said to be a strict local maximizer.
Note that f(x) < 1 for all x ∈ S_n.
The Motzkin-Straus theorem [40] establishes a remarkable
connection between global (local) maximizers of the
function f in Sn and maximum (maximal) cliques of G.
Specifically, it states that a subset of vertices C of a graph G
is a maximum clique if and only if its characteristic vector
xc is a global maximizer of f on Sn. A similar relationship
holds between (strict) local maximizers and maximal
cliques [19], [50]. This result has an intriguing computational
significance in that it allows us to shift from the
discrete to the continuous domain. Such a reformulation is
attractive for several reasons: It suggests how to exploit the
full arsenal of continuous optimization techniques, thereby
leading to the development of new algorithms, and may
also reveal unexpected theoretical properties. Additionally,
continuous optimization methods are often described in
terms of sets of differential equations and are, therefore,
potentially implementable in analog circuitry. The Motzkin-
Straus theorem has served as the basis of several clique-
finding procedures [10], [18], [45], [46] and has also been
used to determine theoretical bounds on the cardinality of
the maximum clique [45], [64].
One drawback associated with the original Motzkin-
Straus formulation relates to the existence of spurious
solutions, i.e., maximizers of f which are not in the form of
characteristic vectors. This was observed empirically by
Pardalos and Phillips [45] and more recently formalized by
Pelillo and Jagota [50]. In principle, spurious solutions
represent a problem since, while providing information
about the cardinality of the maximum clique, they do not
allow us to easily extract its vertices. Fortunately, there is a
solution to this problem which has recently been introduced
and studied by Bomze [7]. Consider the following regularized
version of f:

f̂(x) = x′Ax + (1/2) x′x,  (2)

which is obtained from (1) by substituting the adjacency
matrix A of G with

Â = A + (1/2) I_n,

where I_n is the n × n identity matrix. The following is the
spurious-free counterpart of the original Motzkin-Straus
theorem (see [7] for a proof).
Theorem 2. Let C be a subset of vertices of a graph G, and let xc
be its characteristic vector. Then the following statements hold:
1. C is a maximum clique of G if and only if x^c is a global
maximizer of the function f̂ in S_n. In this case, f̂(x^c) = 1 − 1/(2|C|).
2. C is a maximal clique of G if and only if xc is a local
maximizer of f^ in Sn.
3. All local (and, hence, global) maximizers of f^ in Sn are
strict.
Unlike the original Motzkin-Straus formulation, the
previous result guarantees that all maximizers of f^ on Sn
are strict, and are characteristic vectors of maximal/
maximum cliques in the graph. In a formal sense, therefore,
a one-to-one correspondence exists between maximal
cliques and local maximizers of f^ in Sn on the one hand
and maximum cliques and global maximizers on the other
hand.
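As a small numerical illustration (our own, not code from the cited
sources), the following sketch evaluates f̂ on the characteristic vector of a
clique C; by the result just quoted, the value equals 1 − 1/(2|C|). The
example graph is an arbitrary assumption.

import numpy as np

def f_hat(A, x):
    # f_hat(x) = x'Ax + (1/2) x'x, the regularized Motzkin-Straus objective.
    return x @ A @ x + 0.5 * (x @ x)

# Adjacency matrix of a 5-vertex graph whose maximum clique is {0, 1, 2}.
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

C = [0, 1, 2]
x = np.zeros(5)
x[C] = 1.0 / len(C)       # characteristic vector of C
print(f_hat(A, x))        # 0.8333... = 1 - 1/(2*3)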
We now turn our attention to a class of dynamical systems
that we use for solving our quadratic optimization problem.
Let W be a nonnegative real-valued n × n matrix and
consider the following dynamical system:

ẋ_i(t) = x_i(t) [ (Wx(t))_i − x(t)′Wx(t) ],  i = 1, …, n,  (3)

where a dot signifies derivative w.r.t. time t, and its
discrete-time counterpart

x_i(t + 1) = x_i(t) (Wx(t))_i / x(t)′Wx(t),  i = 1, …, n.  (4)

It is readily seen that the simplex S_n is invariant under these
dynamics, which means that every trajectory starting in S_n
will remain in S_n for all future times. Moreover, it turns out
that their stationary points, i.e., the points satisfying ẋ_i(t) = 0
for (3) or x_i(t + 1) = x_i(t) for (4), coincide and are the
solutions of the equations:

x_i [ (Wx)_i − x′Wx ] = 0,  i = 1, …, n.  (5)
A stationary point x is said to be asymptotically stable if every
solution to (3) or (4) which starts close enough to x
converges to x as t → ∞.
Both (3) and (4) are called replicator equations in
theoretical biology since they are used to model evolution
over time of relative frequencies of interacting, self-replicating
entities [23]. The discrete-time dynamical equations
turn out to be a special case of a general class of
dynamical systems introduced by Baum and Eagon [5] in
the context of the theory of Markov chains. They also
represent an instance of the original Rosenfeld-Hummel-
Zucker relaxation labeling algorithm [56], whose dynamical
properties have recently been clarified [47] (specifically, it
corresponds to the 1-object, n-label case).
We are now interested in the dynamical properties of
replicator equations; it is these properties that will allow us
to solve our original tree matching problem.
Theorem 3. If W = W′, then the function x(t)′Wx(t) is strictly
increasing with increasing t along any nonstationary trajectory
x(t) under both continuous-time (3) and discrete-time (4)
replicator dynamics. Furthermore, any such trajectory converges
to a stationary point. Finally, a vector x ∈ S_n is
asymptotically stable under (3) and (4) if and only if x is a
strict local maximizer of x′Wx on S_n.
The previous result is known in mathematical biology as
the fundamental theorem of natural selection [13], [23], [63]
and, in its original form, traces back to Fisher [15]. As far as
the discrete-time model is concerned, it can be regarded as a
straightforward implication of the more general Baum-
Eagon theorem [5]. The fact that all trajectories of the
replicator dynamics converge to a stationary point has been
proven more recently [33], [35].
Fig. 4. A coloring of shocks into four types [28]. A 1-shock derives from a protrusion and traces out a curve segment of adjacent 1-shocks, along
which the radius function varies monotonically. A 2-shock arises at a neck, where the radius function attains a strict local minimum, and is
immediately followed by two 1-shocks flowing away from it in opposite directions. 3-shocks correspond to an annihilation into a curve segment due to
a bend, along which the radius function is constant, and a 4-shock is an annihilation into a point or a seed, where the radius function attains a strict
local maximum. The loci of these shocks give Blum's medial axis.
In light of their dynamical properties, replicator equations
naturally suggest themselves as a simple heuristic for
solving the maximal subtree isomorphism problem. Let T₁ and T₂
be two rooted trees and let A
denote the N-node adjacency matrix of the corresponding
TAG. By letting

W = A + (1/2) I_N,

where I_N is the N × N identity matrix, we know that the
replicator dynamical systems (3) and (4), starting from an
arbitrary initial state, will iteratively maximize the function
f̂ defined in (2) over S_N and will eventually converge with
probability 1 to a strict local maximizer which, by virtue of
Theorem 2, will then correspond to the characteristic vector
of a maximal clique in the association graph. As stated in
Theorem 1, this will in turn induce a maximal subtree
isomorphism between T₁ and T₂.
Clearly, in theory, there is no guarantee that the
converged solution will be a global maximizer of f^ and,
therefore, that it will induce a maximum isomorphism
between the two original trees. Previous experimental work
done on the maximum clique problem [10], [46], and also
the results presented in this paper, however, suggest that
the basins of attraction of optimal or near-optimal solutions
are quite large and, very frequently, the algorithm converges
to one of them, despite its inherent inability to
escape from local optima.
Since the process cannot leave the boundary of S_N, it is
customary to start out the relaxation process from some
interior point, a common choice being the barycenter of S_N,
i.e., the vector (1/N, …, 1/N)′. This prevents the search from
being initially biased in favor of any particular solution.
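A minimal sketch of this procedure, assuming the TAG adjacency matrix A is
given as input: the discrete-time replicator dynamics (4) are applied to
W = A + (1/2)I_N starting from the barycenter, and the support of the limit
vector is read off as a maximal clique. The stopping tolerance and the
support threshold are illustrative choices of ours, not prescribed by the text.

import numpy as np

def replicator_clique(A, tol=1e-12, max_iter=10000):
    n = A.shape[0]
    W = A + 0.5 * np.eye(n)
    x = np.full(n, 1.0 / n)          # barycenter of the simplex
    for _ in range(max_iter):
        y = x * (W @ x)              # numerator of (4); y.sum() = x'Wx
        y /= y.sum()
        if np.abs(y - x).sum() < tol:
            break
        x = y
    # Vertices retaining non-negligible mass form a maximal clique.
    return x, [i for i in range(n) if x[i] > 1.0 / (2 * n)]

A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
x, clique = replicator_clique(A)
print(clique)                        # [0, 1, 2]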
5 AN EXAMPLE: MATCHING SHOCK TREES
We now illustrate our framework for matching hierarchical
structures with numerical examples of shape matching.
Because of the subtleties associated with generating random
trees of relevance to applications in computer vision and
pattern recognition, we use a class of example trees derived
from a real system. Our representation for shape is based on
an abstraction of the shocks (or singularities) of a curve
evolution process, acting on a simple closed curve in the
plane, into a shock tree. We begin by providing some
background on the representation (for details see [28], [62])
and then present experimental results on matching shock
trees. In Section 6, we extend the framework to incorporate
attributes associated with shock tree nodes.
5.1 The Shock Tree
In [27], [28], the following evolution equation was proposed
for visual shape analysis:

∂C(p, t)/∂t = (1 + ακ) N(p, t).  (6)

Here, C(p, t) is the vector of curve coordinates, N(p, t) is the
inward normal, p is the curve parameter, and t is the
evolutionary time of the deformation. The constant α ≥ 0
controls the regularizing effects of curvature κ. When α is
large, the equation becomes a geometric heat equation;
when α = 0, the equation is hyperbolic and shocks [30], or
entropy-satisfying singularities, can form. In the latter case
the locus of points through which the shocks migrate is
related to Blum's grassfire transformation [12], [28],
although significantly more information is available via a
"coloring" of these positions. Four types can arise, according
to the local variation of the radius function along the
medial axis (Fig. 4). Intuitively, the radius function varies
monotonically at a type 1, reaches a strict local minimum at
a type 2, is constant at a type 3, and reaches a strict local
maximum at a type 4. The classification of shock positions
according to their colors and an enumeration of the possible
local neighborhoods around each shock type is at the heart
of the representation.
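For intuition only, the following rough sketch (ours; it assumes the
reconstructed evolution ∂C/∂t = (1 + ακ)N above and uses crude
finite-difference formulas) performs one evolution step for a closed
polygonal curve.

import numpy as np

def evolve_step(P, alpha=0.1, dt=0.01):
    # P: (n, 2) array of points on a closed curve, counterclockwise.
    fwd = np.roll(P, -1, axis=0) - P
    bwd = P - np.roll(P, 1, axis=0)
    tangent = fwd + bwd
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    # Inward normal of a counterclockwise curve: tangent rotated by +90 degrees.
    normal = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)
    # Unnormalized discrete curvature estimate (scaled by the point spacing).
    second = np.roll(P, -1, axis=0) - 2 * P + np.roll(P, 1, axis=0)
    kappa = (second * normal).sum(axis=1)
    return P + dt * (1 + alpha * kappa[:, None]) * normal

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(np.linalg.norm(evolve_step(circle)[0]))  # < 1: the circle moves inward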
Shocks of the same type that form a connected
component are grouped together to comprise the nodes of
a shock graph, with the 1-shock groups separated at branch-points
of the skeleton. Directed edges in the graph are
Fig. 5. Two illustrative examples of the shocks obtained from curve evolution (from [62]). Left: The notation associated with the locus of shock points
is of the form shock_type-identifier. Right: The corresponding trees have the shock_type on each node, with the identifier adjacent. The last shocks
to form during the curve evolution process appear under a root node labeled #.
placed between shock groups that touch one another such
that each parent node contains no shocks that formed prior
to any shocks in a child node. This corresponds to a
"reversal" in time of the curve evolution process to obtain a
hierarchy of connected components. The graph is rooted at
a unique vertex #, the children of which are the last shock
groups to form, e.g., the palms of the hand silhouettes in
Fig. 5. (The letters "a" and "b" denote different sides of the
same shock group). A key property of the shock graph is
that its topological structure is highly constrained because
the events that govern the birth, combination, and death of
shock groups are completely characterized by a shock
grammar, with a small number of rewrite rules [62]. In
particular, each shock graph can be reduced to a unique
rooted shock tree.
5.2 Experimental Results
We now illustrate the power of the hierarchical structure
matching algorithm on shock trees. Whereas each node has
geometric attributes, related to properties of the shocks in
each group, for now we shall consider only the shock tree
topologies. (We shall consider the geometry shortly.) We
selected 25 silhouettes representing eight different object
classes (Table 1, first column); the tool shapes were taken
from the Rutgers Tools database. Each shape was then
matched against all entries in the database. Fig. 6 shows the
maximal subtree isomorphisms found by the algorithm for
three examples. The top eight matches for each query shape,
along with the associated scores, are shown in Table 1. The
scores indicate the average of n/n₁ and n/n₂, where n is the
size of the maximal clique found, and n₁ and n₂ are the
numbers of nodes in the two trees. We observed that, in all our
TABLE 1: A Tabulation of the Top Eight Topological Matches for Each Query
The scores indicate the average of the fraction of nodes matched in each of the two trees (see text). Note that only the topology of the shock trees
was used; the addition of geometric information permits finer discrimination (compare with Table 2).
experiments, the maximal cliques found were also maximum
cliques. This is due to the property that global
maximizers of the objective function typically have large
basins of attraction, as also shown experimentally in [46],
[49], [10]. The matching algorithm generally takes only two
to three seconds to converge on a Sparc 10.
Note that, despite the fact that metric/label information
associated with nodes in the shock trees was discounted
altogether, all exemplars in the same class as the query
shape are typically within the top five matches, illustrating
the potential of a topological matching process for indexing
into a database of shapes. Nevertheless, there exist a few
matches which appear to be counterintuitive, e.g., the matches
[(Row 1, Column 3); (Row 2, Column 3); (Row 13, Column ...);
(Row 14, Column 4); (Row 21, Column 2); and (Row 21,
Column 3)]. These correspond to shapes with similar shock-
tree topologies (hierarchies of parts), but drastically
different shock geometries (part shapes). In the following
section, we extend our framework to incorporate the latter
geometric information contained in each shock sequence
(the location, time of formation, speed, and direction of each
shock) as attributes on the nodes. We show that this leads to
better discrimination between shapes than that provided by
shock tree topologies alone.
Fig. 6. Maximal subtree isomorphisms found for three illustrative
examples. The shock-based descriptions of the hand silhouettes are
shown in Fig. 5; the shock-trees for the other silhouettes were computed
in a similar fashion.
6 ATTRIBUTED TREE MATCHING AND WEIGHTED TAGS
In many computer vision and pattern recognition applica-
tions, the trees being matched have nodes with an
associated vector of symbolic and/or numeric attributes.
In this section, we show how the proposed framework can
naturally be extended for solving attributed tree matching
problems.
6.1 Attributed Tree Matching as Weighted Clique
Search
Formally, an attributed tree is a triple T = (V, E, α), where
(V, E) is the "underlying" rooted tree and α is a function
which assigns an attribute vector α(u) to each node u ∈ V. It
is clear that, in matching two attributed trees, our objective
is to find an isomorphism which pairs nodes having
"similar" attributes. To this end, let σ be any similarity
measure on the attribute space, i.e., any (symmetric)
function which assigns a positive number to any pair of
attribute vectors. If φ: H₁ → H₂ is a subtree isomorphism
between two attributed trees T₁ = (V₁, E₁, α₁) and T₂ = (V₂, E₂, α₂),
the overall similarity between the induced
subtrees T₁(H₁) and T₂(H₂) can be defined as follows:

S(φ) = Σ_{u ∈ H₁} σ(α₁(u), α₂(φ(u))).  (7)
The isomorphism φ is called a maximal similarity subtree
isomorphism if there is no other subtree isomorphism
φ′: H₁′ → H₂′ such that H₁ is a strict subset of H₁′ and
S(φ) < S(φ′). It is called a maximum similarity subtree
isomorphism if S(φ) is largest among all subtree isomorphisms
between T₁ and T₂.
The weighted TAG of two attributed trees T₁ and T₂ is the
weighted graph G = (V, E, ω), where V and E are defined
as in Definition 2 and ω is a function which assigns a
positive weight to each node (u, v) ∈ V = V₁ × V₂ as follows:

ω(u, v) = σ(α₁(u), α₂(v)).  (8)

Given a subset of nodes C of V, the total weight assigned to
C is simply the sum of all the weights associated with its
nodes. A maximal weight clique in G is one which is not
contained in any other clique having larger total weight,
while a maximum weight clique is a clique having largest
total weight. The maximum weight clique problem is to find
a maximum weight clique of G [9]. Note that the
unweighted version of the maximum clique problem arises
as a special case when all the nodes are assigned a constant
weight.
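The following schematic sketch is our own; the similarity function sigma and
the edge predicate tag_edge (which would implement Definition 2, not repeated
here) are assumed inputs. It shows how a weighted TAG could be assembled
from two attributed trees.

import itertools

def build_weighted_tag(V1, V2, alpha1, alpha2, sigma, tag_edge):
    # Every pair (u, v) in V1 x V2 is a node carrying weight omega(u, v).
    nodes = list(itertools.product(V1, V2))
    omega = {(u, v): sigma(alpha1[u], alpha2[v]) for u, v in nodes}
    edges = {(p, q) for p, q in itertools.combinations(nodes, 2) if tag_edge(p, q)}
    return nodes, edges, omega

def total_weight(C, omega):
    # Total weight of a subset C of TAG nodes, as defined above.
    return sum(omega[p] for p in C)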
The following result, which is the weighted counterpart
of Theorem 1, establishes a one-to-one correspondence
between the attributed tree matching problem and the
maximum weight clique problem (the proof is essentially
identical to that of Theorem 1).
Theorem 4. Any maximal (maximum) similarity subtree
isomorphism between two attributed trees induces a maximal
weight clique in the corresponding weighted TAG,
and vice versa.
6.2 Matching Attributed Trees
Recently, the Motzkin-Straus formulation of the maximum
clique problem has been extended to the weighted case [19].
Let G = (V, E, ω) be an arbitrary weighted graph of order n.
The (weighted) characteristic vector of any subset of nodes C ⊆ V,
denoted x^c, is defined as follows:

x^c_i = ω(u_i)/Ω(C) if u_i ∈ C, and x^c_i = 0 otherwise,

where Ω(C) = Σ_{u_j ∈ C} ω(u_j) is the total weight on C.
Now, consider the following class of n × n symmetric
matrices:

M(G) = { B = B′ ∈ ℝ^{n×n} : b_ii ≥ 1/(2ω(u_i)) for all i, b_ij ≥ b_ii + b_jj if i ≠ j and (u_i, u_j) ∉ E, b_ij = 0 if (u_i, u_j) ∈ E },
and the quadratic function

g(x) = x′Bx,

with B ∈ M(G). The following theorem, which is the
weighted counterpart of Theorem 2, expands on a recent
result by Gibbons et al. [19], which in turn generalizes the
Motzkin-Straus theorem to the weighted case (see [8], [11]
for a proof).
Theorem 5. Let G = (V, E, ω) be an arbitrary weighted graph
and let B ∈ M(G). Then, the following hold:
1. A vector x ∈ S_n is a local minimizer of g on S_n if
and only if x = x^c, where C is a maximal weight
clique of G.
2. A vector x ∈ S_n is a global minimizer of g on S_n if
and only if x = x^c, where C is a maximum weight
clique of G.
3. All local (and hence global) minimizers of g on S_n are
strict.
In contrast to the Gibbons et al. formulation [19], which is
plagued by the presence of spurious solutions, as is the
original Motzkin-Straus problem [11], the previous result
guarantees that all minimizers of g on S_n are strict and are
characteristic vectors of maximal/maximum weight cliques
in the graph. Note that the class M(G) is isomorphic to the
positive orthant in n + |Ē| dimensions, where |Ē| is the number
of edges in the complement of G. This class is a polyhedral pointed
cone with its apex given by the following matrix, which is
the one used in the experiments described below:

b_ij = 1/(2ω(u_i)) if i = j; b_ij = b_ii + b_jj if i ≠ j and (u_i, u_j) ∉ E; b_ij = 0 otherwise.
Having formulated the maximum weight clique problem
as a quadratic program over the standard simplex,
we can use the replicator equations to approximately
solve it. However, note that replicator equations are
maximization procedures, while ours is a minimization
problem. It is straightforward to see that the problem of
minimizing the quadratic form x′Bx on the simplex is
equivalent to that of maximizing x′(γee′ − B)x, where
e = (1, …, 1)′ and γ is an arbitrary constant.¹
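A sketch of this conversion, under the reconstructed form of the apex matrix
of M(G) given above (which is an assumption; all function names are ours):

import numpy as np

def apex_matrix(adjacency, omega):
    # b_ii = 1/(2*omega_i); b_ij = b_ii + b_jj for non-adjacent i != j;
    # b_ij = 0 for adjacent pairs.
    d = 1.0 / (2.0 * np.asarray(omega, dtype=float))
    B = np.add.outer(d, d)
    B[adjacency.astype(bool)] = 0.0
    np.fill_diagonal(B, d)
    return B

def weighted_replicator(adjacency, omega, iters=5000):
    B = apex_matrix(adjacency, omega)
    gamma = B.max() + 1.0            # any gamma > max b_ij keeps W nonnegative
    W = gamma - B                    # maximize x'(gamma ee' - B)x
    x = np.full(len(omega), 1.0 / len(omega))
    for _ in range(iters):
        x = x * (W @ x)
        x /= x.sum()
    return x                         # approximates a characteristic vector x^C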
Let T₁ and T₂ be two attributed trees, G = (V, E, ω)
be the corresponding weighted TAG, and define

W = γee′ − B,

where B = (b_ij) is any matrix in the class M(G) and
γ > max_ij b_ij.

1. Note that the conversion of the minimization problem to a
maximization problem is driven by a purely algorithmic (as opposed to
conceptual) issue. One could minimize x′Bx on the simplex using
alternative optimization techniques [29], [53]. However, since we are
matching trees and not graphs, we expect the difference in performance to
be marginal.

From the fundamental theorem of natural
selection (Theorem 3), we know that the replicator
dynamical systems (3) and (4) will find local maximizers
of the function x′Wx (and, hence, minimizers of x′Bx) over
the standard simplex and, by virtue of Theorem 5, these will
correspond to the characteristic vectors of maximal weight
cliques in the weighted TAG. As stated in Theorem 4, these
will in turn induce maximal similarity subtree isomorphisms
between T₁ and T₂. As in the unweighted case, there is
no theoretical guarantee that the solutions found will be the
globally optimal ones, but the experiments reported in [11]
on the maximum weight clique problem suggest that, here
too, the attraction basins of global maximizers are quite
large. This observation is also confirmed by the experiments
reported below, each of which typically took 5 to 10 seconds
to run on a Sparc 10.
6.3 Experimental Results
We now provide examples of weighted shock tree match-
ing, using the geometric attributes associated with shock
tree nodes. The vector of attributes assigned to each node
of the attributed shock tree T ??V;E;? is given by
is the
number of shocks in the group, and xi;yi;ri;vi;i are,
respectively, the x coordinate, the y coordinate, the radius
(or time of formation), the speed, and the direction of each
shock i in the sequence, obtained as outputs of the shock
detection process [60], [61]. In order to apply our frame-
work, we must define a similarity measure between the
attributes of two nodes u and v.
The similarity measure we use is a linear combination of
four terms, incorporating the differences in lengths, radii,
velocities, and curvature of the two shock sequences,
respectively. Each term is normalized to provide a unitless
quantity so that these different geometric properties can be
combined. Let u contain m shocks and v contain n shocks
and, without loss of generality, assume that m ≤ n. The
Euclidean length of each sequence of shocks is given by:

ℓ = Σ_{i=1..n−1} sqrt( (x_{i+1} − x_i)² + (y_{i+1} − y_i)² ).

Let φ(i) = ⌈i·n/m⌉ be the many-to-one mapping of each
index i ∈ {1, …, m} to an index j ∈ {1, …, n}. The similarity
measure between the two attribute vectors used in our
experiments is defined as:
σ(α(u), α(v)) = λ_l [ ... ] + λ_r { 1 − [ (1/m) Σ_{i=1..m} ( r(i) − r̃(φ(i)) )² ]^{1/2} } + λ_v { 1 − [ (1/m) Σ_{i=1..m} ( v(i) − ṽ(φ(i)) )² ]^{1/2} } + λ_θ [ ... ],  (9)

where λ_l, λ_r, λ_v, λ_θ are nonnegative constants summing to 1,
r̃ and ṽ denote the radii and speeds of the shocks of v, and Δθ is
the change in orientation at each shock. The measure provides a
number between 0 and 1, which represents the overall similarity
between the geometric attributes of the two nodes being compared.
The measure is designed to be invariant under rotations and
translations of two shapes and to satisfy the requirements of the
weight function discussed in Section 6.1.

We repeated the earlier experiments, but now with weights
w(u, v) = σ(α(u), α(v)) placed on each node (u, v) of a weighted
TAG. The weights were defined by (9), with λ_l, λ_r, λ_v, λ_θ set to
0.25, 0.4, 0.2, and 0.15, respectively. We verified that the overall
results were not sensitive to the precise choice of these parameters
or to slight variations in the similarity measure such as the use of
a different norm. We ranked the results using a score given by a
quantity in which n is the size of the maximal weight clique found,
W its weight, M₁ and M₂ the total mass associated with the nodes of
each tree, and m(u), m(v) the masses of nodes u and v, respectively.
The score represents the weight of the maximal clique scaled by the
average of the total (relative) mass of nodes in each tree that
participates in the match. As before, the top eight matches are
shown for each query shape, in Table 2. It is evident that, for
almost all queries, performance improves with the incorporation of
geometric attributes, with better overall discrimination between the
shapes (compare with Table 1). Nonetheless, there is an inherent
trade-off between geometry and topology, e.g., observe that the
short, fat screwdriver in row 15 scores better with the fatter hand
shapes than with the thin, elongated screwdrivers.

TABLE 2: A Tabulation of the Top Eight Attributed Topological Matches
for Each Query. The scores indicate the weight of the maximal clique
multiplied by the average of the total relative mass of nodes in each
tree matched (see text). The addition of geometric information permits
finer discrimination between the shapes (compare with Table 1).

We note that the qualitative results, specifically, the partial
ordering of matches in each row of Table 2, compare favorably with
those obtained using an alternate approach described in [62]. The
latter approach has been applied successfully to the same database of
shapes used here and is a sequential (level by level) approach with
backtracking. At each step, an eigenvalue labeling of the adjacency
matrix of the subtrees rooted at the two nodes being matched is used.
In other words, the cost of matching two nodes incorporates not only
their geometries, but a measure of the similarity between their
subtrees, which is global. Furthermore, the algorithm tolerates noise
by allowing for jumps between levels. Hence, strictly speaking, it
solves an approximation to the subtree isomorphism problem. For these
reasons, a one-to-one comparison between the two algorithms is not
appropriate: they address different problems, and the maximal clique
formulation invokes much weaker constraints. In [4], we present an
extension of the maximal clique formulation to the case of many-to-one
correspondences, which is of particular interest in computer vision and
pattern recognition applications where the trees being matched are
noisy, and vertices are deleted or added.

7 CONCLUSIONS

We have developed a formal approach for matching hierarchical
structures by constructing an association graph whose maximal cliques
are in one-to-one correspondence with maximal subtree isomorphisms.
The framework is general and can be applied in a variety of computer
vision and pattern recognition domains: We have demonstrated its
potential for shape matching. The formulation allows us to cast the
tree-matching problem as an indefinite quadratic program owing to the
Motzkin-Straus theorem. The solution is found by using a dynamical
system, which makes it amenable to hardware implementation and offers
the advantage of biological plausibility. In particular, these
relaxation equations are related to putative neuronal implementations
[38], [39]. We have also extended the framework to the problem of
matching hierarchical structures with attributes. The attributes
result in weights being placed on the nodes of the association graph
and a conversion of the maximum clique problem to a maximum weight
clique problem. An extension of the proposed framework to problems
involving many-to-one correspondences is presented in [4].

Characterizing the complexity of our approach appears to be difficult
since it involves the simulation of a dynamical system. However, we
have observed experimentally that the basins of attraction of the
global maximizers are large, both in the unweighted and weighted
cases, and that the system converges quickly when applied to
shock-tree matching. Conversely, whereas polynomial time algorithms
exist for the maximum common subtree problem [37], [54], [17], to our
knowledge no such algorithm exists for the case of weighted tree
matching. This provides further justification for our framework.

APPENDIX

Before presenting the proof of Lemma 1, we need some preliminary
remarks and definitions. First, note that path-strings cannot be
arbitrary strings of −1s and +1s. Since trees do not have cycles, in
fact, once we go down one level along a path (i.e., we make a "+1"
move), we cannot return to the parent. This is formally stated by
saying that if str(u, v) = s₁s₂…s_n is the path-string between any
two nodes u and v, then s_i = +1 implies s_j = +1 for all j ≥ i.

Now, we define the path-pair of any two nodes u and v in a tree as
pair(u, v) = (n, p), where n is the number of negative components in
str(u, v) and p is the number of positive components in str(u, v). It
is clear from the previous observation that path-pairs and
path-strings are equivalent concepts. In fact, we have:
str(u, v) = str(w, z) if and only if pair(u, v) = pair(w, z). Second,
note that if a node w is on the path between any two nodes u and v in
a rooted tree, then str(u, v) can be obtained by concatenating
str(u, w) and str(w, v). This implies that

pair(u, v) = pair(u, w) + pair(w, v),
where "+" denotes the usual sum between vectors. In a
sense, then, path-pairs allow us to do ?arithmetic? on path-
strings, a fact which will be technically useful in the sequel.
(The full algebraic structure will not be needed here.)
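As a small illustration of this arithmetic (our own sketch; the
child-to-parent dictionary is an assumed encoding of a rooted tree), the
following computes path-pairs directly. Moving toward the root contributes a
−1, moving away from it a +1.

def ancestors(parent, u):
    chain = [u]
    while parent[u] is not None:
        u = parent[u]
        chain.append(u)
    return chain                     # u, parent(u), ..., root

def path_pair(parent, u, v):
    au, av = ancestors(parent, u), ancestors(parent, v)
    common = set(au) & set(av)
    w = next(x for x in au if x in common)   # lowest common ancestor
    return (au.index(w), av.index(w))        # (negative moves, positive moves)

parent = {"r": None, "a": "r", "b": "r", "c": "a", "d": "a"}
print(path_pair(parent, "c", "d"))   # (1, 1): up to a, down to d
print(path_pair(parent, "c", "b"))   # (2, 1): up to r, down to b
# Concatenation: pair(c, b) = pair(c, a) + pair(a, b), componentwise.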
We are now in a position to prove Lemma 1. For
convenience, we repeat its statement below.
Lemma 1. Let u₁, v₁, w₁, z₁ ∈ V₁ and u₂, v₂, w₂, z₂ ∈ V₂ be
distinct nodes of rooted trees T₁ = (V₁, E₁) and T₂ = (V₂, E₂),
and suppose that the following conditions hold:
1. w₁ is on the u₁v₁-path, and w₂ is on the u₂v₂-path
2. str(u₁, w₁) = str(u₂, w₂)
3. str(w₁, v₁) = str(w₂, v₂)
4. str(u₁, z₁) = str(u₂, z₂)
5. str(v₁, z₁) = str(v₂, z₂)
Then, str(w₁, z₁) = str(w₂, z₂).
Fig. 7. An illustration of the cases arising in the proof of Lemma 1.
Proof. First, note that, from conditions 1-3, we also have
6. str(u₁, v₁) = str(u₂, v₂).
We shall prove that pair(w₁, z₁) = pair(w₂, z₂), which is,
of course, equivalent to the thesis of the lemma. The
proof consists of enumerating all possible cases and
exploiting the previous observation that, when w is on a
uv-path, then pair(u, v) = pair(u, w) + pair(w, v). Before
doing so, however, we need an auxiliary notation. Let u
be a node of a rooted tree T = (V, E). The set of nodes
belonging to the subtree rooted at u will be denoted by
V(u). This can be formally defined as follows:
V(u) = {v ∈ V : u is on the path from the root of T to v}.
Note that u ∈ V(u).
We shall enumerate all the possible ways in which u1,
v1, w1, and z1 can relate to one another in T1, but it is clear
from conditions 1-6 that each such configuration induces
a perfectly symmetric situation in T2 and vice versa.
Therefore, from now on, we shall use the index i = 1, 2 to
simplify the discussion. Technically, this means that we
are assuming something about one tree and, because of
our hypotheses, the same situation arises in the other.
The enumeration of all possible cases starts from the
observation that, given two different subtrees of a tree,
either one is a strict subset of the other or they are
disjoint (otherwise, in fact, there will be cycles in the
graph). Therefore, considering the subtrees rooted at
nodes u_i and v_i (i = 1, 2), only the following cases can
arise:
1. V(v_i) ⊂ V(u_i)
2. V(u_i) ⊂ V(v_i)
3. V(u_i) ∩ V(v_i) = ∅
The first two are symmetric and, therefore, we shall
only consider Case 1. In this case, since w_i is on the u_iv_i-
path, we have V(v_i) ⊆ V(w_i) ⊆ V(u_i), i = 1, 2. Clearly, only four
subcases are possible (cf. Fig. 7a), that is:
1.1
1.2
1.3
1.4
Let us consider Case 1.1. In this case, the v_i's are on the
w_iz_i-paths and, therefore:
pair(w_i, z_i) = pair(w_i, v_i) + pair(v_i, z_i), i = 1, 2,
and the thesis follows from conditions 3 and 5.
In Case 1.2, we have that the w_i's are on the u_iz_i-paths
and, hence, pair(u_i, z_i) = pair(u_i, w_i) + pair(w_i, z_i),
i = 1, 2. Therefore, we have:
pair(w_i, z_i) = pair(u_i, z_i) − pair(u_i, w_i), i = 1, 2,
and the thesis follows from conditions 2 and 4.
Case 1.3 is similar to Case 1.2. In this case, we have that
w_i is on the z_iv_i-path and, therefore
pair(z_i, v_i) = pair(z_i, w_i) + pair(w_i, v_i), i = 1, 2.
Finally, Case 1.4 is similar to Case 1.1, since u_i is on the
path joining w_i and z_i.
Let us now consider Case 3, i.e., V(u_i) ∩ V(v_i) = ∅,
i = 1, 2. Here, the situation is slightly more complicated.
Since w_i is on the u_iv_i-path and this has the form
of a run of −1s followed by a run of +1s, we have the following three
subcases:
3.1 V(u_i) ⊆ V(w_i) and V(w_i) ∩ V(v_i) = ∅
3.2 V(v_i) ⊆ V(w_i) and V(w_i) ∩ V(u_i) = ∅
3.3 V(v_i) ⊆ V(w_i) and V(u_i) ⊆ V(w_i)
Cases 3.1 and 3.2 are symmetric, so we shall only
consider Cases 3.1 and 3.3.
Let us start with Case 3.1. Four possible subcases arise
(Fig. 7b):
3.1.1
3.1.2
3.1.3 z_i ∉ V(w_i) and z_i ∈ V(v_i)
3.1.4
All these cases are similar to the previous ones and the
corresponding proofs are therefore analogous.
Finally, let us consider Case 3.3. Again, four possible
subcases arise (Fig. 7c):
3.3.1.
3.3.2.
3.3.3.
3.3.4.
Cases 3.3.1, 3.3.2, and 3.3.4 are similar to those seen
before and, hence, we omit the corresponding proofs. As
to Case 3.3.3, we note that w_i must necessarily be either
on the path joining u_i and z_i or on that joining v_i and z_i
(or on both), otherwise the graph would have a cycle. So,
the proof in this case is analogous to the previous ones,
and this concludes the proof of the lemma. □
ACKNOWLEDGMENTS
This work was done while Marcello Pelillo was visiting the
Center for Computational Vision and Control at Yale
University. It was supported by Consiglio Nazionale delle
Ricerche (Italy), NSERC, FCAR, NSF, and AFOSR. We
thank the reviewers for their helpful comments and Sven
Dickinson for the use of shapes from the Rutgers Tools
database.
--R
Computer Vision.
An Introduction to Population Genetics Theory.
The Genetical Theory of Natural Selection.
Computers and Intractability: A Guide to the Theory of NP-Completeness.
Graph Theory.
The Theory of Evolution and Dynamical Systems.
--TR
--CTR
Karsten Hartelius , Jens Michael Carstensen, Bayesian Grid Matching, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.2, p.162-173, February
Brijnesh J. Jain , Fritz Wysotzki, Automorphism Partitioning with Neural Networks, Neural Processing Letters, v.17 n.2, p.205-215, April
Philip N. Klein , Thomas B. Sebastian , Benjamin B. Kimia, Shape matching using edit-distance: an implementation, Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms, p.781-790, January 07-09, 2001, Washington, D.C., United States
Gunilla Borgefors , Giuliana Ramella , Gabriella Sanniti di Baja, Hierarchical Decomposition of Multiscale Skeletons, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.11, p.1296-1312, November 2001
Andrea Torsello , Edwin R. Hancock, Computing approximate tree edit distance using relaxation labeling, Pattern Recognition Letters, v.24 n.8, p.1089-1097, May
Graphical models for graph matching: approximate models and optimal algorithms, Pattern Recognition Letters, v.26 n.3, p.339-346, February 2005
Marco Carcassoni , Edwin R. Hancock, Correspondence Matching with Modal Clusters, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.12, p.1609-1615, December
Horst Bunke , Simon Gnter, Weighted mean of a pair of graphs, Computing, v.67 n.3, p.209-224, November 2001
Alessio Massaro , Marcello Pelillo, Matching graphs by pivoting, Pattern Recognition Letters, v.24 n.8, p.1099-1106, May
Evgeny B. Krissinel , Kim Henrick, Common subgraph isomorphism detection by backtracking search, SoftwarePractice & Experience, v.34 n.6, p.591-607, May 2004
Simon Gnter , Horst Bunke, Self-organizing map for clustering in the graph domain, Pattern Recognition Letters, v.23 n.4, p.405-417, February 2002
Chris Ding , Xiaofeng He , Hanchuan Peng, Finding cliques in protein interaction networks via transitive closure of a weighted graph, Proceedings of the 5th international workshop on Bioinformatics, August 21-21, 2005, Chicago, Illinois
Dmitriy Bespalov , Ali Shokoufandeh , William C. Regli , Wei Sun, Scale-space representation of 3D models and topological matching, Proceedings of the eighth ACM symposium on Solid modeling and applications, June 16-20, 2003, Seattle, Washington, USA
Andrea Torsello , Dzena Hidovic-Rowe , Marcello Pelillo, Polynomial-Time Metrics for Attributed Trees, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.7, p.1087-1099, July 2005
S. Tabbone , L. Wendling , J.-P. Salmon, A new shape descriptor defined on the radon transform, Computer Vision and Image Understanding, v.102 n.1, p.42-51, April 2006
Marcello Pelillo, Matching Free Trees, Maximal Cliques, and Monotone Game Dynamics, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.11, p.1535-1541, November 2002
Duck Hoon Kim , Il Dong Yun , Sang Uk Lee, A comparative study on attributed relational gra matching algorithms for perceptual 3-D shape descriptor in MPEG-7, Proceedings of the 12th annual ACM international conference on Multimedia, October 10-16, 2004, New York, NY, USA
Jiang , Andreas Munger , Horst Bunke, On Median Graphs: Properties, Algorithms, and Applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.10, p.1144-1151, October 2001
Josep Llados , Enric Mart , Juan Jose Villanueva, Symbol Recognition by Error-Tolerant Subgraph Matching between Region Adjacency Graphs, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.10, p.1137-1143, October 2001
Davi Geiger , Tyng-Luh Liu , Robert V. Kohn, Representation and Self-Similarity of Shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.1, p.86-99, January
Michel Neuhaus , Horst Bunke, Edit distance-based kernel functions for structural pattern classification, Pattern Recognition, v.39 n.10, p.1852-1863, October, 2006
Peter J. Giblin , Benjamin B. Kimia, On the Intrinsic Reconstruction of Shape from Its Symmetries, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.7, p.895-911, July
Andrea Torsello , Edwin R. Hancock, A skeletal measure of 2D shape similarity, Computer Vision and Image Understanding, v.95 n.1, p.1-29, July 2004
M. Fatih Demirci , Ali Shokoufandeh , Yakov Keselman , Lars Bretzner , Sven Dickinson, Object Recognition as Many-to-Many Feature Matching, International Journal of Computer Vision, v.69 n.2, p.203-222, August 2006
Luca Lombardi , Alfredo Petrosino, Distributed recursive learning for shape recognition through multiscale trees, Image and Vision Computing, v.25 n.2, p.240-247, February, 2007
Thomas B. Sebastian , Philip N. Klein , Benjamin B. Kimia, Recognition of Shapes by Editing Their Shock Graphs, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.5, p.550-571, May 2004
Bin Luo , Edwin R. Hancock, Structural Graph Matching Using the EM Algorithm and Singular Value Decomposition, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.10, p.1120-1136, October 2001
Andrea Torsello , Antonio Robles-Kelly , Edwin R. Hancock, Discovering Shape Classes using Tree Edit-Distance and Pairwise Clustering, International Journal of Computer Vision, v.72 n.3, p.259-285, May 2007
Thomas B. Sebastian , Benjamin B. Kimia, Curves vs. skeletons in object recognition, Signal Processing, v.85 n.2, p.247-263, February 2005
Arjan Kuijper , Ole Fogh Olsen , Peter Giblin , Mads Nielsen, Alternative 2D Shape Representations using the Symmetry Set, Journal of Mathematical Imaging and Vision, v.26 n.1-2, p.127-147, November 2006
Ali Shokoufandeh , Diego Macrini , Sven Dickinson , Kaleem Siddiqi , Steven W. Zucker, Indexing Hierarchical Structures Using Graph Spectra, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.7, p.1125-1140, July 2005
Ali Shokoufandeh , Lars Bretzner , Diego Macrini , M. Fatih Demirci , Clas Jnsson , Sven Dickinson, The representation and matching of categorical shape, Computer Vision and Image Understanding, v.103 n.2, p.139-154, August 2006
Stefano Berretti , Alberto Del Bimbo , Enrico Vicario, Efficient Matching and Indexing of Graph Models in Content-Based Retrieval, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.10, p.1089-1105, October 2001
Yakov Keselman , Sven Dickinson, Generic Model Abstraction from Examples, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.7, p.1141-1156, July 2005
Brijnesh J. Jain , Fritz Wysotzki, Central Clustering of Attributed Graphs, Machine Learning, v.56 n.1-3, p.169-207
Marcello Pelillo, Replicator Equations, Maximal Cliques, and Graph Isomorphism, Neural Computation, v.11 n.8, p.1933-1955, November 1999
Michal A. van Wyk , Tariq S. Durrani , Barend J. van Wyk, A RKHS Interpolator-Based Graph Matching Algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.7, p.988-995, July 2002
Marcello Pelillo , Andrea Torsello, Payoff-Monotonic Game Dynamics and the Maximum Clique Problem, Neural Computation, v.18 n.5, p.1215-1258, May 2006 | shape recognition;replicator dynamical systems;maximal subtree isomorphisms;shock trees;association graphs;maximal cliques |
323223 | Analytical Modeling of Set-Associative Cache Behavior. | Cache behavior is complex and inherently unstable, yet it is a critical factor affecting program performance. A method of evaluating cache performance is required, both to give quantitative predictions of miss-ratio and information to guide optimization of cache use. Traditional cache simulation gives accurate predictions of miss-ratio, but little to direct optimization. Also, the simulation time is usually far greater than the program execution time. Several analytical models have been developed, but concentrate mainly on direct-mapped caches, often for specific types of algorithm, or to give qualitative predictions. In this work, novel analytical models of cache phenomena are presented, applicable to numerical codes consisting mostly of array operations in looping constructs. Set-associative caches are considered, through an extensive hierarchy of cache reuse and interference effects, including numerous forms of temporal and spatial locality. Models of each effect are given which, when combined, predict the overall miss-ratio. An advantage is that the models also indicate sources of cache interference. The accuracy of the models is validated through example program fragments. The predicted miss-ratios are compared with simulations and shown typically to be within 15 percent. The evaluation time of the models is shown to be independent of the problem size, generally several orders of magnitude faster than simulation. | Introduction
Cache performance is one of the most critical factors affecting the performance of
software, and with memory latency continuing to increase in respect to processor
clock speeds utilizing the cache to its full potential is more and more essential.
Yet cache behavior is extremely difficult to analyze, reflecting its unstable nature
in which small program modifications can lead to disproportionate changes in
cache miss ratio [2, 12]. A method of evaluating cache performance is required,
both to give quantitative predictions of miss ratio, and information to guide
optimization of cache use.
Traditionally cache performance evaluation has mostly used simulation, emulating
the cache effect of every memory access through software. Although the
results will be accurate, the time needed to obtain them is prohibitive, typically
many times greater than the total execution time of the program being
simulated [13]. Another possibility is to measure the number of cache misses in-
curred, using the performance monitoring features of modern microprocessors.
This can also give accurate results, and in reasonable time, but introduces the
restriction that only cache architectures for which actual hardware is available
can be evaluated.
To try and overcome these problems several analytical models of cache behavior
have been developed. One such technique is to extract parameters from
an address trace and combine them with parameters defining the cache to derive
a model of cache behavior [1]. This method is able to accurately predict the
general trends in behavior, but lacks the fine detail that is needed to model the
instability noted above. Analytical models combined with heuristics have also
been used to guide optimizing compilers in their choice of source code transformations
[14, 4, 10]. The models developed however are usually unsuitable
for more general performance evaluation, since they often aim for qualitative,
rather than quantitative, predictions. Another area in which analytical models
have been employed has been in studying the cache performance of particular
types of algorithm, especially in the analysis of blocked algorithms [9, 3, 5].
Attempts have been made at creating general purpose models that are both
accurate and expressive, with some success [12, 6, 7], but in all cases limited
to describing direct-mapped caches. In this work we present novel analytical
techniques for predicting the cache performance of a large class of loop nest-
ings, for the general case of set-associative caches (with direct-mapped
caches arising as the special case of associativity one). All forms of cache reuse and interference are
considered leading to accurate, yet rapidly evaluated, models. These benefits
and others are demonstrated through the examination of several example code
fragments. The work has a wide range of possible applications, from aiding
software development to on the fly performance prediction and management.
We also plan to integrate the model with an existing system for analyzing the
performance of parallel systems [11].
The paper is organized as follows: the next section outlines the problem
being addressed, and the classification of the cache phenomena being modeled.
Section 3 describes in detail how the effect of array references on the cache is
represented, and how this representation can be efficiently computed. In Sections
4, 5, and 6, the different types of cache reuse are considered, in terms of
the representation developed in Section 3. Finally, Section 7 presents experimental
data showing how the models compare with simulation, followed by a
discussion of these results and our conclusions in Sections 8 and 9.
DO j1 = 0, N1 - 1
  DO j2 = 0, N2 - 1
    ...
      DO jn = 0, Nn - 1
        (loop body)
      ENDDO
    ...
  ENDDO
ENDDO
Figure 1: General form of considered loop constructs
2 Overview of methodology
2.1 Concepts
The models presented in this work consider the cache behavior of array references
accessed by regular looping constructs. The general form of a loop nesting
is shown in Figure 1; the loops are numbered from 1 to n, outer-to-innermost
respectively, and are assumed to be normalized such that they count upward
from zero in steps of one. The number of iterations performed by a loop at
a level k is labeled N k , and the variable used to index arrays by this loop is
labeled j k .
Array references considered are of the form:

X(α₁ j_{γ₁} + β₁, α₂ j_{γ₂} + β₂, …, α_m j_{γ_m} + β_m),

where X is the name of an array, m is the number of dimensions of this array,
and α_k, β_k, and γ_k are constants (with 1 ≤ γ_k ≤ n).
Such array references can be rearranged into the form of a linear expression,
giving the address of the element accessed for a particular combination of values
of j₁, …, j_n, the general form being

B + A₁j₁ + A₂j₂ + ⋯ + A_nj_n,

where B and A₁, …, A_n are constants. The base address of the array and the β_k
values combine to form B; the A_k values are derived from the loop multipliers
α_k and the dimensions of the array. Without loss of generality we assume that
array indices are in the Fortran style, and that all values are in terms of array
elements.
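As an illustration (our sketch; the function name and calling convention are
assumptions), the linear form can be computed from the array dimensions and
the subscript constants as follows, assuming 1-based, column-major (Fortran)
storage with addresses in array elements.

def linear_form(base, dims, subscripts, n_loops):
    # subscripts: one (alpha, beta, gamma) triple per array dimension.
    A = [0] * (n_loops + 1)          # A[1..n]; index 0 unused
    B = base
    stride = 1
    for (alpha, beta, gamma), dim in zip(subscripts, dims):
        A[gamma] += alpha * stride
        B += (beta - 1) * stride     # 1-based subscripts
        stride *= dim
    return A, B

# Example: X(j2 + 1, j1 + 1) on a 100 x 100 array based at element 0 gives
# address 0 + 100*j1 + 1*j2.
print(linear_form(0, [100, 100], [(1, 1, 2), (1, 1, 1)], 2))  # ([0, 100, 1], 0)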
The concept of an iteration space is also important. The loop bounds
N₁, …, N_n represent the full n-dimensional iteration space of the array reference
being examined. By limiting the range of certain loops the iteration space
can also be restricted. For example by only allowing j 1 to have the value 0, only
a single iteration of the outermost loop is specified. When modeling cache behavior
this restriction of iterations is a natural way to consider some problems.
However, in this work we will only need to restrict the upper bound of loops;
for a loop k this can be handled by "binding" a temporary value to N_k.
Given two array references R₁ and R₂, if their linear forms are identical
except for the constant terms B₁ and B₂, they are said to be in translation.
This means that the
access patterns of both references are identical, but offset by |B₁ − B₂| elements
in the address space. References in translation with one another are said to be
in the same translation group.
2.2 Evaluation strategy
The function of the cache is to provide data reuse, that is, to enable memory
regions that have recently been accessed to be subsequently accessed with a
much smaller latency. Two basic forms of reuse exist: self dependence reuse
in which an array reference repeatedly accesses the same elements of an array,
and group dependence reuse in which elements are repeatedly accessed that
were most recently used by a different array reference.
When considering an array reference R, its reuse dependence ~ R is defined
as the reference from which its reuse arises. When it is a self dependence,
conversely, when R 6= ~ R it is a group dependence. Since it is possible for
more than two references to access the same data elements, when identifying
dependences ~ R is defined as the reference with the smallest reuse distance from
R, related to the number of loop iterations occurring between ~ R accessing an
element, and R reusing it.
Unlike the most well known system for classifying cache misses, the "three
C's'' model (compulsory, capacity and conflict misses [8]), the method presented
by this paper uses a two part model. Compulsory misses are defined as before,
but capacity and conflict misses are considered a single category-interference
misses-since the underlying cause is the same, reusable data being ejected
from the cache.
To predict the number of cache misses suffered by an array reference interference
is divided into several types, dependent on their source. Self interference
occurs when the reference obstructs its own reuse, internal cross interference
occurs due to references in the same translation group, and external cross interference
is caused by references in different translation groups. Sections 5
and 6 describe how these effects are modeled for self and group dependences
respectively.
A distinction is also made between the temporal and spatial components of
a reference's miss ratio. The spatial miss ratio is defined as the average number
of cache misses needed to load a single cache line of data; all three types of
interference contribute to this ratio, and are modeled in Section 4. The spatial
miss ratio is applied to the predicted number of temporal misses to give the
total number of cache misses for a reference. Repeating this procedure for all
references in the loop nesting gives the prediction for the entire code fragment.
3 Modeling cache footprints
A common requirement when modeling interference is to identify the effect on
the cache of accessing all references in a single translation group, for a specified
Figure 2: Examples of cache layout and set overlap ψ. (a) Cache
representation (a grid C/La sets wide and a lines deep); (b) example of ψ
(line data, with the value of ψ for each set).
iteration space. Once the effect on the cache is known, it can be used to predict
the effect on the reuse of the references being examined, and from this the
resulting number of cache misses can be predicted.
A cache of size C, with L elements in a line, and associativity a can be
considered as a rectangular grid, C/La cache lines wide and a lines deep, as
shown in Figure 2(a). Each vertical column represents a single "set" of the cache,
each containing a lines. A mapping function determines which set a memory
element is stored in; the usual mapping function, which this paper examines,
is simply x mod (C/La), where x is the address of the element in question.
The line in the set that is actually used depends on the replacement strategy
employed. In this paper only the "least-recently-used" strategy is considered,
replacing the line in the set that was last accessed the earliest.
Given this view of the cache, the effect of a translation group on the cache
can also be visualized. For each of the C/La sets a certain number of the lines
contain data accessed by the translation group. This number can be thought of
as the "overlap" of each set, and is labeled ψ. Figure 2(b) shows an example for
a small 4-way set-associative cache (a = 4). Data elements loaded into
the cache are darkly shaded, with the value of ψ for each set shown underneath.
To identify interference, cache footprints such as this are compared with the
footprints of the data being reused, either set-by-set or statistically as a whole.
The method of detecting interference is simple: it occurs wherever the combined
overlap of the footprints is greater than the level of associativity.
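A direct enumerative sketch of this representation (ours; it groups addresses
into lines first, which is an assumption about the granularity of the
mapping): ψ is computed per set by counting distinct lines, and interference
is flagged wherever the combined overlap exceeds a.

def set_overlap(addresses, C, L, a):
    # psi[s] = number of distinct lines of the footprint mapping to set s.
    sets = C // (L * a)
    lines_per_set = [set() for _ in range(sets)]
    for x in addresses:              # x in array elements
        line = x // L
        lines_per_set[line % sets].add(line)
    return [len(s) for s in lines_per_set]

def interference_sets(psi1, psi2, a):
    # Sets where the combined overlap of two footprints exceeds associativity.
    return [s for s, (p, q) in enumerate(zip(psi1, psi2)) if p + q > a]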
The model represents each cache footprint as a sequence of regions, each
region having a constant value of ψ, the overlap. As well as ψ, two other
parameters define each region: the position of its first element (i.e. a value between
zero and (C/a) − 1), and the number of elements in the region. Considering the
example footprint in Figure 2(b), it is clear that it is defined by a sequence of
regions of the form (start, size, ψ).
In the rest of this section of the paper, we show how footprints of this form can
be calculated efficiently for individual translation groups.
Figure 3: Example of mapping array footprint regions into the cache
(cache sets vs. time; showing areas of size σ_{k−1} and σ_{k+1}, array
footprint regions, and cache data).
3.1 Finding the cache footprint of a single reference
An accurate method of mapping a regular data footprint into a direct-mapped
cache has previously been presented in detail [7, 12]. As such we only consider
the problem briefly, extending the method given in [7] (which is descended
from [12]) to set-associative caches.
Given an array reference we wish to find the cache footprint of the data
it accesses for a particular iteration space, the form of which is defined by
the values N₁, …, N_n. The structure of these array elements is defined by
the reference itself, the array dimensions, and the iteration space. For the
majority of array references encountered, the array footprint can be expressed
using four parameters: the address of the first element φ_t, the number of
elements in each contiguous region S_t, the distance between the start of two
such regions σ_t, and finally the number of regions N_t.
After identifying these four parameters, the array footprint they describe is
mapped into the cache to give the cache footprint of the reference in question.
The cache footprint is defined by parameters similar to those describing the
array footprint: the interval between regions σ, the number of regions N, and
the position of the first 1 region φ. Two parameters define the structure of the
data elements in each region: the level of overlap ψ, as defined in Section 3, and
S, the number of elements in the region divided by the overlap. Considering
Figure 2, ψ and S can be thought of as the average "height" and "width" of
each region in the footprint.
To find the parameters defining the cache footprint we use a recursive
method of dividing the cache into regular areas. At each level of recursion k,
areas of size σ_k are mapped into a single area of size σ_(k-1), illustrated in Figure
3 for part of a cache. A recurrence relation defines the sequence of σ_k values,
representing how the array footprint regions map into the cache:
σ_0 = C/a,  σ_1 = σ_t mod σ_0,  σ_(k+1) = σ_(k-1) mod σ_k. (2)
The sequence is truncated at a level s, where either all N_t regions map into the
cache without overlapping, or overlapping occurs between regions. To detect
1 The "first" region is not the one nearest cache set zero, but the first region in the sequence
of N regions; the sequence may cross the cache boundary.
overlap from either end of an area of size σ_(k-1) a value σ̃_k is introduced, the
smallest distance between two regions in the area. If σ̃_k < S_t, overlapping occurs
on level k, where
σ̃_k = min(σ_k, σ_(k-1) mod σ_k).
At level s, the cache has been divided into σ_0/σ_(s-1) areas of size σ_(s-1); in each
there are a certain number of footprint regions of size S_t, each a distance σ̃_s from
the previous. There are r areas that contain n_s + 1 regions, the remaining
areas containing n_s.
In the simplest case, when s = 1, i.e. the array footprint didn't wrap around
the end of the cache (no overlapping), the cache footprint parameters are simply
those of the array footprint. In the general case, when s > 1, the distance
between each area and the total number of areas can be calculated. The
position of the first region can also be found.
The overlap of a single area is found by dividing the total number of elements
in it by the distance from the start of the first region to the end of the last;
for an area containing n regions this distance is (n - 1)σ̃_s + S_t. The
average level of overlap ψ is found by combining the overlap for both types of
area (5). 2
To find S for a single area, the size of the "gaps" between regions is subtracted
from the distance from the start of the first region to the end of the last
region. As when finding ψ, the values for both types of area are combined.
The function f_i(x) gives the value of S for an area containing x regions, each σ̃_s
from the previous.
3.2 Combining individual cache footprints
Using the techniques presented in the previous section, the cache footprint
of a reference for a defined iteration space can be identified. This gives the
information necessary to predict how that reference interacts with the footprints
of other references, thus allowing interference to be detected.
Generally, however, there are more than two references in a loop nesting, and
therefore interference on a reference can originate from more than one source.
2 Note that x
As well as modeling the interference from each reference in the loop nesting, it
is also important that interference on a single element only be counted once.
Simply comparing the cache footprint of every reference in the loop with the
footprint of the reference being examined will not meet this requirement.
As noted in Section 2.1, it is possible to classify the array references in a
loop nesting into translation groups: all members of a group have exactly the
same access pattern, the only difference being that the patterns are offset from
one another in the address space. This allows the references in a translation
group to be combined into a single cache footprint; it is this meta-footprint
that is used to identify interference.
The problem can be stated as follows: given q references in translation,
R_1, ..., R_q, it is necessary to find the cumulative cache footprint of these references,
assuming that the array footprint of the references is defined by the parameters
S_t, σ_t, and N_t, and the values φ_t^1, ..., φ_t^q. The combined cache footprint
is defined as a sequence of regions defined by triples (φ, S, ψ): the position of
the region, the size in elements, and the level of overlap, as shown in (1).
3.2.1 Finding the one-dimensional footprint
Examining the calculations in Section 3.1 shows that the only parameter of
the cache footprint depending on φ_t is φ, the position of the first region, defined
as φ = φ_t mod (C/a). It follows therefore that all references R_1, ..., R_q
share the same cache footprint, but with individual values of φ: φ_i = φ_t^i mod (C/a).
This property is easy to visualize: form a cylinder from the rectangular
representation of the cache in Figure 2(a), such that the first and last sets are
adjacent to one another. The surface area of the cylinder represents the cache.
If we project a cache footprint onto the cylinder such that it starts at the
first element of the cache, then by rotating the footprint φ_t positions 3
around the circumference of the cylinder we have the actual cache footprint.
This simplifies the problem of finding the combined cache footprint: instead of
computing q footprints and merging them, it is only necessary to compute one
footprint, then consider rotated copies.
Generating the position of every footprint region. From the definition
of φ given above, the start and end points of each region in the cache footprint of
each reference can be enumerated. The region starting positions for reference R_i
are defined by the series
[φ_i + kσ] mod (C/a), k = 0, ..., N - 1, (6)
and the position of the end of each region by
[φ_i + kσ + S] mod (C/a), k = 0, ..., N - 1.
3 Or rather φ_t mod (C/a), since the circumference of the cylinder is C/a.
One possible method of merging all q footprints would be to enumerate the
start and end positions of each reference, and then sort them into smallest-
first order. Fortunately, there is a much more efficient method. Each rotated
footprint can only cross the boundary between cache position C/a and position
zero once. This allows the start and end positions of each region to be generated
in numerical order, by generating the points after the cache boundary, followed
by the points before the cache boundary.
First the starting points of each region in the footprint of reference i are considered.
The first region (when counting from zero) to start after position C/a
is region number ⌈((C/a) - φ_i)/σ⌉. The list of starting positions in (6) can now
be split in two at this index and recombined, so that the list of positions is in
ascending order, assuming that the ++ operator concatenates two lists.
A similar method can be used to generate the end points of each region in
the footprint of reference R_i: the first end point after cache position C/a is
identified, and the list of end points in numerical order follows from the same
split and concatenation.
Merging the q footprints. Given q lists of region start positions, and q lists
of end positions as defined in the previous section, it is straightforward to construct
a new list of regions, such that no two regions overlap. The end product
of this process is a sequence of triples, each of the form (φ, S, v). The
two values φ and S define the position and size of the region; v is a bit-vector
such that if v_i = 1 the region is a subset of reference i's individual footprint.
It can be seen that ψ is a consequence of v, since the level of overlapping in a
region is directly related to the references being accessed in that region.
The merging process is straightforward since the lists of region boundaries
are known to be in ascending numerical order. A working value of v is maintained,
initially set to reflect the references whose footprints wrap around from
the end to the start of the cache. While there are elements left in any of the
2q lists, the list with the smallest first element is found. This element is deleted,
and a footprint region is created from the previously found point to the current
point, with the current value of v. Assuming that the list refers to reference R_k:
if it is the list of start points then v_k is set to one, otherwise it is set to zero.
3.2.2 Finding the cumulative overlap of a region
After merging the references' footprints as in the previous section the structure
of the translation group's cache footprint is almost complete. Instead of the
(φ, S, ψ) representation that is required, it is in the form (φ, S, v); the
problem, then, is to calculate ψ given the vector v.
The average level of overlap of a reference's cache footprint has already been
calculated as ψ, in (5). Using the same logic as in Section 3.2.1, all references
in the translation group must have the same value of ψ.
A natural method of finding the region's overlap is to simply multiply ψ by the number of bits
in v that are set, i.e. the number of references in the region. On considering how
caches work, it can be seen that this method is only guaranteed to work when
no two references access the same array. If two or more references do access the
same array, there is the possibility of an intersection between
the two sets of array elements accessed. If such an intersection occurs, these
elements will only be stored in the cache once, not twice as predicted if we take
2ψ as the overlap of the two references combined.
This means that the amount of sharing between any two references
must be examined. We define this by a ratio, ranging from zero, if they have no
elements in common, to one, when all elements are accessed by both references.
This ratio, sharing(R_x, R_y) for two references R_x and R_y, is calculated from the
array footprint of the translation group: the parameters S_t, σ_t, N_t, and φ_t
defined in Section 3.1.
Calculating sharing(R_x, R_y). The definition of sharing(R_x, R_y) consists of
two expressions: the degree of sharing between the two array footprints when
considered as two contiguous regions, and the degree of sharing between the
individual regions inside the footprints. The distinction between these two
concepts is shown in Figure 4 for the two references R_x and R_y, first as single
regions, then as a sequence of regions.
Considering the footprints as two single regions (Figure 4(a)), it can be seen
that the distance between the two regions is |φ_t^x - φ_t^y|; subtracting this value
from the total extent of the region, N_t σ_t, gives the total number of shared elements.
Hence the ratio of shared elements is (N_t σ_t - |φ_t^x - φ_t^y|) / (N_t σ_t).
The level of sharing between two regions of the footprint (Figure 4(b)) is
found in a similar manner. The distance between two possibly overlapping
regions is |φ_t^x - φ_t^y| mod σ_t; since overlapping could occur in either direction,
the smallest possible distance between overlapping regions δ is defined as
δ = min(|φ_t^x - φ_t^y| mod σ_t, σ_t - (|φ_t^x - φ_t^y| mod σ_t)).
If δ >= S_t then there is no sharing, otherwise S_t - δ elements are shared between
the two regions. Then the ratio defining the level of sharing between the two
regions is (S_t - δ) / S_t.
Multiplying the two sharing ratios, that for the footprints as a whole and
that for two regions, gives the overall ratio of shared elements between the two
footprints, i.e.
sharing(R_x, R_y) = ((N_t σ_t - |φ_t^x - φ_t^y|) / (N_t σ_t)) × ((S_t - δ) / S_t). (7)
Figure 4: Array footprint sharing: (a) footprints as single regions; (b) footprints as multiple regions.
Finding ψ* of a region. The sharing(R_x, R_y) function defined in (7) allows
the combined level of overlap between two references to be found. For example,
if ψ_(Rx∪Ry) is the level of overlap occurring when R_x and R_y access the same
region of the cache, ψ_Rx and ψ_Ry are the overlaps of the individual references,
and ψ_(Rx∩Ry) is the overlap shared between R_x and R_y, then
ψ_(Rx∪Ry) = ψ_Rx + ψ_Ry - ψ_(Rx∩Ry) = 2ψ - sharing(R_x, R_y)ψ. (8)
The second line of this equation follows since only references in translation are
merged in this way, and the intersection is directly related to how many elements
the two references share (as an average across the entire cache).
To find ψ*, the average level of overlap across all references {R_1, ..., R_n}, it
is necessary to extend the union operator shown above to include an arbitrary
number of references. Considering (8), it is evident that there is a similarity
between finding the combined overlap and the number of elements in a union
of sets. That is, (8) is analogous to
|A ∪ B| = |A| + |B| - |A ∩ B|. (9)
The general form of this expression for the number of elements in a union is
|A_1 ∪ ... ∪ A_n| = Σ_i |A_i| - Σ_(i<j) |A_i ∩ A_j| + ... + (-1)^(n+1) |A_1 ∩ ... ∩ A_n|,
where each Σ stands for the summation of all i-element combinations
of A_1, ..., A_n. The expression analogous to ψ_(R1∪...∪Rn), in exactly
the same way that (9) is analogous to (8), is therefore
ψ_(R1∪...∪Rn) = Σ_i ψ_Ri - Σ_(i<j) ψ_(Ri∩Rj) + ... + (-1)^(n+1) ψ_(R1∩...∩Rn). (10)
It is still necessary to define the average overlap of an intersection between an
arbitrary number of references. A two-reference intersection was shown in (8);
this can be extended to an arbitrary number of references,
ψ_(R1∩...∩Rn) = ψ ∏ sharing(R_i, R_j),
where ∏ stands for the product over all two-element combinations
of R_i and R_j.
Now it is possible to find ψ*, the average overlap of a cache footprint region
containing references defined by the vector v, by computing (10) for the references
included in the region, i.e. the set {R_i | v_i = 1}.
3.2.3 Notes on optimizing the calculation of ψ*
The method shown in the previous paragraphs is obviously highly combinatorial
in nature. When the bit vector v contains n ones, the number of multiplications
required grows rapidly, making computing ψ* slow for relatively small values of n.
Since one of the main reasons for using analytical methods is their increased
speed, this is clearly undesirable. Fortunately two straightforward modifications
push the combinatorial barrier back some distance.
Firstly, the value of ψ* does not have to be completely evaluated at the
boundary of each footprint region. Considering the identity
ψ_(X∪R) = ψ_X + ψ_R - ψ_(X∩R)
shows that ψ* can be adaptively calculated from the previous region's value
when a single reference enters or leaves the union. This approximately halves
the number of multiplications required.
Secondly, since one of the constraints of the model is that an array may
not overlap any other arrays, there can be no sharing of data elements between
references accessing different arrays. This means that only a subset of vector v
need be examined when computing ψ*: those entries where v_i = 1 and reference
R_i accesses the same array as that accessed by the array reference whose
state changed at the region boundary. Depending upon the distribution of array
references to arrays, this modification can decrease the complexity of the
calculation by orders of magnitude.
4 Modeling spatial interference
As noted in Section 2, the temporal and spatial cache effects of an array reference
are modeled separately. Spatial reuse occurs when more than one element in a
cache line is accessed before the data is ejected from the cache. For a reference R
the innermost loop on which spatial reuse may occur is labeled l_s.
The spatial miss ratio of a reference, labeled M_s, is defined such that multiplying
it by the predicted number of temporal misses suffered by a reference
predicts the actual number of cache misses occurring. This ratio encapsulates
all spatial effects on the reference, and is found by combining four more specific
miss ratios: the compulsory miss ratio C_s, the self interference miss ratio S_s,
the internal cross interference miss ratio I_s, and finally the external cross interference
miss ratio E_s.
The value of C_s for a particular reference follows directly from the array
footprint of the reference defined over all loops. It is the ratio between
the number of cache lines in each footprint region and the number of referenced
elements within each region.
When studying the level of interference affecting a spatial reuse dependence
it is necessary to examine what happens between each iteration of loop l_s.
Figure 5 illustrates this for self interference. The left hand side of the figure
shows a square matrix Y being accessed by the array reference Y(2j_1, ...); on the
right is shown how this maps into the cache, both over time and for a complete
iteration of loop j_1 (assuming a 4-way associative cache). The elements that
may interfere with Y(6, 0) reusing the data loaded into the cache by Y(4, 0) are
shaded. The three types of spatial interference are considered in the following
sections.
4.1 Calculating spatial self interference
As shown in Figure 5, the reference being modeled can obstruct its own spatial
reuse; this happens when the number of data elements accessed on a single iteration
of loop l_s that map to a particular set in the cache is greater than the level
of associativity. To analyze this mapping process the recurrence shown in (2)
is used, but with slightly different array footprint parameters. The distance
between each footprint region σ_t is defined by the distance between elements
accessed on successive iterations of loop l_s (see Figure 5), and the size of each
Figure 5: Example of spatial reuse from Y(2j_1, ...).
footprint region is defined as the size of a cache line L to ensure that interference
between lines is detected.
As in Section 3.1, the result of the mapping process is that the cache is
divided into σ_0/σ_(s-1) areas of size σ_(s-1), each with a certain number of footprint
regions, each a distance σ̃_s from the previous. There are r areas that contain
n_s + 1 regions, and the remaining areas contain n_s (Section 3.1).
By examining each of the two types of area separately, calculating the value
of S_s in each, and combining the two values, it is possible to predict the overall
level of self interference, where the function f_s(x) gives the probability that an
element in an area of size σ_(s-1), containing x elements, does not suffer from
spatial interference.
It is immediately possible to identify two special cases:
1. if σ̃_s = 0, all elements in the area occupy the same cache set; if the
number of elements x is greater than the level of associativity, interference
occurs;
2. if there is only one element per set and no overflow between neighboring
areas, then reuse must be total.
In the general case the solution is not so straightforward, the main complication
being the possibility that the distance from the first to the last element in the
area (i.e. xσ̃_s) is greater than the size of the area itself, and therefore the
elements "wrap around" the end of the area, possibly interfering with those at
the start.
To handle this a hybrid analytical-simulation technique is used: each of the
x elements in the area has L different positions in a cache line where it might
occur; each position is analyzed for whether reuse can occur or not, leading to
the overall probability of reuse for that element. Repeating for the other x - 1
elements, and combining all the individual probabilities, gives the value of f_s(x).
For an element y from 0, ..., x - 1, it is possible to list the positions in the
cache of the elements surrounding it: those preceding it, before(y), and those
following it, after(y), each a multiple of the stride from y (the sign of the
stride depends on the sign of σ_s), where the stride of a reference is the distance
between elements accessed on successive iterations of the spatial reuse loop l_s.
The essence of the problem is now as follows. From points(y), deduce the
number of points that occur in the cache line-sized region z, ..., z + L - 1, given
that the points wrap around to zero at position σ_(s-1). A generalized form of
the series defined above allows the number of points within such an interval,
including the wrap-around effect, to be counted directly.
Thus to find the total number of elements within a particular cache-line sized
interval, the above expression is evaluated for both before(y) and after(y), and
the two counts are added.
If this value, the number of elements in a particular line, is greater than the
level of associativity a, then self interference occurs; by averaging over the L
possible positions for the start of a line containing the element y, the probability
of reuse can be found. By repeating this process for the x - 1 other elements in
the area the overall probability, and hence S_s, can be calculated.
4.2 Internal spatial cross interference
As well as being caused by the reference itself, spatial interference may also arise
due to the other references in the same translation group. When the number
of data elements mapping to a particular cache set, on a single iteration of
loop l_s, is greater than the level of associativity a, interference will occur. This
phenomenon is often referred to as "ping-pong" interference, and may affect
performance massively, since it is possible for all spatial reuse by the reference
to be prevented.
When considering a reference R, ping-pong interference is detected by calculating
the cache footprint of all references in the translation group, for a single
iteration of the spatial reuse loop (i.e. let N_(l_s) = 1). Considering only the
regions where ψ > a: if any are less than L elements from the position of
the first element accessed by R, i.e. φ_R mod (C/a), then ping-pong interference
occurs.
Assuming that the closest footprint region before φ_R mod (C/a) is δ_b positions
away, and the closest region after is δ_a positions away, then the miss
ratio due to internal interference I_s is defined in terms of δ_b, δ_a, and the line
size L.
4.3 External spatial interference
After considering the interference from the reference's own translation group,
interference from the other translation groups (external interference) must be
modeled. Each group is examined in turn, the overall miss ratio due to external
interference being the sum of each group's individual external interference
ratio.
For a reference R, with spatial reuse on loop l_s, the probability P_R that
accessing a random data element will find an element in a set containing data
spatially reused by R is defined by
P_R = (N_R S_R) / (C/a),
where N_R and S_R are the number and size of regions in reference R's cache
footprint on loop l_s respectively (see Section 3.1).
Restricting the iteration space to a single iteration of loop l_s (i.e. let N_(l_s) =
1), the cache footprint of each translation group (of which R is not a member)
is examined. By counting the number of elements in these footprints that could
cause spatial interference on R, and multiplying by P_R, a prediction of the
number of misses is made.
If the average level of overlap for the translation group containing R is ψ_R,
and the footprint of each other translation group is represented by a sequence
of (φ, S, ψ) triples, then an individual footprint region can possibly interfere
with R only if ψ_R + ψ > a. When interference does occur, the number of
cache misses for that set can not be greater than the actual number of elements
in the set. This leads to the definition of the following function giving the "miss
overlap":
ψ-miss(ψ, ψ_R) = min(ψ, max(0, ψ + ψ_R - a)).
Mapping this function, multiplied by the size of each region, over the cache
footprint of each translation group gives the total number of elements accessed
by the group that might cause a cache miss 4 . Multiplying this value by P_R, and
dividing by the total number of iterations made by loop l_s, gives the external
miss ratio for a single translation group G,
(P_R / N_(l_s)) Σ (S × ψ-miss(ψ, ψ_R)),
where Σ stands for the summation across all of the translation
group's cache footprint regions (φ, S, ψ). Summing over all
groups G, such that R ∉ G, the overall value of E_s is found.
5 The cache behavior of a self dependence
As noted in Section 2, a self dependence occurs when an array reference accesses
particular data elements more than once. This happens when one or more of
the loop variables are not used by the reference. For example, the array
reference A(0, j_1) does not use j_2, and therefore all iterations of loop 2 access
exactly the same set of elements, namely {A(0, j_1)}. The innermost
loop on which reuse occurs is defined as loop l, the innermost loop whose
variable is not used by the reference.
In theory, each time loop l is entered the first iteration would load the referenced
elements into the cache, and subsequent iterations reuse them. That the
first iteration of loop l must load the elements gives the number of compulsory
misses (13): the spatial miss ratio, multiplied by the number of times loop l is
entered, multiplied by the number of unique elements referenced.
But the cache capacity is limited: it may not be possible to hold all elements
referenced by loop l in the cache at once. This factor is not only dependent on
whether the size of the cache is greater than the number of elements; as with
spatial reuse, the accessed elements may map into the cache in such a way as
to prevent reuse. Although using a cache with high associativity can prevent
4 When the referenced array is significantly smaller than the number of sets in the cache,
only footprint regions that actually overlap with the array are considered.
interference in certain cases, as the number of elements accessed increases the
problem may return.
5.1 Self interference
Self interference on a reference is modeled by mapping the array footprint of the
elements accessed by a single iteration of loop l into the cache, removing those
elements that fall in sets with overlap greater than the level of associativity.
Subtracting the number of elements left from the original number of elements
gives the number of cache misses per iteration.
We use the same mapping process as shown in Section 3.1, with one important
modification: the function f_i(x) is replaced by f_r(x) (and the way in which
ψ is calculated is changed to reflect this). Whereas f_i(x) gave the number of
sets that could interfere in an area containing x regions, f_r(x) gives the number
that can be reused, i.e. those where ψ <= a. Given f_r(x) the number of reusable
elements in the footprint follows as NSψ, and therefore the total number of
cache misses due to self interference is the number of times loop l is entered,
multiplied by the number of cache misses on each iteration (excluding the first
iteration, which is handled by the compulsory miss calculation shown in (13)).
The definition of function f_r(x) uses a similar method to that shown in Section
4.1 for calculating spatial self interference. The structure of the cache section
being examined was described in Section 3.1: an area of size σ_(s-1) containing
x regions of size S_t, each at an interval σ̃_s from the previous. The first region
is located at the beginning of the area, and the regions wrap around the end of
the area (i.e. the position in the area of region k is actually (kσ̃_s) mod σ_(s-1)).
associativity, i.e. where no interference occurs. For a single position z in the
area, the level of overlap (i.e. the number of regions crossing this point) is
given by the number of regions beginning before this point minus the number
of regions ending before it. To include the wrapping effect this expression is
summed over all possible "wrap arounds" in which a region appears, i.e.,
oe
\Upsilon
min
~
oe s
where z
A possible definition f r (x) would be to test every position in the area, i.e.
count the number of times that overlap(z) - a.
Fortunately there is a more efficient method: since there are only x footprint
regions, the value of overlap(z) can only change a maximum of 2x times (at
the start and end of each region). Using a similar method to when finding
the one-dimensional footprint of a translation group (see Section 3.2.1), these
2x positions are enumerated in ascending order, and the atomic regions they
define are examined.
Finally, the definition of ψ in (5) includes positions in the area where reuse
cannot occur (since it is still relevant when calculating interference). However,
when looking at the reuse of a footprint it is necessary for ψ to be the average
overlap of the positions in the footprint where reuse does occur. This can be
calculated while computing the value of f_r(x).
5.2 Internal cross interference
After examining the level of self interference on a self dependent reference, the
cache footprint of the data not subject to self interference is known, characterized
by the parameters S, σ, N and ψ. It is still uncertain whether or not these
regions of the cache can be reused, since data accessed by the other array references
in the loop nesting may map to the same cache sets, possibly preventing
reuse.
Interference from other references in the same translation group is considered
first. The cache footprint of these references is identified (using the techniques
shown in Section 3) and then compared region by region with the footprint of
the data not subject to self interference. Interference can only occur wherever
the two footprints overlap, and only when the combined level of overlap is
greater than the level of associativity, that is, when ψ + ψ' > a. Assuming
that two footprint regions overlap for size positions, then the number of misses
occurring on each iteration of loop l is
size × ψ-miss(ψ, ψ').
The summation of this expression over all sections of the cache where two footprint
regions overlap gives the total number of cache misses on each iteration
of the reuse loop; multiplying by the number of iterations of loop l gives the
total number of misses.
To increase the accuracy of the next stage (predicting the level of external
interference) the values of NS and ψ (the number of reusable positions and
average overlap) are adjusted to take account of internal interference. The
number of reusable positions after considering internal interference, NS', is the
combined size of all regions where interference doesn't occur, and the adjusted
overlap ψ' is the average combined overlap across all these regions.
5.3 External interference
The final source of temporal interference on a self dependence to be considered
is external cross interference. This is interference arising from references in
other translation groups on the reference being examined. Unlike when modeling
internal cross interference, it is not possible to simply compare the two
cache footprints (the reference's possibly reusable data, and the footprint of the
interfering translation group) exactly, because they are not in translation. The
footprints are "moving" through the cache in different ways and hence are not
directly comparable. Instead, a statistical method is used, based on the dimensions
of the two footprints: the total size and the average overlap.
Similarly to when modeling external interference on spatial dependences
(see Section 4.3), each external translation group is considered in turn. The
number of footprint positions that could possibly cause interference is found
by summation over the cache footprint of the group. To find the average number
of cache misses this quantity is multiplied by the size of the reusable footprint
and divided by the number of possible positions,
(NS' / (C/a)) Σ (S × ψ-miss(ψ, ψ')).
This gives the number of misses on each iteration of loop l caused by a particular
translation group. Summing this expression over all external groups and
multiplying by the total number of iterations of loop l gives the actual number
of cache misses due to external interference.
6 Modeling group dependences
A group dependence occurs when an array reference reuses data that was most
recently accessed by another reference in the same translation group. For a
reference R the reference that it is dependent upon is denoted ~ R; Section 2.2
has described how dependences are identified.
The definition of the spatial miss ratio given in Section 4 must be altered
slightly to model group dependences: it must also include any spatial group
reuse occurring. This arises when R is in the same cache line as ~R a certain number
of times per every L elements accessed. If the constant distance between the
two references, B_~R - B_R, is less than the size of a cache line, then this is the
number of times that R must load an element itself per cache line, and the
actual spatial miss ratio is M_s scaled by (B_~R - B_R)/L.
The number of compulsory misses is defined by the number of elements
accessed only by R, not by ~R, multiplied by the spatial miss ratio. Since the
sharing function defined in (7) gives the ratio of elements shared between
R_x and R_y, we have that the number of compulsory misses is the spatial miss
ratio, multiplied by (1 - sharing(R, ~R)), multiplied by the total number of
elements accessed.
For a reference R, the innermost loop on which group reuse occurs is defined
as l_g, determined from the index expressions of the two references over the
m dimensions of the array being accessed. To identify
cross interference on a group dependence it is only necessary to examine the
period between ~R accessing an arbitrary element and R reusing it. This is defined
as δ_g iterations of loop l_g,
with k the innermost dimension of the array where the β_k constants of the two
references differ.
Consider, for example, the case when R accesses A(j_2, j_1) and ~R accesses
A(j_2, j_1 + 2). Here l_g = 1 and δ_g = 2; that is, after ~R accesses element A(j_2, 2),
two iterations of loop 1 pass before R accesses the same element. Interference
occurs if the element has been ejected from the cache during these two iterations.
6.1 Internal interference
Internal cross interference is found by examining the cache footprint of the
translation group of R for the first δ_g iterations of loop l_g, i.e. the iteration
space with N_(l_g) = δ_g. For each region in the footprint
that contains data accessed by R the probability of interference is calculated;
the maximum probability across the whole footprint is then the actual probability
of internal interference. For a footprint region with average overlap ψ*, this
probability P_i(ψ*) is defined so that interference definitely occurs when ψ* > a,
definitely does not occur when ψ* is sufficiently small, and there is a gradient
between these two certainties.
The number of cache misses is defined as the number of elements that could
theoretically be reused (which involves the sharing(R, ~R) ratio), multiplied by
the maximum value of P_i(ψ*) and the spatial miss ratio.
6.2 External interference
When the maximum value of P_i is less than 1, and therefore internal interference
is not total, external cross interference must also be considered. Again the
iteration space is defined as δ_g iterations of loop l_g, but this time the cache
footprints of the translation groups that R is not a member of are examined.
For each such group, the number of cache misses caused is found by counting
the number of positions in its footprint where interference may occur, and applying
the same probabilistic method used when predicting external interference
on a self dependence (see Section 5.3). Assuming that the cache footprint of
the translation group containing R has an average overlap of ψ' in the regions
containing data accessed by R (this can be calculated while finding internal
interference), then a footprint region with overlap ψ may possibly cause interference
if ψ' + ψ > a. The actual number of misses per translation group is
defined as the summation of S × ψ-miss(ψ, 1) over all regions with ψ' + ψ > a,
divided by C/a, and multiplied by (1 - max P_i) and the number of reusable
elements.
7 Example results
To demonstrate the validity and benefits of the techniques described, this section
presents experimental results obtained using an implementation of the model.
Code fragments are expressed in a simple language which allows the details of
the arrays being accessed, the loop structures, and the array references themselves
to be specified. Here three examples typical of nested computations are
shown, chosen for their contrasting characteristics to ensure that all parts of
the cache model are exercised. Each manipulates matrices of double precision
values, arranged in a single contiguous block of memory. They are:
1. A matrix-multiply, consisting of three nested loops, containing four array
references in total. Each reference allows temporal reuse to occur within
one of the loops; one reference may be subject to considerable spatial
interference. The Fortran code is shown in Figure 6(a).
2. A "Stencil" operation, from [10]. This kernel shows group dependence
reuse, and doesn't always access memory sequentially. See Figure 6(b).
3. A two dimensional Jacobi loop, from [2], originally part of an application
that computes permeability in porous media using a finite difference
method. This kernel exhibits large amounts of group dependence reuse,
and contains significantly more array references than the others. The
matrices IVX and IVY contain 32-bit integers. See Figure 6(c).
Each example kernel has been evaluated for a range of cache parameters,
comparing the predicted miss ratio against that given by standard simulation
techniques 5 . The average percentage errors are shown in Table 1.
The results for the six cache configurations (C = 16384, L = 32 and
C = 32768, L = 16, each with a = 1, 2, 4) are shown in
Figure 7 for the three example kernels. Miss ratio and absolute error are plotted
against the width and height of the matrices. Also shown, in Table 2, are the
range of times taken to evaluate each problem on a 167MHz SUN ULTRA-1
workstation, for a single cache configuration.
5 A locally written cache simulator was used that accepts loop descriptions in the same
form that the analytical model uses. It has been validated by comparing its results with Hill's
Dinero III trace-driven simulator [8].
Figure 6: Example kernels: (a) Matrix multiply; (b) Stencil; (c) 2D Jacobi.
8 Discussion
The experimental data presented in the previous section shows that the predictions
made by the model are generally very accurate: the majority of average
errors are within ten percent, with all but three of the fifty-four examples having
average errors of less than fifteen percent. When combined with the increased
speed of prediction we believe that the analytical approach is more practical
than simulation when examining the individual kernels of an application.
One of the motivations for this work was to minimize the time taken when
evaluating a program fragment. As expected the analytical model is much
quicker to compute than a simulation, typically by several orders of magnitude,
even with the smallest problem sizes. As the number of memory references
grows the gulf widens: simulation time increasing proportionally to the number
of accesses, the time needed to evaluate the analytical model staying mostly
constant. The Jacobi example is the slowest to evaluate analytically because
it has eighteen array references to evaluate, compared to Stencil's six and the
matrix multiply's four. Even so, the combinatorial effects that might have been
feared are not a problem.
It is also clear from the miss ratio plots that using set-associative caches
Figure 7: Predicted miss ratios and absolute errors (difference from simulation) for the matrix multiply, Stencil, and 2D Jacobi kernels, for cache configurations C = 16384, L = 32 and C = 32768, L = 16, each with a = 1, 2, 4.
Experiment, a | Average percentage errors
Matrix Multiply, a = 2 | 3.85 7.02 4.07 6.22 4.89 6.12
Matrix Multiply, a = 4 | 2.42 4.89 3.29 3.51 3.90 3.97
Table 1: Average percentage errors of example predictions when compared with
simulated results.
Experiment | Analytical Model (Min. Max. Mean) | Simulation (Min. Max. Mean)
Matrix mult. | 0.00093 ... | ...
Stencil | ... | ...
Table 2: Calculation times for experiments (seconds).
does not avoid the problem of cache interference. Even for a 4-way associative
cache there are still large variations in miss ratio, especially in the Stencil and
Jacobi kernels, i.e. as the number of array references increases. By using
well-known techniques such as padding array dimensions and controlling base
addresses, guided by an analytical model such as the one presented here, the
variations can be reduced to decrease the miss ratio.
A benefit of analytical models that has not yet been mentioned is the
extra information they make available. When trying to
lower the number of cache misses in a program it is important to know both
where and why the cache misses occur. Due to the structure of the method
presented in this paper both requirements can be met simply by examining
the outputs of the component models. For example, with the matrix multiply
kernel we can examine both the miss ratio of each reference (Figure 8(a)), and
the miss ratio due to each type of interference (Figure 8(b)). These show that
the vast majority of the misses are due to reference Y(J,K), and that up to
90 percent of the interference is self interference (in this case spatial self
interference, due to array Y being accessed non-sequentially).
9 Conclusions
A hierarchical method of classifying cache interference has been presented, for
both self and group dependent reuse of data, considering both temporal and
Figure 8: Examining the matrix multiply: (a) reference miss ratios; (b) % miss ratio by type (compulsory, internal, external).
spatial forms. Analytical techniques of modeling each category of interference
have been developed for array references in loop nestings. It has been shown
that these techniques give accurate results, comparable with those found by
simulation, and that they can be implemented such that predictions can be
made at a much faster rate than with simulation. More importantly, the prediction
rate has been shown to be dependent on the number of array references
in the program, rather than the actual number of memory accesses (as with
simulation).
It is envisaged that the benefits of the models-accuracy and speed of
prediction-will allow their use in a wide range of situations, including those
that are impractical with more traditional techniques. An important example
of such a use will be run-time optimization of programs, using analytical models
of the cache behavior of algorithms to drive the optimization process. Areas
that will be addressed in future work include such optimization strategies, as
well as extensions to the model itself. It is also intended to use the techniques
as part of a general purpose performance modeling system [11].
Acknowledgements. This work is funded in part by DARPA contract N66001-
97-C-8530, awarded under the Performance Technology Initiative administered
by NOSC.
References
[1] An analytical cache model.
[2] Skewed associativity improves program performance and enhances predictability.
[3] Tile size selection using cache organisation and data layout.
[4] Automatic cache performance prediction in a parallelizing compiler.
[5] Influence of cross-interferences on blocked loops: A case study with matrix-vector multiply.
[6] Cache miss equations: An analytical representation of cache misses.
[7] Predicting the cache miss ratio of loop-nested array references.
[8] Aspects of Cache Memory and Instruction Buffer Performance.
[9] The cache performance and optimizations of blocked algorithms.
[10] A quantitative analysis of loop nest locality.
[11] An overview of the CHIP3S performance toolset for parallel systems.
[12] Cache interference phenomena.
[13] A data locality optimizing algorithm.
Keywords: performance evaluation; analytical modeling; data locality; set-associative; cache modeling; cache interference
323323 | Leading-One Prediction with Concurrent Position Correction. | Abstract: This paper describes the design of a leading-one prediction (LOP) logic for floating-point addition with an exact determination of the shift amount for normalization of the adder result. Leading-one prediction is a technique to calculate the number of leading zeros of the result in parallel with the addition. However, the prediction might be in error by one bit, and previous schemes to correct this error result in a delay increase. The design presented here incorporates a concurrent position correction logic, operating in parallel with the LOP, to detect the presence of that error and produce the correct shift amount. We describe the error detection as part of the overall LOP, perform estimates of its delay and complexity, and compare with previous schemes. | 1 INTRODUCTION
Leading-one prediction is used in floating-point adders to eliminate the delay of the
determination of the leading-one position of the adder output from the critical path. This
determination is needed to perform the normalization of the result. Since the latency
of floating-point addition is significant in many applications, this prediction might be of
practical importance.
The direct way to perform the normalization is illustrated in Figure 1a). Once
the result has been computed, the Leading-One Detector (LOD) 1 counts and codes the
number of leading zeros and then, the result is left shifted. However, this procedure
can be too slow, since it is necessary to wait until the result is computed to determine
the shift amount. Alternatively, as shown in Figure 1b), the normalization shift can
be determined in parallel with the significands addition. The Leading-One Predictor
(LOP) 2 anticipates the amount of the shift for normalization from the operands. Once
the result of the addition is obtained, the normalization shift can be performed since
the shift has been already determined. This approach has been used in some recent
floating-point unit designs and commercial processors [3, 7, 8, 9, 11, 18, 19].
As described below, the basic schemes developed for the leading-one predictor give
the position with a possible error of one bit. Because of this, a second step consists of
detecting and correcting this error, but this step increases the overall delay. To avoid this
delay increase, we propose a correction procedure which detects the error in parallel with
1 The LOD is also called LZD (Leading Zero Detector).
2 The LOP is also called LZA (Leading Zero Anticipator).
Figure 1: Magnitude Addition and Normalization for a Floating-point Adder Unit: a) with leading-one detection after the adder; b) with leading-one prediction in parallel with the adder.
the determination of the position, so that the correction can be performed concurrently
with the first stage of the shifter. The evaluation and comparison presented show that
it is plausible that this can be achieved in a specific implementation, both for the single
datapath and the double datapath cases.
1.1 Previous work
Several LOPs have been proposed recently in the literature [4, 17, 19]. We briefly discuss
each of them.
The LOP described in [19] has the general structure shown in Figure 2a). As
described in detail in Section 2, the pre-encoding examines pairs of bits of the operands
and produces a string of zeros and ones, with the leading one in the position corresponding
to the leading one of the addition result. This string is used by the LOD to produce an
encoding of the leading-one position. Because of the characteristics of the pre-encoding
the resulting leading-one position might have an error of one bit. Therefore, it is necessary
to correct the error by an additional one bit left shift, called the compensate shift and
performed after the basic normalization shift. This compensate shift increases the delay
of the floating-point addition. The design in [19] is performed for the case in which
during the alignment step, the operands are compared and swapped so that the result of
the subtraction is always positive. This simplifies the implementation of the adder and
of the LOP. However, it cannot be used in the case of floating-point adders with double
datapath, as explained further in Section 5.
Figure 2: LOP architectures. a) Without concurrent correction. b) With concurrent
correction based on carry checking. c) With concurrent correction based on a parallel
detection tree.
In [4] a LOP with concurrent position correction based on carry checking is de-
scribed. Its general structure is shown in Figure 2b). It has been designed as part of
a multiply-add-fused (MAF) unit [5, 9]. As in the previous scheme, the LOP has the
possibility of a wrong prediction by one position. To perform the correction, the carry
in the adder going into the anticipated leading-one position is checked and the position
is corrected according to the carry value. The correction is done in the last stage of the
normalization shift. Therefore, in principle the correction does not increase the delay.
However, as we show in Section 5, the carry detection is slow so that it introduces an
additional delay in the floating-point addition. A similar scheme is proposed in [17].
1.2 Contribution
The main contribution of this paper is to propose and evaluate a method to perform the
correction to the one position error of the basic LOP during the normalization shift without
producing any delay degradation. This is achieved by detecting the error condition
concurrently with the basic LOP (and therefore, with the significands adder).
We describe the development of the detection and correction scheme in a systematic
way. Since this description has much in common with the description of the basic LOP,
we also include the latter.
The proposed LOP operates for the general case in which the output of the adder
can be positive or negative. A version for the case in which the operands to the adder are
previously compared so that the result of the subtraction is always positive is described
in [1].
Our approach (Figure 2c)) to the basic LOP is similar to that of [19], extended to
the case in which the output of the adder can be positive or negative. It is based on
the location of some bit patterns producing the leading-one and the binary coding of
its position by means of a binary tree. Moreover, we include another pre-encoding and
trees to detect the occurrence of an error in the basic LOP. The output of these trees is
then used to correct the output of the LOD so that the correct shift is performed. Since
the detection and correction can be performed before the last stage of the normalization
shift, the delay of the addition is not increased.
Since almost all the floating-point processors [6, 10, 13] use the IEEE standard [15],
we consider the case of sign-and-magnitude representation of the operands.
The paper is organized as follows. In Section 2 the structure of the LOP is presented.
After that, the different modules of the LOP are described: the leading-one position
encoding in Section 3 and the concurrent position correction in Section 4. Then, in
Section 5, our design is evaluated and compared with other LOPs. Finally, in Section 6,
the effect on the floating-point addition latency is discussed.
2 GENERAL STRUCTURE
We now give an overview of the structure of the leading-one predictor we propose. Then,
in the following sections we consider individual modules. As stated in the introduction,
the two significands are in sign-and-magnitude and the LOP is only applicable when
the effective operation is a subtraction. As shown in Figure 1b), the LOP predicts the
position of the leading-one in the result, in parallel with the subtraction of significands.
The LOP operates on the significands after alignment. We denote by A = a_0 a_1 ... a_(n-1)
and B = b_0 b_1 ... b_(n-1) the two aligned significands, a_0 and b_0 being the most significant
bits, and the operation to be performed by the magnitude adder is |A - B| 3 .
3 Note that we consider only the positive aligned significands and do not consider the signs of the
floating-point operands
Figure 3: General structure of the proposed LOP.
We develop a LOP for the general case in which either A >= B or A < B. This is in
contrast with the simplified LOP considering only A >= B,
which was proposed in [19] and for which we added the concurrent correction in [1].
This extension is necessary because the LOPs described in [19] and [1] are suitable only
for floating-point adders where the operands are swapped to obtain always a positive
result in the case of an effective subtraction. As discussed further in Section 5, this
is effective only for single-datapath floating-point adders which have a comparator in
parallel with the alignment shift. In contrast, the LOP we propose can be incorporated
also in floating-point adders which swap the operands depending only on the exponent
difference. In these cases, the result of an effective subtraction may be negative when
the exponents are equal.
As shown in Figure 3, the LOP is divided into two main parts: the encoding of
the leading-one position and the correction of this position. Moreover, these parts are
composed of the following components:
Encoding
• A pre-encoding module that provides a string of zeros and ones, with the
most-significant 1 defining the leading-one position. After this leading one, it
is immaterial what the rest of the string is.
• An encoding tree (also called leading-one detector, LOD) to encode the position
of the most-significant 1 into a log2(n)-bit value
to drive the shifter for normalization. In addition,
the bit V indicates when the result is 0.
Correction
• A pre-encoding module providing a string of symbols that is used to determine
whether a correction is needed. As indicated in Figure 3, there is significant
commonality between both pre-encoding modules.
• A detection tree to determine whether the position indicated by the encoding
tree has to be corrected (incremented by one).
• A correction module that performs the correction, if necessary, in parallel with
the operation of the barrel shifter.
In Sections 3 and 4 we describe these two parts and the corresponding modules and trees.
3 POSITION ENCODING
3.1 Pre-encoding module
As indicated in Figure 3, this module produces a string of zeros and ones. As a first
step in the production of this string we perform a radix-2 signed-digit subtraction of the
significands: we obtain W = A - B digit by digit.
This operation is done on each bit slice (without carry propagation), that is,
w_i = a_i - b_i with w_i in {-1, 0, 1}. For clarity, the -1 will be represented as 1̄.
We now consider the string W to determine the position of the leading one. For-
mally, the determination of this position requires a conversion of the signed-digit representation
into a conventional binary representation. However, as we see now this conversion
is not actually required.
To simplify the discussion we consider separately the cases W > 0, W < 0, and W = 0.
The notation used throughout the paper is the following: x denotes an arbitrary
substring, and 0^k, 1^k, and 1̄^k denote strings of k 0's, 1's, and 1̄'s, respectively, with k >= 0.
The alternative situations for the location of the leading one are described in the diagram
of Figure 4. The last level of the diagram indicates all the possible combinations of W
in radix-2 signed-digit and non-redundant binary representations, together with the
location of the leading one. Since W > 0, the first digit w_i different from 0 has to be
equal to 1. Therefore, the top of the diagram shows the w string 0^k 1 (x). For the substring
(x) two situations can be identified as follows:
S1 The digit of w following the first 1 is either 0 or 1. In this case, the leading one is
located in position k + 1 or k + 2. This is shown by considering two cases (see Figure 4):
1. The digit following the first 1 is 1. That is, W = 0^k 1 1 (x).
Clearly, the conversion of W to conventional representation has a 1 in position
k + 1 since any borrow produced by a negative x is absorbed by the 1 at position
k + 2. In this situation a leading one in position i is identified by the substring
w_i w_{i+1} = (1, 1).
2. The digit following the first 1 is 0. That is, W = 0^k 1 0 (x).
Now, two possibilities exist with respect to x, namely,
- (x) is positive or zero. The position of the leading one is k + 1, since there
is no borrow from (x).
- (x) is negative. The position of the leading one is k + 2 because of the
borrow produced by (x). That is, W = 0^k 1 0^q 1̄ (x').
The problem with this situation is that it is not possible to detect it by
inspecting a few digits of w since it depends on the number of zeros before
the 1̄. Consequently, we assume that the position is k + 1 and correct later.
The leading 1 in position i is identified by the substring w_i w_{i+1} = (1, 0).
[Figure 4 residue: the decision diagram for W > 0 is rooted at the string 0^k 1 (x); one branch covers the substring 0 (x), with (x) positive or zero (assume position k + 1) or negative (assume position k + 1, correction needed), and the other branch covers the run of 1̄s with the analogous positive/negative subcases; the leaves give the identifying patterns (1, 1), (1, 0), (1̄, 1) and (1̄, 0).]
Figure 4: Bit patterns for W > 0
Table 1: Leading-one position for W > 0

Bit Pattern                      Leading-one Position              Substring
0^k 1 1 (x)                      First 1 of the string (k+1)       (1, 1)
0^k 1 0 {positive or zero}       First 1 of the string (k+1)       (1, 0)
0^k 1 0^q 1̄ (x)                  First 0 after the 1 *             (1, 0)
0^k 1 1̄^j {positive or zero}     Last 1̄ of the string (k+j+1)      (1̄, 0), (1̄, 1)
0^k 1 1̄^j 0^q 1̄ (x)              First 0 after the last 1̄ *        (1̄, 0), (1̄, 1)
* correction needed
In summary, for S1 the leading one in position i is identified by the substrings
w_i w_{i+1} = (1, 1) and (1, 0).
S2 The first 1 of W is followed by a string of 1̄s. That is, W = 0^k 1 1̄^j (x), j >= 1.
If the string of 1̄s is of length j, the position of the leading 1 is k + j + 1 or k + j + 2,
depending on a similar situation as in S1. Consequently, using the same approach,
we assume that the position is k + j + 1 and correct later. A leading one in position
i is identified by the substrings w_i w_{i+1} = (1̄, 0) and (1̄, 1).
This discussion is summarized in Table 1. By combining the S1 and S2 cases, the
leading-one position is determined by the substrings
w_i w_{i+1} in {(1, 1), (1, 0), (1̄, 1), (1̄, 0)}.
Case W < 0
The same analysis can be extended to determine the leading-one position when W < 0.
This is achieved by exchanging the roles of 1 and 1̄ in the W > 0 case. Therefore, the
leading-one position is identified by the following substrings,
w_i w_{i+1} in {(1̄, 1̄), (1̄, 0), (1, 1̄), (1, 0)}.
Case W = 0
In this case there is no leading one. The encoding tree will provide a signal indicating
this situation. Therefore, it is immaterial what the encoding is.
3.1.1 String to identify the leading-one position
We now produce the string of zeros and ones which has as its first 1 the leading one. We
call it the F string. The corresponding bit of F is obtained by combining the
substrings described before. To simplify the description of F, the values of digit w_i equal
to 1, 0 or 1̄ are called p_i, z_i, and n_i, respectively. That is, for each bit position of the
input operands, the following functions are defined:
p_i : (w_i = 1),  z_i : (w_i = 0),  n_i : (w_i = 1̄).
With this notation the substrings are,
- For W > 0,
f_i(pos) = (p_i + n_i)(p_{i+1} + z_{i+1})    (2)
- For W < 0,
f_i(neg) = (p_i + n_i)(n_{i+1} + z_{i+1})    (3)
Figure 5a) and 5b) show examples of the computation of F (pos) and F (neg) according
to equations (2) and (3).
It would be possible now to use both strings F (pos) and F (neg) to encode the
position of the leading one in separate LODs and to choose between them when the sign
is known. However, it is more efficient to combine both strings and have a single LOD 4 .
The simplest way to combine them would be to OR the two expressions. However, this
produces an incorrect result because, for instance, a 1 of f i (neg) can signal a leading-one
position that is not correct for a positive W . An example of this is given in Figure 5c).
Because of the above-mentioned problem we use also w_{i-1} in the substrings that
are ORed to produce the combined F. From Figure 4 we see that the substring w_i w_{i+1} with w_i = 1
identifies the leading one only when w_{i-1} = 0. Similarly, the substring w_i w_{i+1} with w_i = 1̄
identifies the position only when w_{i-1} is nonzero. Consequently, the extended expression is
f_i(pos) = (z_{i-1} p_i + z̄_{i-1} n_i)(p_{i+1} + z_{i+1}),
where z̄ denotes the complement of z.
4 In contrast, as will be discussed in Section 4, we will use two separate strings for the detection of
the pattern for correction.
[Figure 5 residue: worked examples computing, from A, B and W, a) the string F(pos) of equation (2), b) the string F(neg) of equation (3), c) a case in which simply ORing the two strings marks a wrong position, and d) the combined string F.]
Figure 5: Computation of the intermediate encoding
Similarly, for the negative string
f_i(neg) = (z_{i-1} n_i + z̄_{i-1} p_i)(n_{i+1} + z_{i+1}).
Combining both equations we obtain
f_i = z_{i-1} (p_i (p_{i+1} + z_{i+1}) + n_i (n_{i+1} + z_{i+1})) + z̄_{i-1} (n_i (p_{i+1} + z_{i+1}) + p_i (n_{i+1} + z_{i+1})).
This can be transformed to
f_i = z_{i-1} (p_i n̄_{i+1} + n_i p̄_{i+1}) + z̄_{i-1} (p_i p̄_{i+1} + n_i n̄_{i+1})    (4)
An example of the calculation of string F is given in Figure 5d).
Note that for the case W = 0 the string F contains no 1s. We postpone the description of
the implementation of this module until we have discussed also the pre-encoding for the
concurrent correction, since these modules share components.
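The pre-encoding is easy to model in software. The following Python sketch (ours, not the paper's circuit) computes the combined string F according to equation (4) as reconstructed above; treating the nonexistent digit w_{i-1} at the first position as 0, and digits beyond the string as 0, are our assumptions:

```python
# Combined pre-encoding F per equation (4); digits of W are +1 (p), 0 (z), -1 (n).
def f_string(w):
    f = []
    for i in range(len(w)):
        p_i, n_i = w[i] == 1, w[i] == -1
        nxt = w[i + 1] if i + 1 < len(w) else 0   # assumed 0 past the end
        z_prev = (w[i - 1] if i > 0 else 0) == 0  # assumed 0 before the start
        f_i = (z_prev and (p_i and nxt != -1 or n_i and nxt != 1)) or \
              (not z_prev and (p_i and nxt != 1 or n_i and nxt != -1))
        f.append(int(f_i))
    return f

# W = 0 1 0 1bar: F marks position 2, while the true leading one of
# |A - B| = 0011 is at position 3; the one-bit error is fixed later.
print(f_string([0, 1, 0, -1]))   # [0, 1, 0, 1]
```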
Figure 6: Design of an 8-bit LOD. a) Tree implementation. b) Logic structure of the
4-bit LOD block
3.2 Encoding tree
Once the string F has been obtained, the position of the leading one of F has to be
encoded by means of a LOD tree [14, 19]. Figure 6a) shows the structure of an 8-bit
LOD, following the scheme and notation described in [14]. Bit V of each LOD block
indicates if there is some 1 in the group of bits under consideration in such block, and
P encodes the relative position of the 1. As an example, Figure 6b) shows the logic
structure of a 4-bit LOD block (block LOD-4 in the tree). Note that the logic structure
of the LOD-8 block will be similar, with the multiplexer having 2-bit inputs and a 3-bit
output. The relative position of the 1 inside each group is obtained by concatenating a leading
0 with P0, or a leading 1 with P1, depending on whether the 1 is in the block of V0 or V1, respectively. The
final P encodes the position of the leading one.
In case W = 0 (the string F is all zeros) we obtain the final V = 0. This indicates that a shift of n
bits should be performed.
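The tree can be modelled functionally as follows; this is our sketch of the (V, P) merging rule, not the gate-level design of Figure 6 (function names are ours, and the input length is assumed to be a power of two):

```python
# Behavioural LOD: each node merges two (V, P) pairs, prepending 0 to the
# position if the leading 1 is in the left half and 1 if it is in the right.
def lod(bits):
    if len(bits) == 1:
        return bits[0] == 1, []           # V, empty relative position
    half = len(bits) // 2
    v0, p0 = lod(bits[:half])
    v1, p1 = lod(bits[half:])
    if v0:
        return True, [0] + p0
    return v1, [1] + p1

v, pos = lod([0, 0, 0, 1, 0, 1, 0, 0])    # 8-bit example
print(v, pos)                             # True [0, 1, 1] -> position 3
```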
4 CONCURRENT POSITION CORRECTION
As explained in Section 3, the position of the leading one predicted from the input
operands has one bit error for the following patterns of W :
1. W > 0 with W = 0^k 1 1̄^j 0^q 1̄ (x), j >= 0 (the starred cases of Table 1);
[Figure 7 residue: the pre-encoding logic derives from the operands the strings G_p (positive encoding) and G_n (negative encoding) for concurrent correction; the positive and negative detection trees process them in parallel and signal whether the corresponding correction pattern is present.]
Figure 7: Detection of a correction pattern
2. W < 0 with, symmetrically, W = 0^k 1̄ 1^j 0^q 1 (x), j >= 0.
In these cases the position has to be corrected by adding 1 to the encoding calculated
in the tree. Therefore, the concurrent position correction has two steps: (1) detection
of when it is necessary to correct and (2) correction of the position encoding. The first
step is carried out in parallel with the leading-one encoding and the second one with the
normalization shifting.
4.1 Detection
Figure 7 shows the general scheme for the detection of a correction pattern. As explained
before, the detection is performed by two modules: a pre-encoding module and a detection
tree. In the pre-encoding logic, two different strings are obtained: G_p is used to detect
the presence of a positive correction pattern (case W > 0) and G_n to detect a negative
correction pattern (case W < 0). As done for the position encoding of the previous
section, it is possible to combine both strings and have one tree to detect both types of
patterns. However, we have found that this complicates substantially the tree, so that
we have opted for using two different trees: G_p is processed by the positive tree and G_n
by the negative tree. We now describe these modules.
Table 2: Relation between W and a) G_p and b) G_n. In both subtables, any substring of W
not matching one of the listed patterns maps to z.
4.1.1 Pre-encoding module
For this pre-encoding we use the W string obtained before. Two new encodings are
constructed to carry out the detection, G p to detect a positive correction pattern, and
G n to detect a negative one. In both cases, it is necessary to distinguish between the
digit values 1 and 1̄. Therefore, digits in the strings G_p and G_n can take values {-1, 0, 1}.
To simplify the notation, we use n, z and p for - 1, 0 and 1, respectively. Specifically, for
the positive case, to detect the two patterns of W we construct the string G p and detect
the pattern z^k p z^q n (x).
Similarly, for G_n we detect the pattern z^k n z^q p (x).
Let us consider G_p. Since what we need to detect is the pattern z^k p z^q n (x),
consisting of the leading one followed by zeros and terminating in a 1̄, we do as follows:
- use the F(pos) string described by expression (2) of Section 3. This will give us the
leading one followed by zeros.
- set the digit n for the combinations of w that mark the terminating 1̄. This will give us the position of
the 1̄.
The resulting relation between substrings of W and digits of G p is shown in Table 2a).
Note that for certain substrings of W both the digits p and n of G_p are
set. According to the previous discussion, these cases have to be interpreted as n. This
interpretation is performed by the positive detection tree.
Figure 8 indicates for each pattern of W the corresponding string G_p. As can be
seen, G_p has the pattern z^k p z^q n only for the cases in which correction is needed.
[Figure 8 residue: six patterns a)-f) of W for the W > 0 case with the corresponding strings G_p; only the cases marked "correction needed" contain the pattern z^k p z^q n.]
Figure 8: Patterns in the string G_p for the W > 0 case.
In a similar way, the pre-encoding G n is obtained. Table 2b) shows the relation
between W and the digits of G n .
4.1.2 Implementation
The pre-encoding module implements the expressions for F, G_p, and G_n: the string F
according to equation (4), and the digits of G_p and G_n according to Table 2.
The implementation is shown in Figure 9.
4.1.3 Detection Tree
To detect if one of the two patterns for correction is present a binary tree can be used,
the input to the tree being the intermediate encoding G. However, if a single tree were
used to detect the positive pattern (z^k p z^q n) and the negative one (z^k n z^q p), the number of
values of each node of the tree would be large, resulting in a complex and slow hardware
implementation. Therefore, we propose to use two different trees, one to detect the
positive pattern (positive tree) and the other to detect the negative pattern (negative tree).
As shown in Figure 7, these two trees operate in parallel, but if one of the patterns is
present, only the corresponding tree will detect it.
Positive Tree
The positive tree receives as input the string G_p and has to detect whether the pattern z^k p z^q n (x)
is present. A node of the tree has five possible values, Z, P, N, Y, and U, representing
Figure 9: Implementation of the pre-encoding logic
the following substrings:
Z = z^k (all zeros), P = z^k p z^q, N = z^k n (x), Y = z^k p z^q n (x), U = any other string,
where Y indicates that the pattern has been detected and U indicates a string incompatible
with the pattern.
Each node of the tree receives as input the output from two nodes of the preceding
level and produces the combined value. Figure 10a) illustrates how the nodes of different
level are combined and Table 3a) shows the function table of a node of the tree. The left
input of the node is represented in the first column of the table and the right input in
the first row. Output Y is the result of the combination of a left P and a right N value.
Once the Y value has been set, the combination with any other right input results in
Table 3: Node functions a) for the positive tree and b) for the negative tree

a) left\right   Z  P  N  Y  U
   Z            Z  P  N  Y  U
   P            P  U  Y  U  U
   N            N  N  N  N  N
   Y            Y  Y  Y  Y  Y
   U            U  U  U  U  U

b) left\right   Z  P  N  Y  U
   Z            Z  P  N  Y  U
   P            P  P  P  P  P
   N            N  Y  U  U  U
   Y            Y  Y  Y  Y  Y
   U            U  U  U  U  U
Y , since once the string has been found in the most-significant digits of the string, the
least-significant digits have no effect.
Figure 10b) shows an example of the detection of the pattern. In this case, the
pattern z^5 p z^8 n (x) is present in the string and the result is that the position has to be
corrected (value Y).
Note that, if the first digit different from z in G p is n, that is, we are examining a
negative W string with the positive tree, then the value obtained as output in the last
level of the tree will be N .
For a simple implementation we encode the five values with four variables and assign
code 0000 to value U. With this encoding, the logic equations (7) of a node follow from
Table 3a), with subscripts l and r used for the left input and the
right input, respectively.
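The node function is simple to model; this Python sketch (ours) folds a G_p string exactly as the positive tree does, assuming a power-of-two input length:

```python
# One node of the positive detection tree (Table 3a): combine the values
# computed for the left and right halves of the string.
def combine_pos(left, right):
    if left in ('N', 'Y', 'U'):
        return left                       # absorbing values
    if left == 'Z':
        return right                      # leading zeros: right half decides
    # left == 'P': the pattern is completed only by an N on the right
    return {'Z': 'P', 'N': 'Y'}.get(right, 'U')

def detect_pos(gp):                       # gp: string over 'z', 'p', 'n'
    vals = [{'z': 'Z', 'p': 'P', 'n': 'N'}[d] for d in gp]
    while len(vals) > 1:                  # fold pairwise, level by level
        vals = [combine_pos(vals[i], vals[i + 1]) for i in range(0, len(vals), 2)]
    return vals[0]

print(detect_pos('zzpzzzzn'))             # 'Y': correction needed
```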
Negative Tree
The negative tree is obtained by exchanging the role of P and N in the positive tree. It
receives as input the G n string. The node function is shown in Table 3b).
Similarly to the positive detection tree, if a positive W string is processed, the final
value obtained is P .
[Figure 10 residue: a) a node at level i combines the outputs of two nodes at level i-1, the string G_p being processed from its most-significant to its least-significant digits; b) example in which the string z^5 p z^8 n (x) is folded level by level up to the final value Y (string detected).]
Figure 10: Binary tree to detect the correction pattern
Implementation
The hardware implementation of the nodes in the positive tree (equations (7)) and the
negative tree is shown in Figure 11.
4.2 Correction of the normalization shift
The last step in the LOP we propose is the correction of the leading-one position. The
correction is done by incrementing by one the shift amount. As done in [19], to reduce
the delay of the shifter it is convenient to decode the shift amount in parallel with the
adder (if there is sufficient time). Moreover, because of implementation constraints, the
shifter has more than one stage. As shown in Figure 12, the stages are organized from
the coarsest to the finest. This last one performs a shift by one of several contiguous
positions, say from 0 to k_f binary positions. As indicated in the figure, we perform
the correction at this last stage, so that the shifter has to be modified to shift from 0
to k_f + 1 positions. This should have a negligible effect on the delay of the last stage.
Notice that the selection between correction and no-correction can be made in parallel
with the previous stages of the shifter.
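In functional terms the correction is just a conditional increment applied at the fine stage; a toy model (ours, with made-up bit-list conventions):

```python
# The coarse stages use the predicted amount; only the final fine stage
# adds the +1 when the detection tree reports that correction is needed.
def normalize(value_bits, predicted_shift, correct):
    shift = predicted_shift + (1 if correct else 0)   # fine-stage increment
    return value_bits[shift:] + [0] * shift           # left shift

print(normalize([0, 0, 0, 1, 0, 1], 2, True))          # [1, 0, 1, 0, 0, 0]
```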
Figure 11: Hardware implementation of a tree node
5 EVALUATION AND COMPARISON
In this section the LOP architecture we propose is evaluated in terms of delay of the critical
path and added hardware complexity. Then, we compare it with implementations of
the two schemes discussed in Section 1.1, namely the LOP without concurrent correction
and the LOP with concurrent correction based on carry checking.
5.1 Evaluation
To evaluate the LOP, we carry out a timing analysis of the architecture, where the critical
path of the addition and normalization shift for 54 bits is calculated. Moreover, we
estimate the additional hardware needed for the concurrent correction.
Timing Analysis
We estimate the delay of the different blocks of the architecture using as unit the delay
of a simple gate (2-input NAND). These delays are summarized in Table 4a). Some of the
estimations are obtained from [19], where a thorough comparison is performed between
the LOPs described in [19] and [4].
The delay of the pre-encoding logic F, G_p and G_n has been calculated according to
the hardware implementation proposed in Figure 9. To compute the delay of the LOD
and of the detection tree, we have considered trees with six levels. The first level of the
[Figure 12 residue: the leading-one position (ms and ls bits) is decoded into the partial shift amounts; the unnormalized |A - B| passes through the first, second, and last stages of the shifter, and the correction (from the detection tree) is applied at the last stage to produce the normalized |A - B|.]
Figure 12: Concurrent correction of the leading-one position
LOD is composed only of NOR gates and the delay of the remaining levels is determined
by a 2-input multiplexer. However, for levels 3, 4, 5 and 6 the control inputs to the
multiplexers are known in advance, before the inputs are obtained from the previous
level [14]. Therefore, the delay of those levels can be estimated at 2 t_nand each. This
results in a total delay for the LOD of 12 t_nand.
The delay of each level of the detection tree is determined by a two-level NAND-
NAND network (see Figure 11). Note that the Z output has a load of 7 gates; however,
the load of the slowest path in the node, the Y output, is only 1 gate. Therefore, the
load of Z does not affect the global delay of the node.
We consider that the normalization shift is carried out in two stages, coarse shift and
fine shift, each operating on three bits of the shift amount. Consequently, each shifter is
implemented by 8-input multiplexers. Moreover, buffers are needed at the control input
of the shifters, due to the heavy load of those lines.
Figure 13a) shows the general structure and the delay of each of the parallel paths
in the adder, LOP, and shifter. Note that the slowest path goes through the adder
Table 4: a) Delay of the basic components of the LOP and b) Gate count of the LOP

a) ELEMENT                  Delay (t_nand)
   buffer                        3
   2-input MUX                   3
   Pre-encoding F                6
   Pre-encoding G_p, G_n         4
   Detection tree               12
   Shift correction              3
   Adder *                      --
   Coarse shifter *              5
   Fine shifter *                5
   * values obtained from [19]

b) ELEMENT                   Gates
   Pre-encoding F              650
   Pre-encoding G_p and G_n    320
   Detection tree             1000
   Shift decoding and
   shift correction             --
(34 t nand ), whereas the path through the pre-encoding F and LOD tree has almost the
same delay, 33 t_nand, and the path through the detection tree has a lower delay. It has to
be pointed out that the concurrent correction is out of the critical path.
Hardware components
To evaluate the hardware complexity of the concurrent correction we include only the
components of the LOP. The estimation includes only the active components (gates) and
not the interconnections. Table 4b) summarizes the total count of logic gates for 54
bits.
The gate count of the pre-encoding logic F , G p and G n has been obtained from
the implementation shown in Figure 9. The count for the logic F includes all the gates,
except those gates that are exclusive to compute G p and G n . Therefore, for each bit we
consider that 12 gates are devoted to compute F and 6 to compute G p and G n .
As said before, the LOD and the detection tree are composed of ceil(log2 n) levels,
n being the number of bits of the significands. The LOD has different modules for
each level of the tree. Thus, each module, except in the first level, is composed of an
OR gate and a 2-input multiplexer; however, the number of bits of the multiplexer
inputs depends on the level. For 54 bits, the total number of 2-input multiplexers is 49.
Modules in the first level do not have a multiplexer. The number of gates of each level
[Figure 13 residue: for each scheme, the parallel paths of the adder, the LOP (pre-encoding F, buffers, LOD, shift decoders, and, where present, the detection tree or the carry prefix tree with carry selection and comparator), and the coarse and fine stages of the normalization shifter, together with the delay of each block.]
Figure 13: General structure and critical path delay of a) LOP with concurrent correction
based on detection tree, b) LOP without concurrent correction and c) LOP with
concurrent correction based on carry checking.
of the detection tree has been derived from the implementation in Figure 11. We have
included also the gate count of the shift decoding and the shift correction.
5.2 Comparison
In this section we compare our LOP architecture with concurrent correction with two
other LOP alternatives: a LOP without concurrent correction and a LOP with a correction
scheme based on the utilization of the addition carries. We use the same estimates
of module delays and number of gates for all three schemes. Their main characteristics
are the following:
1. LOP without concurrent correction (Figure 13b)). This is an extension of the
one described in [19] to the case in which the output of the adder can be either
positive or negative. Since the LOD determines the position of the leading-one
within an error of one bit, a compensation shift is included once the normalization
has been performed. We have estimated the delay of the compensate shifter to be
2 t_nand, in addition to the delay of the buffer required for the shift control. The
corresponding delay diagram is shown in Figure 13b).
2. LOP with concurrent correction based on carries (Figure 13c)). As discussed
in [4, 17], the error in the leading-one position can be detected by checking the carry
in the corresponding position. Therefore, for this scheme, the carries from the adder
have to be calculated explicitly and the corresponding carry selected according to
the output of the LOD. To accomplish this selection it is preferable that the LOD
output consist of a string of 1s followed by 0s. Therefore, the LOD is implemented
by a prefix tree and the carry selection is performed by a set of 2-input AND gates,
followed by an OR gate. Because of the characteristics of the LOD output, the
delay of the fine decoder is larger than in the other schemes; however, it is not in
the critical path. Note that the carry selection and shift correction can be done in
parallel with the coarse shifter. Because of this, it is convenient to reduce as much
as possible the delay of the fine shift. To accomplish this, as done in [4], the coarse
shifter is hexadecimal. Figure 13c) shows the corresponding timing diagram.
Table 5 summarizes the critical paths of the three schemes and shows that, with our
estimations, the LOP with concurrent correction presented here results in a reduction of
about 13%.
The estimation of the number of gates in the LOP of the two schemes we are
comparing with is given in Table 6. The total gate count for the three LOPs is summarized
in
Table
7. The LOP with concurrent correction presented here has the larger number of
gates: it almost doubles the gate count of the LOP without concurrent correction and it
is approximately 10% larger than the gate count of the LOP with concurrent correction
Table 5: Comparison of the critical path delay

                        Our LOP     LOP without             LOP with concurrent
                                    concurrent correction   correction based on carries
Critical path delay     34 t_nand       --                      --
Improvement                --           13 %                    13 %
Table 6: Gate count for the a) LOP without concurrent correction and b) LOP with
concurrent correction based on carry checking

a) ELEMENT                 Gates      b) ELEMENT                Gates
   Pre-encoding F            650         Pre-encoding F           650
   Coarse shift decoding      40         Prefix tree             1100
   Fine shift decoding       120         Carry selection          110
   Compensate shift          160         Coarse shift decoding     40
                                         Fine shift decoding      120
                                         Shift correction          --
based on carry checking. However, this gate count is a small part of the number of gates
of the floating-point adder.
5.3 Actual implementations
To put in perspective the comparison estimations we have presented, we now briefly
summarize actual LOP implementations recently described in the literature.
In [19] the implementation of a floating-point adder using LOP without concurrent
correction is presented. It includes the compensate shifter after the normalization. The
floating-point adder was fabricated with 0:5 -m CMOS process technology with triple
metal interconnections. The main difference with respect to LOP without concurrent
correction we have described in section 5.2, comes from the fact that in [19] the result
of the subtraction is always positive so that the pre-encoding logic can be simplified.
The delay of the LOP is 8 ns and the time devoted to the compensate shifter is 1 ns.
Therefore, it can be concluded that by incorporating the concurrent correction based on
a detection tree and, consequently, eliminating the compensate shifter, the delay of the
Table 7: Gate count for concurrent correction

LOP                                    Gates
LOP with detection tree                 2260
LOP without concurrent correction       1200
LOP with carry checking                 2050
LOP could be improved to 7 ns.
In [5, 9] the implementation of the IBM RS-6000 floating-point unit, with 1 μm
CMOS technology using triple-level metal, is described. It incorporates a floating-point
multiply-add-fused unit and uses a LOP with concurrent correction based on carry
checking. The LOD has been designed to accommodate the partial-decode scheme used
in the shifters. The normalization shift is accomplished in two stages: the first stage
produces a hexadecimal shift and the second stage a fine shift of (0,1,2,3).
The LOD calculates only the hexadecimal position of the leading-one, as a string of 1s
followed by 0s, and then it is necessary to decode the binary position of the leading-one.
This is the configuration we have used for this kind of LOP in the comparison.
The LOP design is detailed in [4]. The leading-one position anticipation is carried out
digitwise. That is, the input data is processed in blocks of 4 bits and the LOP provides a
shift signal for each block. Those shift signals constitute the coarse shift. To obtain the
shift signals a prefix tree is used. It receives as input the generate (G, both inputs are
1), propagate (P, only one input is 1) and zero (Z, both inputs are 0) signals, which are
also used for the adder. The output of the tree specifies the leading-one position inside a
group of four bits. These signals are used both for the control of the shifters and for the
selection of the carries.
6 FLOATING-POINT ADDITION LATENCY REDUCTION
We now consider the effect of the proposed concurrent correction on the overall latency
of the floating-point addition. This depends on the delay of the other parts of this adder
and, in a pipelined implementation, on the processor cycle time. We consider separately
the effect on the single-datapath and on the double-datapath implementations.
Latency reduction in single-datapath floating-point adders
Figure 14a) shows the block diagram of a single-datapath floating-point adder using a
LOP without concurrent position correction [10, 13]. The adder has been pipelined into
five stages, as done in [19]. The significand addition is performed in a one's complement
[Figure 14 residue: the five pipeline stages comprise the exponent difference and operand swapping muxes, the alignment right shift, the one's-complement significand adders with the LOP (with or without concurrent detection and correction) in parallel, and the rounding/normalization logic with its 1-bit shift, bit inversion, left shift, mux, exponent increment, compensation shift, and output aligner.]
Figure 14: a) Single-datapath floating-point adder without comparison (we have included
concurrent rounding). b) Reduced-latency single-datapath floating-point adder without
comparison (with concurrent rounding and LOP with concurrent correction)
form [5]. This way, if the result is negative the recomplementation consists in a bit
inversion. Alternatively, a compound adder could be used [2, 16].
The fourth stage performs the rounding and normalization of the adder result (note
that the inverters to recomplement the result are also included in this stage). As shown
in the Figure, these operations can be performed in two parallel paths [19], one for the
inversion and massive left shift and the other for the 1-bit shift and the rounding. This
is possible because
1. The value computed in the significand adder can be negative only when the exponent
difference is zero. In this case, a full normalization left shift may be needed but no
rounding is required. Consequently, the bit inverters can operate in parallel with
the rounding logic.
2. On the other hand, for an exponent difference larger than zero two situations can
occur:
- The two most-significant bits of the adder result are zero. This can occur only
when the exponent difference is one (or zero, but this case has been analyzed
separately). A normalization by a left shift of more than one bit is needed and,
after that, the result is exact and no rounding is required. The mux selects
the output of the shift module as the addition result.
- At least one of the two most-significant bits of the adder result is one. This
situation can occur for any exponent difference. Note that a normalized adder
result is included in this situation. The maximum normalization shift is one
and the normalized result will have to be rounded. This limited normalization
shift is carried out in the 1-bit shift module and then the mux selects the
output of the round module as the result.
In the case of an effective addition, rounding is always required, but there is never
a normalization left shift. However, a significand overflow can occur, needing a one-bit
right-shift normalization. This right shift is also carried out in the 1-bit shift module
shown in the figure.
The adder with the LOP with concurrent correction is similar except that the
compensation shifter and the exponent incrementer in the fifth stage are eliminated.
This may make it possible to merge the two last stages, as shown in Figure 14b). Note that this
merging might not be possible without the concurrent correction because of the delay of
the compensate shifter and the exponent incrementer.
A similar scheme and latency reduction is obtained for the single datapath floating-point
adder with comparison [6, 19]. In this case, a comparator is included in the second
stage to assure that, in the case of an effective subtraction, the smaller operand is subtracted
from the larger one and, therefore, the result is always positive. The latency
reduction obtained for this kind of adder is analyzed in [1].
[Figure 15 residue: the FAR and CLOSE datapaths share the exponent difference and operand swapping stage; the FAR path has the full alignment right shifter and a compound adder combining addition/subtraction and rounding, while the CLOSE path has a 1-bit alignment shift, a compound adder with bit inverters, the LOP without concurrent correction, the normalization and compensate shifters, and the exponent increment; both feed the final mux and output aligner.]
Figure 15: Latency of the double-datapath floating-point adder
Latency reduction in double-datapath floating-point adders
Figure 15 shows a double-datapath architecture [11, 12]. In it, the FAR datapath
computes effective subtractions with exponent differences larger than one as well as all
effective additions. On the other hand, effective subtractions with an exponent difference
equal or smaller than one are computed in the CLOSE datapath. The pipelining of the
adder into four stages has been derived by transforming the pipelined single datapath
floating-point adder without comparison of Figure 14 and considering the components
delays specified in [19] and in Section 5.
In both paths the addition/subtraction is combined with the rounding in the compound
adder [2, 11, 16]. This adder computes A + B and A + B + 1 and selects one of
these to perform the rounding. Moreover, in the CLOSE datapath it is used to perform
the two's complement of the result just by a bit-wise inversion. In [11, 16] an array of half
adders is included in the compound adder of the FAR datapath to compute A + B + 2,
required for the rounding to infinity when there is a significand overflow. However, [2]
describes a modification that replaces this array by a right shift of the operands by one
position for an effective addition. This way, the result can be either normalized or with
one leading zero, which is the same situation as for subtraction. Consequently, the rounding
is performed in the same way for both addition and subtraction. The shift of the
operands is implemented by the right shifter and by the parallel 1-bit shifter, so that this
modification reduces the delay, since it eliminates the array of half adders.
In the FAR datapath to assure a positive result of the adder, the smaller operand is
complemented for effective subtraction. In this way, the result conversion is eliminated.
Moreover, in that path no full-length normalization is required, since the maximum left
shift of the result is one bit.
With respect to the CLOSE datapath, in the case of equal exponents, the result
can be negative. But, as there is no alignment right shift, the result is exact and no
rounding is necessary. In the case of an exponent difference equal to one, the maximum
alignment shift is one bit, so no complete alignment shifter is required. On the other hand,
both cases may require a full-length normalization shift. Therefore, the LOP is needed
only in the CLOSE datapath. Note that, as in this datapath the effective operation is
always subtraction, bit inverters for the smaller operand have been included inside the
compound adder. Since in the case of an effective subtraction between operands with
equal exponents the result can be negative, the negative result has to converted to a sign-
and-magnitude representation. The converted result is formed by bit-inverting output
A +B of the compound adder. That is,
In this way, the result conversion is reduced to a bitwise inversion at the output of the
compound adder.
Since both the 1-bit alignment shift and the 1-bit normalization shift have a small
delay, this implementation has a shorter critical path than the single-datapath case of
Figure 14.
To determine the influence of the LOP with concurrent correction on the latency,
we analyze the total delay of the two paths using LOPs with and without concurrent
correction. We see that, as expected, the elimination of the compensation shifter in the
CLOSE datapath reduces the total delay of this path. Then, as described below, there
is some flexibility to pipeline the two paths into several stages, so that the latency is
reduced.
Considering the blocks in the CLOSE and in the FAR paths, we see that both paths
have almost the same modules:
[Figure 16 residue: in the reduced-latency organization the CLOSE path uses the LOP with concurrent detection and correction in parallel with a two-level compound adder (with bit inverters), while the FAR path keeps the alignment right shifter and a four-level compound adder; both paths share the final mux, 1-bit shift, exponent increment, and output aligner.]
Figure 16: Double-datapath floating-point adder with reduced latency
CLOSE. Exponent difference and operand swapping, 1-bit right shifter (alignment),
compound adder, bit inverter, normalization left shifter, compensate shifter, mux
and output aligner.
FAR. Exponent difference and operand swapping, alignment right shifter, bit inverter,
compound adder, 1-bit left shifter (normalization), mux and output aligner.
Then, the only difference between the two datapaths is the Compensate shift in the
CLOSE datapath. Consequently, if concurrent correction is used and the Compensate
shift is eliminated, both paths have the same delay, allowing a pipelining into four stages
with a smaller stage delay. It might be even possible to obtain a three stage pipeline, as
shown in Figure 16.
7 CONCLUSIONS
We have presented a Leading-One Prediction (LOP) algorithm and its implementation
to obtain an exact prediction of the normalization shift in floating-point adders. The
prediction permits a reduction of the delay of the adder since, as the LOP operates in parallel
with the adder, the normalization shift is known before the result of the significand
addition is available. The LOP algorithm presented here is general, since it can operate with adders
in which the result can be positive or negative.
The predicted leading one position can have an error of one bit. Our approach
includes the logic necessary to concurrently detect when the prediction will be wrong
and to correct the normalization shift. This permits the elimination of the compensation
shifter required in adders with a LOP that does not include concurrent shift correction.
Although the concurrent correction increases the number of gates required for the
LOP, this increase should not be significant since the LOP is only a small portion of the
overall floating-point adder.
The detection and correction logic operates in parallel with the LOP and the significand
adder, and it does not introduce any additional delay to the adder. This improves
the performance with respect to LOPs with concurrent correction based on the checking
of the carries of the significand adder, where the logic necessary to carry out the checking
introduces an additional delay. We have estimated that the delay of the significand
addition and normalization shift is reduced by approximately 13% using our LOP algorithm
with respect to both LOP without concurrent correction and LOP with concurrent
correction based on carry checking.
This improvement can be used to reduce the latency of a pipelined floating-point
adder. We have shown that the latency of a single data-path adder can be reduced from
five to four cycles while maintaining the same critical path delay. Similarly, the latency
of a double datapath floating-point adder can be reduced from four to three cycles.
--R
Rounding in Floating-Point Addition using a Compound Adder
UltraSparc: The Next Generation Superscalar 64-bit Sparc
Design of the IBM RISC System/6000 Floating-Point Execution Unit
The SNAP Project: Design of Floating-Point Arithmetic Units
A variable Latency Pipelined Floating-point Adder
An Algorithmic and Novel Design of a Leading Zero Detector Cir- cuit: Comparison with Logic Synthesis
Computer Architecture.
An Improved Algorithm for High-Speed Floating-Point Addition
Design and Implementation of the SNAP Floating Point Adder.
--TR
--CTR
R. V. K. Pillai , D. Al-Khalili , A. J. Al-Khalili , S. Y. A. Shah, A Low Power Approach to Floating Point Adder Design for DSP Applications, Journal of VLSI Signal Processing Systems, v.27 n.3, p.195-213, March 1, 2001
Chi Huang , Xinyu Wu , Jinmei Lai , Chengshou Sun , Gang Li, A design of high speed double precision floating point adder using macro modules, Proceedings of the 2005 conference on Asia South Pacific design automation, January 18-21, 2005, Shanghai, China
Khalid H. Abed , Raymond E. Siferd, CMOS VLSI Implementation of a Low-Power Logarithmic Converter, IEEE Transactions on Computers, v.52 n.11, p.1421-1433, November | leading-one prediction;floating-point addition;normalization |
323337 | Parallel Complexity of Numerically Accurate Linear System Solvers. | We prove a number of negative results about practical (i.e., work efficient and numerically accurate) algorithms for computing the main matrix factorizations. In particular, we prove that the popular Householder and Givens methods for computing the QR decomposition are P-complete, and hence presumably inherently sequential, under both real and floating point number models. We also prove that Gaussian elimination (GE) with a weak form of pivoting, which aims only at making the resulting algorithm nondegenerate, is likely to be inherently sequential as well. Finally, we prove that GE with partial pivoting is P-complete over GF(2) or when restricted to symmetric positive definite matrices, for which it is known that even standard GE (no pivoting) does not fail. Altogether, the results of this paper give further formal support to the widespread belief that there is a tradeoff between parallelism and accuracy in numerical algorithms. | Introduction
Parallel Complexity of Numerically Accurate Linear System Solvers
Mauro Leoncini, Giovanni Manzini, Luciano Margara
August 8, 1997
This work merges preliminary results presented at ESA '96 and SPAA '97.
Dipartimento di Informatica, Università di Pisa, Corso Italia 40, 56125 Pisa, Italy, and IMC-CNR,
via S. Maria 46, 56126 Pisa, Italy. Email: leoncini@di.unipi.it. Supported by Murst 40% funds.
Dipartimento di Scienze e Tecnologie Avanzate, Università di Torino, Via Cavour 84, 15100 Alessandria, Italy. Email: manzini@unial.it.
Dipartimento di Scienze dell'Informazione, Università di Bologna, Piazza Porta S. Donato 5, 40127 Bologna, Italy. Email: margara@cs.unibo.it.
Abstract. We prove a number of negative results about practical (i.e., work efficient and
numerically accurate) algorithms for computing the main matrix factorizations. In
particular, we prove that the popular Householder and Givens' methods for computing
the QR decomposition are P-complete, and hence presumably inherently
sequential, under both real and floating point number models. We also prove that
Gaussian Elimination (GE) with a weak form of pivoting, which only aims at making
the resulting algorithm nondegenerate (but possibly unstable), is likely to be
inherently sequential as well. Finally, we prove that GE with partial pivoting is
P-complete when restricted to Symmetric Positive Definite matrices, for which it
is known that even plain GE does not fail. Altogether, the results of this paper
give further formal support to the widespread belief that there is a tradeoff between
parallelism and accuracy in numerical algorithms.

Matrix factorization algorithms form the backbone of state-of-the-art numerical libraries
and packages, such as LAPACK and MATLAB [2, 14]. Indeed, factoring a matrix is
almost always the first step of many scientific computations, and usually the one which
places the heaviest demand in terms of computing resources. In view of their importance,
some authors have investigated the parallel complexity of the most popular matrix factorizations,
namely the LU (PLU) and QR decompositions (see Appendix A for definitions
and simple properties). A list of positive known results follows.
- LU decomposition is in arithmetic NC, whenever it exists, i.e., provided that the
leading principal minors of the input matrix are nonsingular (in this case we will
say that the matrix is strongly nonsingular) [16, 18].
- QR decomposition is in arithmetic NC for matrices with full column rank, since it
easily reduces to LU decomposition of strongly nonsingular matrices [16].
- PLU decomposition is in arithmetic NC for nonsingular matrices [7]. The algorithm
for finding a permutation matrix P such that PA is strongly nonsingular builds
upon the computation of the Lexicographically First Maximal Independent Subset
(LFMIS) of the rows of a matrix, which is in NC [3] (not to be confused with the
analogous LFMIS problem of graph theory, which is known to be P-complete [10]).
- PLUP' factorization of an arbitrary matrix is in arithmetic NC [7]. A permutation
P' such that the leftmost submatrix of AP' has full column rank
can be found by computing LFMIS of sets of (column) vectors.
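For illustration (ours, not part of the paper), strong nonsingularity can be tested directly from its definition via the leading principal minors:

```python
import numpy as np

# All leading principal minors must be nonsingular for the LU
# decomposition to exist; the tolerance is an arbitrary choice of ours.
def strongly_nonsingular(a, tol=1e-12):
    return all(abs(np.linalg.det(a[:k, :k])) > tol
               for k in range(1, a.shape[0] + 1))

print(strongly_nonsingular(np.array([[0., 1.], [1., 0.]])))  # False: no LU
print(strongly_nonsingular(np.array([[2., 1.], [1., 2.]])))  # True
```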
Unfortunately, none of the above algorithms has proved to be numerically accurate
with respect to a realistic model of arithmetic, such as double precision floating point.
Actually, finding a numerically stable NC algorithm to compute the LU (or QR) decomposition
of a matrix can be regarded as one of the major open problems in parallel
computation theory [10].
That a positive solution to the above problem is not just around the corner is confirmed
by the negative results that can be proved about the algorithms in practical use for
computing the LU and QR decompositions. Already in 1989 Vavasis proved that Gaussian
Elimination with Partial pivoting (GEP), which is the standard method for computing
the PLU decomposition, is P-complete over the real or rational numbers [20]. Note that,
strictly speaking, membership in P could not be defined for the real number model. When
dealing with real matrices and real number computations we then assume implicitly that
the class P be defined to include the problems solvable in polynomial time on such models
as the real RAM (see [17]). The result in [20] was proved by showing that a decision
problem defined in terms of GEP's behavior was P-complete. For parallel complexity
theory the P-completeness result implies that GEP is likely to be inherently sequential,
i.e., admitting no NC implementations. One of the authors proved then
that GEP is probably even harder to parallelize, in the sense that no
implementation achieving a polynomial speedup can exist unless all the problems in P admit polynomial speedup [12].
In this paper we prove new negative results about the classical factorization algo-
rithms. We consider the methods of Householder's reflections and of Givens' rotations
to compute the decomposition of a matrix. The main use of the decomposition
is within iterative methods for the computation of the eigenvalues of a matrix and to
compute least squares solutions to overdetermined linear systems [9]. Moreover, both
Householder's and Given's methods (hereafter denoted HQR and GQR, respectively) are
potential competitors of Gaussian Elimination to solve systems of linear equations stably
in parallel. To date, the fastest stable parallel solver is based on GQR and is characterized
by O(n) parallel time on an O(n^2)-processor PRAM [19], while both GEP and HQR run
in O(n log n) time on O(n^2) processors. Also, GQR is especially suitable for solving
large sparse systems, given its ability to annihilate selected entries of the input matrix at
very low cost.
We also consider the application of Gaussian Elimination with Partial pivoting to
special classes of matrices, and a weaker form of pivoting which we will call Minimal
pivoting (GEM). Under minimal pivoting, the pivot chosen to annihilate a given column
pivoting (GEM). Under minimal pivoting, the pivot chosen to annihilate a given column
is the first nonzero on or below the main diagonal. Minimal pivoting is especially suitable
for systolic-like implementations of linear system solvers (see, e.g., [11], although it is not
called this way). Minimal pivoting can be regarded as the minimum modification required
for Gaussian Elimination to be nondegenerate on arbitrary input matrices.
We prove the following results.
1. HQR and GQR are P-complete over the real or floating point numbers. We exhibit
reductions from the NAND Circuit Value Problem (NANDCVP) with fanout 2.
In particular, what we prove to be P-complete is to decide the sign of a given diagonal
element of the upper triangular matrices computed by either HQR or GQR. Our
reductions seem to be more intricate than the simple one in [20]. This is probably
a consequence of the apparently more complex effect of reflections and rotations
with respect to the linear combinations of Gaussian Elimination. We would like to
stress that the P-completeness proofs for the case of floating point arithmetic apply
directly to, and have been checked with, the algorithms available in the state-of-the-art
package Matlab using the IEEE 754 standard for floating point arithmetic. In other
words, the negative results apply to widely "in-use" algorithms.
2. We extend Vavasis' result proving that GEP is P-complete on input strongly non-singular
matrices. This class includes matrices which are important in practical
applications, namely the diagonally dominant and symmetric positive definite ones.
Note that plain Gaussian Elimination (no pivoting) is guaranteed not to fail on
input a strongly nonsingular matrix. However, since it is usually unstable, one still
uses GEP.
3. We prove that GEM is P-complete on general matrices.
4. We show that the known NC algorithm for computing a PLU decomposition of a
nonsingular matrix corresponds to GE with a nonstandard pivoting strategy which
only slightly differs from Minimal Pivoting. Also, we prove that GE with such a
nonstandard strategy is P-complete on input arbitrary matrices, which somehow
accounts for the difficulties of finding an NC algorithm to compute the PLU decomposition
of possibly singular matrices.
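The two pivoting rules at stake can be made concrete with a toy Gaussian Elimination; the following Python sketch (ours) illustrates only the pivot choice, not the reductions:

```python
import numpy as np

# Partial pivoting picks the largest-magnitude entry on or below the
# diagonal; minimal pivoting picks the first nonzero one.
def ge(a, minimal=False):
    u = a.astype(float).copy()
    n = u.shape[0]
    for k in range(n - 1):
        col = u[k:, k]
        if minimal:
            nz = np.flatnonzero(col)
            if nz.size == 0:
                continue                       # degenerate column: skip
            p = k + nz[0]                      # first nonzero: minimal pivoting
        else:
            p = k + np.argmax(np.abs(col))     # partial pivoting
        u[[k, p]] = u[[p, k]]
        for i in range(k + 1, n):
            u[i, k:] -= (u[i, k] / u[k, k]) * u[k, k:]
    return u                                   # the upper triangular factor U

A = np.array([[1., 3.], [2., 1.]])
print(ge(A))                # partial pivoting swaps rows: [[2, 1], [0, 2.5]]
print(ge(A, minimal=True))  # minimal pivoting keeps order: [[1, 3], [0, -5]]
```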
The results of this paper give further evidence of the pervasiveness of a phenomenon
that has been observed also by numerical analysts (from a more practical perspective).
Namely that there is a "tradeoff" between the degree of parallelism, on the one hand, and
nondegeneracy and accuracy properties of numerical algorithms, on the other [5].
The rest of this paper is organized as follows. In Section 2 we introduce a little notation
and give some preliminary definitions. In Section 3 we describe the key ideas that are
common to the P-completeness proofs for all the factorization methods considered in the
paper. In Sections 4 and 5 we address QR decomposition via Householder's reflections and
Givens' rotations, respectively. In Section 6 we prove our negative results about Gaussian
Elimination with Partial and Minimal pivoting. In Section 7 we show a correspondence
between the known PLU decomposition algorithm and Gaussian
conclude with some further considerations and open problems. In Appendix A we discuss
Preliminaries
F
column
minor
principal
row
row echelon
a A
x y x
A a a
A A A
I O I i j
O
x
A n
LU A L; U L
A LU A LU
A
PLU A
PLU
QR A Q; R Q
R A QR QR
the algorithms considered in this paper. In Appendix B we give some basic definitions
about the floating point number representation. Clearly, more material about these well
known algorithms and the computer arithmetic can be found in many excellent textbooks
(in particular, see [9]). Finally, we include one technical proof in Appendix C.
The notations adopted here for matrices and matrix-related concepts are the standard
ones (see [9]). Matrices are denoted by capital letters. The (i, j) entry of a matrix A
is referred to by either a_ij or [A]_ij. Vectors are designated by lower case letters, usually
taken from the end of the alphabet, e.g., x, y, etc. Note that the notation x refers to a
column vector, i.e., an n x 1 matrix, for some n >= 1.
The i-th row (resp., column) of a matrix A is denoted by A_i* (resp., A_*i). A minor of a
matrix A is any square submatrix of A. A principal minor is any minor of A formed
by the same set of row and column indices.
The symbols I and O denote the identity matrix (with [I]_ij = 1 if i = j and 0
otherwise) and the zero matrix (such that [O]_ij = 0), respectively. The zero vector is
denoted using the symbol 0. The transpose of A is the matrix B such that b_ij = a_ji; it is
usually denoted by A^T. A matrix Q is orthogonal when Q^T Q = I. A permutation matrix
is a matrix which is zero everywhere except for just one 1 in each row and column.
Any permutation matrix is orthogonal. The transpose of a (column) vector x is the
row vector x^T, i.e., a matrix of size 1 x n, for some n.
Let A be a square matrix of order n.
- The LU decomposition of A is a pair of matrices (L, U) such that L is lower
triangular with unit diagonal elements, U is upper triangular,
and A = LU. For an arbitrary (even nonsingular) matrix A the LU decomposition
might not be defined. A sufficient condition for its existence (and unicity) is that A
be strongly nonsingular.
- The PLU decomposition of A is a triple of matrices (P, L, U) such that L and U are
as above, P is a permutation matrix, and A = PLU.
The PLU decomposition is always defined but not unique.
- The QR decomposition of A is a pair of matrices (Q, R) such that Q is orthogonal,
R is upper triangular, and A = QR. The QR decomposition always exists.
In all the above cases, if A is m x n, with m < n, and when the factorization exists, we get
a matrix that is properly said to be in row echelon form (rather than upper triangular).
Its leftmost m x m minor is upper triangular while its rightmost m x (n - m) minor is in
general a dense submatrix. However, when no confusion is possible, we will always speak
of the triangular factor of a given factorization.
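For concreteness, the factorizations just defined are available in standard libraries; a small illustration (ours) with SciPy:

```python
import numpy as np
from scipy.linalg import lu, qr   # standard dense factorization routines

A = np.array([[4., 3.], [6., 3.]])

P, L, U = lu(A)                    # PLU: A = P @ L @ U (partial pivoting)
Q, R = qr(A)                       # QR:  A = Q @ R, with Q orthogonal

print(np.allclose(A, P @ L @ U))   # True
print(np.allclose(A, Q @ R))       # True
```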
A detailed description of the algorithms considered in this paper can be found in
Appendix A. The details, however, are not necessary to understand the common structure
of the reductions. Some details will be required in the proofs of Theorems 4.3 and 5.3,
which deal with the floating point version of the algorithms. Except for these, the
following general description is sufficient. It defines a class F of matrix factorization
algorithms that includes, among the others, the classical QR algorithms and Gaussian
Elimination.

3 A framework for reductions to F
Let A be the input matrix. The algorithms in F bring A to upper triangular form by
applying a series of transformations that introduce zeros in the strictly lower triangular
portion of A, from left to right. The notation A^(k) is usually adopted to indicate the
matrix obtained after k - 1 transformations, and its elements are referred to by a_ij^(k);
a_ij^(k) is zero for j < min(i, k). The k-th transformation is applied during stage k of the algorithm.
Every algorithm in F satisfies the following properties.
P1 In other words, if the first
k entries in row i are zero, then the first k stages of F do not modify row i.
P2 If column k has complementary nonzero structure with respect to columns 1, ..., k - 1
(by which we mean that a_ik /= 0 implies a_ij = 0 for j = 1, ..., k - 1), then column k is
not affected by the first k - 1 transformations.
P3 This is a property that we will call proper embedding of a matrix into a larger
matrix. Let A be a matrix and
let R be the triangular factor computed by F on input A. Let B be a matrix having
A as a minor, and suppose that, as a consequence of the repeated applicability of P1
and P2, the initial stages of algorithm F on input B affect only the rows that
identify A. Then the triangular factor of B contains R as a minor. In other words, the factorization of
A, viewed as a part of B, is the same as the factorization of A alone. Perhaps the
simplest example of proper embedding is the block diagonal matrix B = ( A  O ; O  B' ).
P4 Stage k modifies the entry (i, j) only if i, j >= k. In particular, it introduces zeros in
the k-th column of A^(k) without destroying the previously introduced zeros. In
view of this, and to avoid some redundant descriptions, in the rest of this paper we
will use the notation A^(k) to indicate the submatrix of A^(k) with elements from (k, k)
rightward and downward.
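A concrete member of the class F is Householder's QR; the following compact Python sketch is a standard textbook formulation (Appendix A is not reproduced here) and exhibits the stage-by-stage structure described by P1-P4:

```python
import numpy as np

# Stage k reflects column k to annihilate the entries below the diagonal;
# previously introduced zeros are preserved, and only rows/columns >= k change.
def householder_qr(a):
    r = a.astype(float).copy()
    m, n = r.shape
    for k in range(min(m - 1, n)):
        x = r[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        if np.linalg.norm(v) == 0:
            continue                      # column already zero below diagonal
        v /= np.linalg.norm(v)
        r[k:, k:] -= 2.0 * np.outer(v, v @ r[k:, k:])
    return r                              # the triangular factor R (up to signs)

A = np.random.rand(4, 4)
print(np.triu(householder_qr(A)).round(12))
```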
Our P-completeness results are all based on reductions from the NANDCVP, a restricted
version of CVP (Circuit Value Problem) which we now briefly recall:
Input: the description of a k-input boolean circuit C composed entirely of nand
gates, and boolean values x_1, ..., x_k.
Output: the value C(x_1, ..., x_k) computed by C on input x_1, ..., x_k.
NANDCVP is P-complete, as reported in [10]. In order to simplify the proofs we will
further assume, without loss of generality, that each gate of C has fanout at most two.
What we shall prove in this section is the following general result, which applies to any
factorization algorithm F.
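For readers who want to experiment, a NANDCVP instance is trivial to evaluate sequentially; this throwaway evaluator (ours, with our own input conventions) also encodes the xor circuit used later in Figure 3:

```python
# A circuit is a list of nand gates in topological order, each naming its
# two inputs: circuit inputs 'x0', 'x1', ... or earlier gates 'g0', 'g1', ...
def eval_nand_circuit(gates, inputs):
    val = {f'x{i}': bool(b) for i, b in enumerate(inputs)}
    for i, (a, b) in enumerate(gates):
        val[f'g{i}'] = not (val[a] and val[b])   # nand
    return val[f'g{len(gates) - 1}']             # value of the output gate

# xor(x0, x1) built from four fanout-<=2 nand gates.
xor = [('x0', 'x1'), ('x0', 'g0'), ('x1', 'g0'), ('g1', 'g2')]
print([eval_nand_circuit(xor, (a, b)) for a in (0, 1) for b in (0, 1)])
# [False, True, True, False]
```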
Theorem 3.1 There is an encoding scheme of the logical values True and False and a log-space bounded transducer
T with the following properties: given the description of a fanout 2
nand circuit C and boolean inputs x_1, ..., x_k for C, T builds a matrix A of
order m such that, if A = XR is the factorization computed by algorithm
F, with R upper triangular, then [R]_mm is the encoding of C(x_1, ..., x_k).
elementary matrices with well defined properties. We will later show that such matrices
do exist for the algorithms considered in this paper.
Unfortunately, a formal description and the proof of correctness of the transducer will
require quite a large amount of details. In spite of this, the idea behind the construction is
easy, namely to repeatedly apply the proper embedding property. Hence we first describe
the reduction in an informal way and only afterward proceed to a formal derivation.
Moreover, we have actually implemented the transducer as a collection of Matlab m-
files. These and the elementary matrices for the floating point versions of Householder's
and Givens' QR algorithms (the most interesting and technical ones) are electronically
available through the authors.
Let a, b be truth values and let ā and b̄ denote appropriate numerical encodings of the truth values
True and False. We need three kinds of (square) elementary matrices for F.
The first such matrix is the nand matrix N. It has input places loaded with ā and b̄; if we apply
F to compute the factorization of N, we get the encoding of nand(a, b) in the right bottom
entry of the upper triangular factor.
The second elementary matrix is the duplicator D. It has one input place, loaded with ā. If we compute
an incomplete factorization of D, i.e., apply all but the last transformation of F to D, we
get two copies of the encoding of a in the right bottom corner of the incomplete triangular factor.
The third elementary matrix is the copier or wire W. It has one input place, loaded with ā; if we
compute the factorization of W we get ā in the right bottom entry of the triangular factor.
Using these matrices as the building blocks we can construct a matrix that simulates
the circuit C. The structure of A is close to block diagonal, with one block for
each nand gate in the circuit C. Duplicator blocks are used to simulate fanout 2 nand
gates, and wire blocks to route the computed values according to the circuit's structure.
As the factorization of a block diagonal matrix could be performed by independently factoring
the single blocks, a certain degree of overlapping between the blocks is necessary
to pass the computed values around.
To illustrate how the preceding scheme can work in practice, and to see where the
difficulties may appear, consider first the construction of a submatrix which simulates a
fanout 2 nand gate. The basic idea is to append a duplicator to a nand block as pictorially
shown in Figure 1 (left). The N block is the dark gray area, the D block is light gray,
and the white zones contain zeros. The right bottom entry of N coincides with the top
left entry of D. This is an example of proper embedding. Suppose N has order ν. Then
after ν - 1 stages of F the encoding of nand(a, b) is exactly where required, i.e., in the top
left entry of D, from where it can be duplicated. The light gray area in Figure 1 (right),
by the time the algorithm starts working on column ν,
be triangularized occurred in the first row of . We already know that entry ( ) has
been modified properly, but this needs not be the case for the other entries, i.e., the black
colored ones in Figure 1 (right). For the simulation to proceed correctly, it is required
that the black entries store the rest of the first row of
. We cannot rely on their initial contents.
Figure
1: N-D matrix composition: effect of the first 1 stages
As a second example, Figure 2 pictorially describes how a W block can be used to pass a value to a possibly far away place. The W block (the dark gray area in Figure 2) is split across non consecutive rows and columns. More precisely, if W is of order w, the top left dark gray area is intended to represent the principal minor of order w - 1 of W. As before, the white zones contain zeros, while the light gray area is of arbitrary size and stores arbitrary values. This situation again represents a proper embedding, so that the first n - 1 stages of F on input A (with n the order of A) will result in the factorization of the W block. This implies that the (encoding of the) logical value initially in the top left entry has been copied to the far away right bottom entry.
Figure 2: Splitting of a W block.
To get an idea of what a complete matrix might look like, see Figure 3, where the circuit computing the exclusive or of two bits is considered. The corresponding matrix has four N blocks, one D block and four W blocks, denoted by different gray levels. Note, however, that Figure 3 does not yet incorporate the solution to the "black entries" problem mentioned above.
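Before turning to the formal machinery, it may help to see the boolean computation that the matrix of Figure 3 must reproduce. The sketch below evaluates the exclusive or as four nand gates, gate by gate, under a hypothetical arithmetization of nand with True = 1 and False = -1; nand_enc and this encoding are illustrative assumptions, not the paper's actual definitions, which are fixed by the block matrices of Sections 4 through 6.

    % Hypothetical arithmetization of a nand gate (True = 1, False = -1).
    % Illustrative only: the paper's encodings come from its block matrices.
    nand_enc = @(a, b) (1 - a - b - a.*b) / 2;

    T = 1; F = -1;
    for p = [T F]
      for q = [T F]
        m = nand_enc(p, q);                           % this gate has fanout 2
        y = nand_enc(nand_enc(p, m), nand_enc(q, m)); % xor(p, q) via nand
        fprintf('p = %2d  q = %2d  xor = %2d\n', p, q, y);
      end
    end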
3.2 Elementary matrices

To prove the correctness of the transducer (in Theorem 3.1 below) it is convenient to introduce a block partitioning of the elementary matrices defined in the previous section.
Figure 3: Circuit computing the exclusive or of two bits (left) and the structure of the corresponding matrix (right).
Let X denote one such matrix. We partition X as

    X = ( X_I  *    *   )
        ( *    M    *   )        (1)
        ( *    *    X_O )

where the diagonal blocks X_I, M, and X_O are square matrices and two of the off-diagonal blocks are also diagonal (i.e., with only zeros outside the main diagonal). We will refer to X_I and X_O as to the input and output submatrices and let n_I and n_O denote their order, respectively. Note that an elementary matrix actually defines a set of matrices. In fact, we regard the diagonal entries of X_I as the input places. A particular elementary matrix is obtained by filling the input place(s) with the encoding of some logical value(s).
We can now formally define the "behavior" of the elementary matrices with respect to the factorization algorithm F.

Nand matrix (N). We let n_N denote the order of N in the block partitioning (1), and set N[1,1] = a, N[2,1] = b. If R is the triangular factor computed by F on input N, the output submatrix of R carries c, where c is the encoding of nand(a, b). If, as often required in practice, F overwrites the input matrix, the value c will replace the corresponding entry of N, and this is the reason why we defined X_O in (1) as the output submatrix. The same remark applies to the other elementary matrices. We also require that, for any real value x, an auxiliary vector a_N(x) = (0, 0, ...)^T can be defined such that appending it to N and factoring places x next to c in the last row of the triangular factor. Using the auxiliary
vectors we can solve the "black entries" problem outlined in Section 3.1. Intuitively, by appending auxiliary vectors to the right of D in A (instead of simply zeros, as in Figure 1 (left)), we can obtain the desired values in the black colored entries of Figure 1 (right). As we will see in the proof of Theorem 3.1, the initial zeros in the auxiliary vectors prevent the problem from pumping up in the construction of A.
Duplicator matrix (D). We let n_D denote the order of D in the block partitioning (1), and set D[1,1] = a. If R is the incomplete factorization computed by F, i.e., R represents the first n_D - 2 transformations applied to D by F, we obtain two copies of a in the output submatrix of R. We also require that, for any pair of real numbers x and z, an auxiliary vector a_D(x, z) can be defined such that appending it to D places x and z in the last rows of the incomplete factor.
Wire matrix (W). We let n_W denote the order of W in the block partitioning (1), and set W[1,1] = a. If R is the factorization computed by F on input W, we get a in the output submatrix of R. We require again that, for any real number x, an auxiliary vector a_W(x) = (0, ...)^T can be defined such that appending it to W places x next to a in the last row of the triangular factor.
Elementary matrices (including auxiliary vectors) exist for both Householder's and
Givens' methods and for Gaussian Elimination as well.
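The notion of proper embedding used throughout can be observed numerically: when a block sits in the top left corner of a larger matrix with zeros below it, the leading stages of QR factor that block exactly as if it stood alone. The following check is a sketch with arbitrary stand-in block contents, using Matlab's qr.

    % Proper embedding, numerically: the leading block of a block upper
    % triangular matrix is factored by the first QR stages on its own.
    B = [3 1; 4 2];                  % stand-in for an elementary block
    C = randn(2, 3);                 % overlap region, arbitrary values
    D = triu(randn(3)) + 5*eye(3);   % remainder of the matrix
    A = [B, C; zeros(3, 2), D];

    [~, RA] = qr(A);
    [~, RB] = qr(B);
    disp(norm(abs(RA(1:2, 1:2)) - abs(RB)))   % ~0 up to roundoff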
3.3 Construction and correctness

In this section we present our main result (Theorem 3.1) on the existence of a single reduction scheme that works for any algorithm in the class F. We still require a couple of definitions.
- Suppose that C has m input variables and let M be the number of places (inputs to the gates) where the variables are used, i.e., M is the number of inputs counting multiplicities. Let the gates of C be sorted in topological order. Given a specific assignment of logical values to the input variables we may then refer to the j-th actual input as to the j-th value from the input set that is required at some gate.
- When we say that A has an input row at position i, we intend that initially row i has one of two fixed structures, referred to as (2) and (3) below, corresponding to the two input rows of a nand block; in either case a designated entry of the row contains the encoding of one of the actual inputs to C.
Theorem 3.1 Let elementary matrices N, D, and W be given for F in the class F. For any fanout 2 nand circuit C with m input variables and any truth assignment x to the variables of C, we can build a square matrix A such that the following holds:
(a) A has order n = O(g), where g is the number of gates in C.
(b) If R is the triangular factor computed by F on input A, then R[n, n] is the encoding of the output of C on x.
(c) A has a number of input rows which equals the number of actual inputs to C; the j-th such row has either the structure (2) or (3) depending on whether the j-th actual input to C enters the first or the second input of some gate.
(d) Any actual input affects A only through one input place.
The construction can be done by using O(log n) work space.
Proof.
We prove the result by induction on g. Let x_1, ..., x_m be the actual inputs to C and let a and b be the encodings of x_1 and x_2, respectively. The case g = 1 is easy. We only have to set A = N, with N[1,1] = a and N[2,1] = b. Property (b) follows from the definition of N. Properties (c) and (d) are easily verified as well. In particular, A has exactly 2 input rows at positions 1 and 2, with structure (2) and (3), respectively, and this clearly matches the number of actual inputs to C. Finally, the actual inputs affect A only through the input places A[1,1] and A[2,1].

Now suppose that the number of gates in C is g > 1, and let g_1, ..., g_g be a topological ordering of the DAG representing C. Clearly, all the inputs to g_1 are actual inputs to C. Let C' be the circuit C with g_1 removed and any of its outputs replaced with x_1, the first input variable. Since C' has g - 1 gates, we may assume that a matrix A' can be constructed which satisfies the induction hypothesis. To build A we simply extend A' to take g_1 into account. There are two cases, depending on the fanout of g_1. We fully work out only the case of fanout 1, the other being similar but just more tedious to develop (and for the reader to follow) in detail.

1. Let g_j be the gate connected to the output of g_1. Suppose, w.l.o.g., that g_1 provides the first input to g_j. By the induction hypothesis (in particular, by (c)) A' has the following structure.
[Displayed block structure of A'.]
A' has an input row at some position i corresponding to the first input to gate g_j, and the designated entry of that row is the encoding of x_1. Note that, by property (d), the actual logical value encoded there only affects the definition of A' through that entry. Using A' and the N and W elementary matrices we define A as follows. [Displayed block matrix: A' bordered by an N block and a W block, with the proper overlappings.] The minor enclosed in boxes is a set of n_W - 1 auxiliary column vectors for W (n_W is the order of W) that we choose such that the factorization of W produces the required values, where Q is the matrix that factorizes W. Observe that only the i-th row of A' has been modified by this replacement. In what follows we regard A as a block 8 x 10 matrix, and when we refer to the j-th row (or column) we really intend the j-th block of rows (columns). Nonetheless, A is square, if A' is, with order equal to that of A' plus the size of the new blocks. Using part (a) of the induction hypothesis we then see that A has order O(g). It is easy to prove that A enjoys properties (b) through (d) as well. Assume C has k actual inputs. Since g_1 has fanout 1, C' has k - 1 actual inputs and, by induction, A' has k - 1 input rows. Now, by the above construction A has exactly (k - 1) - 1 + 2 = k input rows, which proves (c). Property (d) also easily holds. To prove (b) we use the properties of A'. By construction, the application of the first n_N - 1 stages of F to A only affects the first 3 (blocks of) rows. Thus N (including its auxiliary vectors) is properly embedded in A, and hence after the first n_N - 1 stages of F we get
[Displayed: the matrix after the first n_N - 1 stages of F] where c
is the encoding of nand(a, b). The submatrix enclosed in boxes is a set of 2 auxiliary vectors for N that we choose such that the factorization produces the required entries, where Q is the transformation matrix that triangularizes N. Note that the entries corresponding to the first elements of the auxiliary vectors contain zero, as required. If the first element (in the definition) of the auxiliary vectors were not zero, we would be faced with the additional problem of guaranteeing that the first n_N - 1 stages would set these entries to the required values.

It is again easy to see that N (including its auxiliary vectors) is properly embedded in A, so that the additional stages of F lead to the triangularization of the whole matrix. The correctness now follows from the induction hypothesis and property (b).
2. The full description of the fanout 2 case is definitely more tedious but introduces no new difficulties. A extends A' by means of an initial N block, followed by a D block, followed by two W blocks. Taking the partial overlappings into account, it immediately follows that the order of A is the order of A' plus a constant.
The construction of matrix A can be done in space proportional to log n by simply reversing the steps of the above inductive process. That is, instead of constructing A' and using it to build A, which would require more than logarithmic work space, we compute and immediately output the first rows and columns (those of the N block, or of the N and D blocks in case of a first gate with fanout 2). We also compute and output row i (or the two rows where the output of a fanout 2 gate has to be sent). All of this can be done in space O(log n), essentially by copying the elementary matrices N, D, and W to the output medium. The only possible problem might be the computation of i, but this is not the case. In fact, for any 1 <= j <= g, let f(j) be the number of fanout 2 nand gates preceding gate g_j in the linear ordering of C. This information can be obtained, when required, by repeatedly reading the input and only using O(log n) work space for counting. It easily follows from the above results that the index i_j of the top left entry of the j-th block can be computed from j and f(j). Hence i will be either i_j or i_j + 1 depending on whether the value under consideration is the first or the second input to g_j.
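As a sketch of the logspace bookkeeping, the counter f(j) can be recomputed on demand by a single pass over the gate list, keeping only counters. The offset formula below is a hypothetical placeholder: the true expression depends on the block orders and on the overlappings of Theorem 3.1.

    function off = block_offset(fanout, j, nN, nD, nW)
    % Number of fanout 2 gates preceding gate j, recomputed by one pass
    % over the input (only counters are kept, in the logspace spirit).
      f = 0;
      for l = 1:j-1
        f = f + (fanout(l) == 2);
      end
      % Placeholder offset formula; the exact expression depends on the
      % block orders nN, nD, nW and on the overlappings of Theorem 3.1.
      off = 1 + (j - 1)*(nN + nW - 2) + f*nD;
    end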
The Matlab program that implements the transducer is indeed logspace bounded. It only uses the definitions of the blocks and simple variables whose contents never exceed the size of A in magnitude. No data structure depending on n is required. Clearly, as it is implemented using double precision IEEE 754 arithmetic, it can properly handle only circuits of bounded size (roughly, up to the largest integer exactly representable in double precision).

4 Householder's QR decomposition algorithm

In this section we prove that HQR is presumably inherently sequential under both exact and floating point arithmetic. This is done by proving that a certain set E2, defined in terms of HQR's behavior, is logspace complete for P:

    E2 = { A : A = QR is the factorization computed by HQR, and R[n, n] is the encoding of True }.

Note that by HQR we intend the classical Householder's algorithm presented in many numerical analysis textbooks. In particular we refer to the one in [9]. This is also the algorithm available as a primitive routine in scientific libraries (such as LINPACK's [6]) and environments (like Matlab's [14]).

We begin with the ready to hand result about the membership in P.

Theorem 4.1 E2 is in P under both exact and floating point arithmetic.

Proof. Follows from standard implementations, which perform O(n^3) arithmetic operations (see, e.g., [9]).
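For reference, a minimal sketch of the classical algorithm just invoked, in the spirit of [9]: at stage k a Householder reflection annihilates the subdiagonal of column k. This is a textbook restatement, not the paper's instrumented version.

    function [Q, R] = hqr(A)
    % Classical Householder QR: stage k reflects column k onto a multiple
    % of the k-th unit vector.  No blocking, no pivoting.
      [m, n] = size(A);
      Q = eye(m);
      for k = 1:min(m-1, n)
        x = A(k:m, k);
        v = x;
        s = 1 - 2*(x(1) < 0);            % sign(x(1)), with sign(0) = 1
        v(1) = v(1) + s*norm(x);         % choose sign to avoid cancellation
        if norm(v) > 0
          v = v / norm(v);
          A(k:m, k:n) = A(k:m, k:n) - 2*v*(v' * A(k:m, k:n));
          Q(:, k:m)   = Q(:, k:m) - 2*(Q(:, k:m) * v) * v';
        end
      end
      R = triu(A);
    end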
Theorem 4.2 E2 is logspace hard for P under the real number model of arithmetic.

Proof. According to the result of Section 3, to prove that E2 is also logspace hard for P it is sufficient to exhibit an encoding scheme and the elementary matrices required in the proof of Theorem 3.1. As we will see, however, the floating point case asks for additional care to rule out the possibility of fatal roundoff error propagations.

We simply list the three elementary matrices required by Theorem 3.1. For each elementary matrix X, the corresponding auxiliary vector is shown as an additional column of X. That the matrices enjoy the properties defined in Section 3.2 can be automatically checked using any symbolic package, such as Mathematica.
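Before listing them, note that the check can also be set up numerically: given a handle building a candidate nand block for chosen inputs (here a placeholder for the matrix of Figure 4, auxiliary column included), one verifies that the factorization leaves the arithmetization of nand(a, b) in the last row. The builder, the tolerance, and the entry inspected are assumptions; sign conventions may require the textbook hqr above rather than a library qr.

    function ok = check_nand_block(buildN, t, f, tol)
    % buildN(a, b): candidate nand block (placeholder for Figure 4).
    % Checks that the bottom right entry of the triangular factor equals
    % the encoding of nand(a, b) for all four input combinations.
      ok = true;
      for a = [t f]
        for b = [t f]
          if a == t && b == t, expect = f; else, expect = t; end
          [~, R] = qr(buildN(a, b));
          ok = ok && abs(R(end, end) - expect) < tol;
        end
      end
    end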
N It is the 9 x 10 matrix of Figure 4, where a, b in {-1, 1} are the encodings of logical values (1 for True and -1 for False) and x is an arbitrary real number. Performing 8 steps of HQR on input N places in the last row of the triangular factor the value c, the arithmetization of nand(a, b) under the selected encoding.

D It is the 6 x 7 matrix shown in Figure 5 (left). Performing 4 steps of HQR on input D duplicates the input value, where x and z are arbitrary real numbers.
W It is the 2 x 3 matrix of Figure 5 (right). Performing 1 step of HQR on input W passes the input value along, for x an arbitrary real number.

Figure 4: The N block for HQR.

Figure 5: The D and W blocks for HQR.

Theorem 4.3 E2 is logspace hard for P under finite precision floating point arithmetic (more precisely, when HQR is applied to the best possible approximations of the blocks under the particular machine arithmetic).

Proof.
Applying a floating point implementation of HQR to any single block defined in Theorem 4.2 results in approximate results. For instance, we performed the decomposition of the four N matrices using the built-in qr function available in Matlab. We found that the relative error affecting the computed encoding of nand(a, b) ranged from a minimum of 0.5u to a maximum of 3u. Here u is the roundoff unit and equals 2^-53 in IEEE 754 double precision arithmetic. These might appear insignificant errors. However, for a matrix containing an arbitrary number of blocks, the roundoff error may accumulate to a point where it is impossible to recover the exact (i.e., -1 or 1) result. Clearly, direct error analysis is not feasible here, since it should apply to an infinite number of reduction matrices. Our solution is to control the error growth by "correcting" the intermediate results as soon as they are "computed" by nand blocks. Note that, by referring to the values computed by a certain elementary matrix X, we properly intend the non zero values one finds in the last row of the triangular factor computed by HQR on input X (including the auxiliary vectors). Analogously, the input values to X are the ones computed by the elementary matrices preceding X in A.

We take duplicator and wire blocks as in Theorem 4.2, and provide a new definition for nand blocks so that they always compute exact results. To do this, we have to consider again the structure of A, as resulting from Theorem 3.1.
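A back-of-the-envelope illustration of the error sizes just mentioned (constants here are illustrative): a single operation errs by a few units u, but long computations can accumulate many of them.

    u = eps / 2;                       % roundoff unit of IEEE 754 double
    x = 0.1 + 0.2;                     % a single operation
    fprintf('single op error: %.2f u\n', abs(x - 0.3)/0.3/u);
    s = 0; for k = 1:1e6, s = s + 0.1; end
    fprintf('after 1e6 additions: %.0f u\n', abs(s - 1e5)/1e5/u);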
Let g_l be the l-th gate in the topological ordering of C, and let g_i and g_j be the gates providing the inputs to g_l. Let N(l) denote the nand block of A corresponding to g_l, according to the construction of Theorem 3.1. To prove the result we maintain the invariant that the values computed by N(1), ..., N(l - 1) are exact. This is clearly true for l = 1. Using the invariant we first verify that the errors affecting the values computed by N(l) can be bounded by a small multiple of the roundoff unit. We then use the bound to show how to redefine N(l) so that it computes exact results, thus extending the invariant to l.

From the proof of Theorem 3.1 we know that the output of g_i (and similarly of g_j) is placed in one of the input rows of N(l) as a consequence of the factorization of possibly a D followed by a W block. It follows that the error affecting the output of N(l) is only due to the above factorizations and to the factorization of N(l) itself. Since there is a limited number of structural cases (depending on the fanout of gates g_i and g_j), and considering all the possible combinations of logical values involved, the largest error ever affecting the output of N(l) can be determined by direct (but tedious) error analysis or, more simply, by test runs. For the purpose of the following discussion we may safely assume that the relative errors affecting the computed quantities are bounded by ku, for some constant k of order unity (k is actually smaller than 10). Recall that x is the last entry of the generic auxiliary vector a(x) of N after the factorization (see the definition of N in Section 3.2). Here, however, we require that x be a machine number (i.e., a rational number representable without error under the arithmetic under consideration).

Having a bound on the error, we are now ready to show how to "correct" the erroneous outputs. The new nand block, denoted by N_corr, extends N with two additional rows and columns, involving the scaling factor 2^c, where c is a positive integer (to be specified below). Note that a(1) is precisely the auxiliary vector for the old N that produces 1 as output. The auxiliary vector for N_corr is of the form (0, ..., 2^c x, ...)^T. A first requirement on c is thus that the quantity 2^c x be a computer number. As t denotes the length of the significand, we see that a sufficient condition is that the length of the significand of x does not exceed t - c - 1. Now, let us apply HQR to N_corr extended by its auxiliary vector. As N is properly embedded in N_corr, after 8 stages of HQR we get (using the above result on the error) the slightly perturbed outputs next to the entry 2^c. A second condition on c is that 2^c must dominate the perturbation, so that the alignment of significands gets rid of the error. An easy argument shows that this requires c to exceed log2 k by a small constant. Thus, recalling the bound
on k and u, we see that c = 5 is sufficient. As a consequence, the length of the significand of x cannot exceed t - 6. The reflection actually applied in the remaining stages then cancels the accumulated error, which, by easy floating point computation, gives the exact entries; applying one more stage now leads to the correct results c and x. The above requirement on x is by no means a problem. In fact, the auxiliary values ever required are the non zero elements in the input rows of the blocks that possibly follow nand elementary matrices, i.e., D and W blocks. These are simply 1, 2, and 5/4, all of which can be represented exactly with a 3 bit significand.

The elementary matrices of Theorem 4.3 are available for the general transducer implemented in Matlab; in particular, N_corr is defined as described above.
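The arithmetic mechanism behind the correction can be seen in isolation (constants illustrative): once a value perturbed by a few units u is added to a sufficiently large power of two, significand alignment rounds the perturbation away, and exact subtraction recovers the exact value.

    c    = 5;                        % the exponent shown to suffice above
    v    = 1;                        % exact encoding of a truth value
    vhat = v*(1 + 4*eps);            % output perturbed by a few units u
    w    = 2^c + vhat;               % alignment rounds the error away
    disp(w - 2^c == v)               % prints 1 (logical true)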
5 QR decomposition through Givens' rotations

In this section we prove that the following set is logspace complete for P:

    E3 = { A : A = QR is the factorization computed by GQR, and R[n, n] is the encoding of True }.

The way we present the results of this section closely follows the methodology of Section 4. Here, however, we have to spend some more words about the particular algorithm considered. In fact, the computation of the QR decomposition can be done in various ways using plane (or Givens') rotations. Differently from Householder's reflections, a single plane rotation annihilates only one element of the matrix to which it is applied, and different sequences of annihilations result in different algorithms. By the way, this degree of freedom has been exploited to obtain the currently fastest (among the known accurate ones) parallel linear system solvers [19, 15]. We also point out that there is no Givens-based QR algorithm available in Matlab (nor in libraries such as LINPACK or LAPACK); Matlab just provides the primitive planerot that computes a plane rotation. The hardness results of this section apply to the particular algorithm, GQR, that annihilates the subdiagonal elements of the input matrix by proceeding downward and rightward. This choice places GQR in the class F defined in Section 2, with the position that one stage of the algorithm is the sequence of plane rotations that introduce zeros in one column.

Theorem 5.1 E3 is in P under both exact and floating point arithmetic.

Proof. See, e.g., [9]. We only point out that the membership in P holds independently of the annihilation order.
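For concreteness, a sketch of the annihilation order just fixed (downward within a column, columns processed rightward); the 2 x 2 rotation is computed as Matlab's planerot does. This is a plain restatement of the textbook scheme, not the paper's instrumented blocks.

    function [Q, R] = gqr(A)
    % Givens QR: stage j is the sequence of rotations clearing column j,
    % proceeding downward; columns are processed rightward.
      [m, n] = size(A);
      Q = eye(m);
      for j = 1:min(m-1, n)
        for i = j+1:m
          a = A(j, j); b = A(i, j);
          if b ~= 0
            r = hypot(a, b); c = a/r; s = b/r;    % as planerot([a; b])
            G = [c s; -s c];
            A([j i], :) = G * A([j i], :);
            Q(:, [j i]) = Q(:, [j i]) * G';
          end
        end
      end
      R = triu(A);
    end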
Theorem 5.2 E3 is logspace hard for P under the real number model of arithmetic.

Proof. As in Theorem 4.2, we simply list the three elementary matrices extended with the generic auxiliary vector. The matrices are shown in Figures 6 through 8, where a, b in {-1, 1} are encodings of logical values (1 for True and -1 for False) and x and z are arbitrary real numbers. Again, that the matrices enjoy the properties defined in Section 3.2 can be verified with the help of a symbolic package.
Figure 6: The N block for GQR.

Figure 7: The D block for GQR.

Figure 8: The W block for GQR.
We now switch to the more delicate case of finite precision arithmetic.
Theorem 5.3 E3 is logspace hard for P under finite precision floating point arithmetic.

Proof.
We apply the same ideas of Theorem 4.3. That is, we extend the definition of N so that it always computes the exact results. Here, however, we cannot reuse the D block adopted for the exact arithmetic case. There is a subtle problem that calls for a different definition of D. Let us see in detail. If we apply a floating point implementation of GQR to D we clearly get approximate results (note that the matrices for GQR contain irrational numbers). In particular, instead of the exact duplicated values, in the bottom right corner of the incomplete factor we get values perturbed by small relative errors. Even if these errors are of the order of the roundoff unit u, the fact that the subdiagonal entry is not exactly zero causes the whole construction to fail. Note that the same kind of approximate results are obtained under HQR, but with no damage there. To get to the point, suppose that the tiny nonzero entry is in column k of A and let us proceed by considering stage k of the algorithm. In HQR one single transformation annihilates the whole column, so that the contribution of a tiny perturbation to the k-th transformation matrix is negligible. On the other hand, in GQR the elements are annihilated selectively and, since the perturbation is not zero, one additional plane rotation is required between rows k and k + 1 to place zero in the entry (k + 1, k). Unfortunately this has the effect of making the element in position (k, k) positive, which is a serious trouble since this entry contained the encoding, with a small perturbation, of a truth value. The result is that, when the subsequent plane rotations (the ones simulating the routing of the logical value) are applied, the value passed around is always the encoding of True, and the simulation fails in general.

We thus need to replace the duplicator with one that returns a true zero in the subdiagonal entry of the incomplete factor, which will clearly exploit the properties of floating point arithmetic.
Let t and M denote the length of the significand and the largest exponent such that 2^M can be represented in the arithmetic under consideration, respectively. For the IEEE 754 double precision standard, t = 53 and M = 1023. The nonzero elements of the new duplicator are only powers of 2. In this way any operation is either exact or is simply a no operation. The new auxiliary vector a_D(x, z) is defined accordingly; note that the only possible assignments to x and z are 0 and 1 or 1 and 0.
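The exactness claim rests on a basic property of binary floating point: multiplication by a power of two only shifts the exponent, so (absent over/underflow) it commits no rounding error. A one-line check:

    x = -1 + 8*eps;                 % an arbitrary machine number
    disp((x * 2^17) / 2^17 == x)    % prints 1: both scalings are exact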
The rest of the proof is now similar to that of Theorem 4.3. We show how to correct the slightly erroneous values computed by an N block, assuming that the previous blocks
return exact results. Let N stand for the nand block adopted for the exact arithmetic version of GQR (Figure 6). The new nand block N_corr extends N as in Theorem 4.3, with the scaling factor 2^c on the additional rows and columns. As the new auxiliary vector we take a vector whose first 10 entries coincide with those of a(2^c x). Now, let us apply GQR to N_corr extended by its auxiliary vector. As N is properly embedded in N_corr, after 9 stages of GQR we get the outputs perturbed by an error of magnitude at most ku, for some small constant k of order unity. The plane rotation to annihilate the entry (2, 1) of the remaining block is then applied in floating point. The crucial point is that, if x can be represented with sufficiently few significant bits, the alignment of the fraction parts performed during the additions simply causes the contribution of the error to be lost. Hence, the computed element in the entry (1, 3) will be exactly 2^c x. But then one more rotation produces the exact values c and x in the last row. Note that the only value ever required in place of x is 1.
6 Gaussian Elimination with Pivoting

In this section we consider the algorithm of Gaussian Elimination with partial pivoting, or simply GEP, a technique that avoids degeneracies and ensures (almost always) numerical accuracy. We also consider the less known Minimal pivoting technique, GEM, one that only guarantees that a PLU factorization is found. Minimal pivoting has been adopted for systolic-like implementations of Gaussian Elimination [11]. A brief description of these algorithms is reported in Appendix A.

We prove that GEM is inherently sequential, unless applied to strongly nonsingular matrices, while GEP is inherently sequential even when restricted to strongly nonsingular matrices.
6.1 Partial Pivoting
The proof we give here builds on the original proof in [20], and hence does not share the common structure of the other reductions in this paper. Essentially we show that, with little additional effort with respect to Vavasis' proof, we can exhibit a reduction in which the matrix obtained is strongly nonsingular. As already pointed out, strongly nonsingular matrices are of remarkable importance in practical applications. This class contains symmetric positive definite (SPD) and diagonally dominant matrices, which often arise from the discretization of differential problems. Observe that, on input such matrices, plain GE (no pivoting) is nondegenerate, but it is not stable in general and hence is not the algorithm of choice.

As in [20], that GEP is inherently sequential follows from the proof that the following set is P-complete.

Theorem 6.1 The set

    GEP = { (A, i, j) : A strongly nonsingular and, on input A, GEP uses row i to eliminate column j }

is logspace complete for P.
We postpone the technical proof of Theorem 6.1 to Appendix C, but give an example that shows the way the matrix given in [20] is modified. Figure 9 depicts the reduction matrix which would be obtained according to the rules in [20] on input the description of the circuit of Figure 3. The matrix is nonsingular; however, it can be seen that the leading principal minor of order 2 is singular. The matrix we obtain, according to Theorem 6.1, is shown in Figure 10. It can be easily seen that our matrix is strongly diagonally dominant by rows, and hence strongly nonsingular.
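Strict diagonal dominance by rows is inherited by every leading principal submatrix (the off-diagonal row sums only lose terms), so strong nonsingularity of a matrix like that of Figure 10 can be certified by a single scan:

    function ok = strongly_dd_rows(A)
    % Strict diagonal dominance by rows; inherited by all leading
    % principal submatrices, hence the matrix is strongly nonsingular.
      d  = abs(diag(A));
      s  = sum(abs(A), 2) - d;
      ok = all(d > s);
    end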
Figure 9: Matrix corresponding to the exclusive or circuit. The symbol . stands for a zero entry.
Figure 10: The matrix for the computation of the exclusive or circuit.

6.2 Minimal Pivoting

The technique of minimal pivoting, i.e., selecting as the pivot row at stage k the first one with a nonzero entry (below or on the main diagonal) in column k, is probably the simplest modification that allows GE to cope with degenerate cases. However, such
a simple technique is sufficient to make the Gaussian Elimination algorithm inherently
sequential. Note that, even if no formal error analysis is available for GEM, it is not
difficult to exhibit matrices (that can plausibly appear in real applications) such that the
error incurred by GEM is very large. Actually, GEM is likely to be as unstable as plain
GE.
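For reference, a minimal sketch of GEM as just described (see also Appendix A); a stage is simply skipped when the search for a nonzero entry in the current column fails.

    function [P, L, U] = gem(A)
    % Gaussian Elimination with Minimal pivoting: the pivot row at stage k
    % is the first row >= k with a nonzero entry in column k.
      n = size(A, 1);
      P = eye(n); L = eye(n);
      for k = 1:n-1
        r = find(A(k:n, k) ~= 0, 1) + k - 1;
        if ~isempty(r)
          A([k r], :) = A([r k], :);          % plain row exchange
          P([k r], :) = P([r k], :);
          L([k r], 1:k-1) = L([r k], 1:k-1);
          L(k+1:n, k) = A(k+1:n, k) / A(k, k);
          A(k+1:n, k:n) = A(k+1:n, k:n) - L(k+1:n, k) * A(k, k:n);
        end
      end
      U = triu(A);
    end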
Consider the following set:

    E4 = { A : PA = LU is the factorization computed by GEM, and U[n, n] is the encoding of True }.

Theorem 6.2 The set E4 is logspace complete for P under both real and finite precision floating point arithmetic.

Proof. The set is clearly in P, as GEM runs in time O(n^3) under both models of arithmetic. We first show that E4 is also logspace hard for P when the input matrices may be singular, and then show how to restrict the input set. As GEM belongs to the class F, to prove the hardness of E4 we simply list the three elementary matrices required by Theorem 3.1. Note that the matrices are the same for both models of arithmetic, as the operations performed by GEM in floating point are exact. The encoding of logical values here is 0 for False and 1 for True. The matrices are depicted in Figures 11 and 12.
Figure 11: The nand (left) and wire (right) blocks for GEM.

Figure 12: The duplicator block for GEM.
The duplicator block is clearly singular. Now consider the following matrix B of order 2n, where n is the order of A:

    B = ( A  J )
        ( J  O )

where J is the matrix with 1 on the antidiagonal and 0 elsewhere. The determinant of B can be easily proved to be +1 or -1. Moreover, if PB = LU is the factorization computed by GEM, then U[n, n] carries the same value it carries on input A. Note then that what we prove to be P-complete is not exactly E4, but a set with a little more complicated definition (which is still in terms of GEM's behavior).

As usual, in the following the notation A^(k) will be used to denote the matrix obtained after k - 1 stages of GEM on input A, considering only the entries (i, j) with i, j >= k. However, by writing B^(k) we intend the submatrix obtained after k - 1 stages of GEM (on input B), considering only the entries (i, j) such that k <= i, j <= n. With this position, to prove that the output of C can be read off entry (n, n) of the U factor of B, we show that the executions of GEM on input A and on input B result in identical submatrices A^(k) and B^(k), for 1 <= k <= n. The proof is by induction. Initially, the equality follows from the definition of B. Consider stage k >= 1. If column k of A^(k) contains a nonzero element below or on the main diagonal, say at row index r, then the selected pivot row is the r-th under both executions. The result follows then from the induction hypothesis and the fact that exactly the same operations are performed on the elements of the submatrices. If no nonzero element is found in column k of A^(k), then stage k of the first execution has no effect, and hence A^(k+1) = A^(k). Under the second execution (the one on input B), by construction the pivot is taken from row 2n - k + 1. However, the pivot is the only nonzero element in row 2n - k + 1; thus the effect of this step is simply the exchange of rows k and 2n - k + 1. But then once more B^(k+1) = A^(k+1).
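Under the reading B = (A J; J O) adopted above, the determinant claim is immediate to test numerically; the singular A below is an arbitrary stand-in:

    n = 4;
    A = magic(n); A(2, :) = A(1, :);    % a singular test matrix
    J = fliplr(eye(n));                 % ones on the antidiagonal
    B = [A, J; J, zeros(n)];
    fprintf('det(A) = %g   det(B) = %g\n', det(A), det(B));  % det(B) = +/-1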
We conclude by observing that the set E4 of Theorem 6.2 is clearly NC computable when the input set is the class of strongly nonsingular matrices. In fact, in this case, GEM and plain Gaussian Elimination behave exactly the same.
7 On algorithms for the PLU factorization
In this section we show that a known algorithm for computing a PLU decomposition of a nonsingular matrix (see [7]) corresponds to GE with a nonstandard pivoting strategy which is only a minor variation of Minimal pivoting. This result may seem to be just a "curiosity"; however, we can prove that the same strategy is inherently sequential on input arbitrary matrices, which can be seen as further evidence of the difficulty of finding a fast parallel algorithm to compute the PLU decomposition of possibly singular matrices.

The new strategy will be referred to as Minimal pivoting with circular Shift, and the corresponding elimination algorithm simply as GEMS. The reason for its name is that GEMS, like GEM, searches the current column (say, column k) for the first nonzero element. Once one is found, say in row r, a circular shift of rows k through r is performed to bring row r in place of row k (and the latter in place of row k + 1).
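The only change with respect to GEM is the row motion; a sketch (the gem sketch above with two lines modified):

    function [P, L, U] = gems(A)
    % GEMS: minimal pivoting with circular shift.  Row r moves to
    % position k; rows k..r-1 slide down one place, keeping their order.
      n = size(A, 1);
      P = eye(n); L = eye(n);
      for k = 1:n-1
        r = find(A(k:n, k) ~= 0, 1) + k - 1;
        if ~isempty(r)
          idx = [r, k:r-1];                  % circular shift of rows k..r
          A(k:r, :) = A(idx, :);
          P(k:r, :) = P(idx, :);
          L(k:r, 1:k-1) = L(idx, 1:k-1);
          L(k+1:n, k) = A(k+1:n, k) / A(k, k);
          A(k+1:n, k:n) = A(k+1:n, k:n) - L(k+1:n, k) * A(k, k:n);
        end
      end
      U = triu(A);
    end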
Theorem 7.1 Computing the PLU factorization returned by GEMS on input a nonsingular matrix is in arithmetic NC.

Proof. We consider the algorithm of Eberly [7]. Given A, nonsingular of order n, let A_k denote the matrix formed from the first k columns of A, and let I_k be the set of indices of the lexicographically first maximal independent subset of the rows of A_k. Then |I_k| = k, since A_k has full column rank. Moreover, I_1 is contained in I_2, which is contained in I_3, and so on. Note that the computation of all the I_k is in NC (see [3]). Now, write I_n = {i_1, ..., i_n}, with i_k the index entering at step k. Then a permutation P such that PA has an LU factorization is simply the one whose rows are the unit vectors e_{i_1}, ..., e_{i_n}, where e_i is the i-th unit (column) vector. Clearly, once P has been determined, computing the LU factorization of PA can be done in polylogarithmic parallel time using known algorithms. We now show by induction on the column index k that P is the same permutation determined by GEMS. The basis is trivial, since i_1 is the index of the first nonzero element in column 1 of A. Now, for k > 1, let the partial factorization computed by GEMS after k stages be given, where the computed factor is upper triangular with nonzero diagonal elements (since A is nonsingular) and the unit vectors e_{i_1}, ..., e_{i_k} extend to form a permutation matrix. Clearly, Minimal Pivoting ensures that the rows selected so far are exactly those indexed by I_k. Now, the next pivot row selected by GEMS is the one corresponding to the first nonzero element in the first column of the reduced matrix. Let i_{k+1} be the index of the pivot row. Since Gaussian Elimination does nothing but linear combinations between rows, it follows that the initial matrix satisfies the corresponding independence condition on its first k + 1 columns. This in turn implies that I_{k+1} = I_k together with {i_{k+1}}, i.e., that P is the permutation determined by GEMS.
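The sets I_k are easy to compute sequentially by rank tests, relying on the containment I_1 in I_2 in ... stated above; the NC machinery of [3] is of course replaced here by a naive stand-in. For nonsingular A the indices produced agree with the pivot rows of the gems sketch above.

    function I = lexfirst_rows(A)
    % Lexicographically first maximal independent row subsets I_1,...,I_n,
    % grown one index at a time (naive rank tests instead of the NC
    % computation of [3]).
      n = size(A, 2);
      I = [];
      for k = 1:n
        Ak = A(:, 1:k);
        for i = 1:size(A, 1)
          if ~any(I == i) && rank(Ak([I, i], :)) == numel(I) + 1
            I = [I, i];
            break;
          end
        end
      end
    end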
We now show that GEMS is inherently sequential by proving that the set

    E5 = { A : PA = LU is the factorization computed by GEMS, and U[n, n] is the encoding of True }

is P-complete. Clearly, that E5 is in P is obvious, so what remains to prove is the following.

Theorem 7.2 E5 is logspace hard for P.

Proof. Once more GEMS is in the class F, so we simply give the elementary matrices. This is very easy. Everything is the same as in the first part of the proof of Theorem 6.2, except for the auxiliary vector a_D(x, z) of D, which is redefined accordingly.

It is an easy but interesting exercise to understand why the second part of Theorem 6.2, which extends the P-completeness result to nonsingular matrices, does not work here (we know that it cannot work, in view of Theorem 7.1).

8 Conclusions and Open Problems
The matrices corresponding, for both Householder's and Givens' algorithms, to a circuit are singular, in general. More precisely, the duplicator elementary matrix is singular, so that all the matrices that do not correspond to simple formulas (fanout 1 circuits) are bound to be singular. All the attempts we made to extend the proofs to nonsingular matrices failed. The deep reasons for this state of affairs could be an interesting subject per se. To see that the reasons for these failures might be deeper than simply our technical inability, we mention a result of Allender et al. [1] about the "power" of singular matrices. They prove that the set of singular integer matrices is complete for the complexity class C=L (a set S is in C=L provided that there is a nondeterministic logspace bounded Turing machine M such that x belongs to S iff M has the same number of accepting and rejecting computations on input x). The result extends to the problem of verifying the rank of integer matrices. Of course, our work is at a different level: we are essentially dealing with presumably inherently sequential algorithms for problems that parallelize very well (using different approaches). However, the coincidence suggests that nonsingular matrices might not have enough power to map a general circuit. This is the major open problem for the QR algorithms.
Also, for general matrices, it would be interesting to know the status of Householder's algorithm with column pivoting, which is particularly suitable for accurate rank determination under floating point arithmetic.

For what concerns Givens' rotations, an obvious open problem is to determine the status of other annihilation orderings, especially the ones that proved to be very effective in limited parallelism environments [19, 15]. We suspect that these lead to inherently sequential algorithms as well.
Table 1: Parallel complexity of GE with different pivoting strategies and for different classes of input matrices.

                 strongly nonsingular    arbitrary
    GEP          inherently seq.         inherently seq.
    GEM          NC                      inherently seq.
    GEMS         NC                      inherently seq.

References

E. Allender, R. Beals, M. Ogihara. The complexity of matrix rank and feasible systems of linear equations.
L. Csanky. Fast parallel matrix inversion algorithms. SIAM J. Computing, 1976.
W. Eberly. Efficient parallel independent subsets and matrix factorizations.
G. H. Golub, C. F. Van Loan. Matrix Computations. Johns Hopkins University Press.
R. Greenlaw, H. J. Hoover, W. L. Ruzzo. Limits to Parallel Computation: P-Completeness Theory. Oxford University Press.
F. T. Leighton. Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes. Morgan Kaufmann.
J. H. Wilkinson. The Algebraic Eigenvalue Problem. Oxford University Press.
On the parallel complexity of [title truncated].
Parallel complexity of Householder QR factorization.
An alternative Givens ordering.
Complexity of parallel matrix computations.
On stable parallel linear system solvers.
Parallel linear algebra.
Appendix A. The algorithms

GE computes the LU decomposition of A (whenever it exists) by determining a sequence of n - 1 elementary transformations. The variants considered differ in the pivoting strategy, i.e., in the choice of the pivot row at each stage: GEP computes a PLU decomposition of A using partial pivoting, while GE with Minimal pivoting (GEM) selects the first admissible row. HQR, the QR factorization via Householder's reflections, applies a sequence of n - 1 elementary orthogonal transformations to A. GQR applies plane rotations to general real matrices; the rotation used to annihilate a selected entry of a matrix is an orthogonal matrix acting on two rows only. A floating point system is characterized by its significand length and its exponent range.

Appendix C. Proof of Theorem 6.1 (sketch)

Let A denote the matrix corresponding to the circuit C according to Vavasis' proof [20]. Vavasis' proof is based on the observation that a NAND gate outputs False unless one of its inputs is False. Our matrix has order 3n. Define the auxiliary columns of the matrix to be the set of odd-numbered columns of the main submatrix. The proof of the theorem is a consequence of two lemmas describing the facts that hold of GEP on input the matrix, whose structure is shown in Figure 13. Consider step 2k - 1 of GEP: the entries with row index larger than 2k are untouched, and by the induction hypothesis the first 2k - 2 elimination steps did not affect the auxiliary columns; this is used in order to prove (d). The argument then splits on whether the pivot at step 2k - 1 is nonzero.

Figure 13: The structure of the matrix.
323803 | Redundant Radix Representations of Rings. | AbstractThis paper presents an analysis of radix representations of elements from general rings; in particular, we study the questions of redundancy and completeness in such representations. Mappings into radix representations, as well as conversions between such, are discussed, in particular where the target system is redundant. Results are shown valid for normed rings containing only a finite number of elements with a bounded distance from zero, essentially assuring that the ring is discrete. With only brief references to the more usual representations of integers, the emphasis is on various complex number systems, including the classical complex number systems for the Gaussian integers, as well as the Eisenstein integers, concluding with a summary on properties of some low-radix representations of such systems. | Introduction
Number representations have for long been a central
research topic in the field of computer arithmetic, since
choosing the wrong number system can have detrimental
effects on such aspects of computer design as storage effi-
ciency, accuracy and speed of operation. Designing a number
system amounts to choosing a representation suitable
for computer storage and transfer of elements of a set of
numbers, such that arithmetic operations can be performed
with relative ease on these, by merely manipulating their
representation.
number system has achieved the kind of wide spread
acceptance and popularity that the radix representations
have. Radix polynomials represent the elements of a set
by a weighted sum of digits, where the weights are integer
powers of the base or radix. This representation has the
advantage that each digit can be drawn from a small finite
digit-set easily encoded into machine states, and that arithmetic
algorithms can be broken into atomic steps operating
on individual digits. An important issue in the design of
number systems is the notion of completeness, i.e.,
does a given base and digit-set combination have the desired
effect of being able to represent all the elements of
the set of numbers in question. Equivalently the notion
Asger Munk Nielsen is with MIPS Technologies, Copenhagen, email:
asgern@mips.dk, and Peter Kornerup is with the Dept. of Mathematics
and Computer Science, Odense University, Denmark, email: ko-
rnerup@imada.ou.dk. This work has been supported by grant no.
5.21.08.02 from the Danish Research Council.
Manuscript received January 6, 1998, revised May 25 and October 7, 1998.
of redundancy is of importance, e.g., the presence of alternative
polynomials representing the same element,
has had a profound influence on algorithms and speed of
arithmetic operations in modern microprocessors. Redundancy
may allows parallel, constant time addition, and is
thus paramount to fast implementation of multiplication,
division and other composite computations.
As microprocessors become increasingly more complex,
the problems that can be solved in hardware likewise increase
in complexity. As an example we are at the point
where signal processing problems demanding fast and frequent
execution of arithmetic operations on complex numbers
can be solved by dedicated hardware [2]. It seems logical
to investigate alternative number representations for
these problems, addressing such issues as redundancy and
storage efficiency. Unfortunately assessing important questions
as completeness and redundancy, are no longer quite
as trivial tasks when we turn our attention to sets like complex
numbers. Answering these questions requires a fundamental
understanding of the underlying mathematical
foundation of radix polynomials. The goal of this paper is
to clarify some of these issues, while providing usable tools
for designing and evaluating number systems.
We will do this by using such well founded and widely
understood mathematical notions as rings, residue classes
and norms. This paper extends the work done by Matula
in [11; 12] to the general notion of commutative rings, and
gives an analysis of some previously discussed representations
of complex numbers, e.g. [9; 16; 4], but here emphasizing
redundant digit-sets for these representations. There
are a number of results on non-redundant representations,
also on the representation of complex numbers, e.g. [7; 5;
6; 1], but the question of redundancy beyond the usual
integer-based systems (e.g., as treated in [10]) has had little
treatment in the past. It is unavoidable that many of the
results included for completeness may seem well known, as
they are straightforward generalizations or formalizations
of known properties, possibly from the "folklore".
Section 2 introduces the notation and definitions of completeness
and redundancy of a digit-set, together with results
on these properties, as generalizations of results from
[11; 12]. Section 3 then discusses the determination of radix
representations, non-redundant as well as redundant, together
with an algorithm for determining whether a digit-
set is complete, i.e., capable of representing all elements
of the ring. Termination of these algorithms requires the
ring to be "discrete", in the sense that given a norm on the
ring, only a finite number of elements has norm less that
any given constant. This is likely to be satisfied for any
rings of practical interest, since most such rings have some
kind of lattice structure.
Section 4 then discusses mappings between radix representations
of different systems, in particular digit-set con-
versions. It is here shown that conversions are possible
with a finite carry-set under the same condition for termination
as above. For conversion into a redundant digit-set,
and thus addition as a special case, it is generally assumed
possible to do so in parallel and with limited carry propa-
gation, if only the digit-set is redundant and complete. It
is demonstrated that this is not the case in general, further
conditions on the regularity of the digit-set are needed.
In Section 5 some of the systems presented in the past for
representing complex numbers are then discussed, extending
these also into redundant representations. Section 6
then concludes with a summary of some of the properties
of practical concern for implementations of complex arithmetic
II. On the Representation of Numbers
This paper is devoted to the study of radix representations
of commutative rings. As a foundation for this study,
we will rely on the algebraic structure of sets of polynomi-
als. If R is a ring, then the entity denoted by R[x] is the
set of polynomials over the ring R. Each of these polynomials
is a formal expression in the indeterminate x of the
each coefficient is an element of R. The
set of Laurent polynomials over the ring R , denoted by
R [x], is the set of polynomials of the form
that we will
here require that l and m are finite, so that any of the
above polynomials only has a finite number of terms.
When an element of a ring (a "number") is represented
in positional notation, each digit has a weight equal to
some power of the radix. The radix is in itself an element
of the ring and the digits are elements of a finite subset of
the ring, this subset is termed the digit-set. An element
of the ring may be represented by an algebraic structure
termed a radix polynomial. These polynomials are similar
to the polynomials over a ring, and may be thought of
as the algebraic objects expressible by a number system
characterized by a fixed base or radix fi 2 R and a digit-set
\Sigma. In this paper we will assume that the zero element of
the ring is always a member of the digit-set, and that the
base is not equal to the zero element, neither is it a unit of
the ring. For instance if the ring of integers,
then we will assume 0 2 \Sigma and jfij ? 1.
Definition 2.1 Radix Polynomials over \Sigma
with
In analogy with the definition of Laurent polynomials, assuming
exists in some extension of R, we define:
Definition 2.2 Extended Radix Polynomials over \Sigma
with
The extended radix polynomials may be thought of as algebraic
objects representing elements (numbers) with fractional
digits. If we replace [fi] by fi in a radix polynomial,
we evaluate the polynomial in the point
determine the element of the ring that the polynomial rep-
resents. This procedure may be formalized by the following
function defined as the evaluation mapping
The radix polynomials map into the ring R, and the extended
radix polynomials map into the set of elements defined
as the fi \Gamma ary elements:
Example: For the ring
the binary numbers, A 8 the octal numbers, and A 10 the
decimal numbers. 2
Observe that i in (4) may be negative, so the fi \Gamma ary
elements may contain fractional parts, but note that then
e.g., A 2 since i is finite.
Since the evaluation mapping k \Delta k is a homomorphism
from P[fi; R] into A fi , arithmetic in the ring A fi can be performed
in the ring of (extended-) radix polynomials, while
preserving a correct representation of the elements in A fi .
But the evaluation mapping is not an isomorphism, since
it is not necessarily one-to-one, for instance if one element
can be represented by more than one radix polynomial.
The goal of our study is to determine criteria for which
radix polynomials written over a digit-set sufficiently represents
a ring. By sufficiently we will understand that the
number system is capable of representing all the elements
of the ring, in the sense that for each element of the ring
there should exist at least one radix polynomial that represents
this element.
Definition 2.3 Completeness
A digit-set \Sigma is complete base fi for the ring R if and only
if
8r
The definition of completeness has deliberately been defined
based on the radix polynomials and not on the extended
radix polynomials, since in the latter case this would
lead to some obscure digit-sets being complete, i.e., digit-
sets where fractional digits are needed to represent the non-
fractional elements of A fi . An example being
fractional digits are needed
to express the odd integers. On the other hand, if a digit-
set \Sigma is complete base fi for the ring R, it is also complete
for the ary elements, in the following sense:
Let hzi denote the ideal I = fkzjk 2 Rg, generated by
z in the ring R, then the set r + I is termed a co-set.
Furthermore let R=I denote the set of distinct co-sets, and
jR=Ij the number of distinct co-sets. We will say that two
elements are congruent modulo I if r
and adopt the notation r 1 j r 2 mod I. If a set S has
exactly one element from each distinct co-set in R=I then
S is a complete residue system modulo I.
Example: For the ring of integers, the ideal generated by
i.e., all the elements divisible by fi. An example of a co-set
:g. The
set is a complete residue system modulo
fi, thus
Lemma 2.4 If \Sigma is complete base fi for the ring R, then
\Sigma contains a complete residue system modulo fi, and consequently
Proof: Let e 2 R. Since \Sigma is complete there exists a
polynomial of the form
with thus the element
e is represented by the residue class d 0
Consequently \Sigma contains a complete residue system
modulo fi. 2
The converse statement does not hold, e.g., the digit-set
1g is not complete base for the integers,
nor is although both are complete residue
systems modulo
turns out to be complete for
As previously noted, some digit-sets allow a single ring
element to be represented by numerous radix-polynomials,
these digit-sets are termed redundant.
Definition 2.5 Redundancy
A digit-set \Sigma is redundant base fi for the ring R if and
only if
and is non-redundant base fi if and only if
Redundancy can complicate the determination of the
sign or the range of a number, but the presence of redundancy
can also be desirable. By exploiting the redundancy,
arithmetic operations can be performed more efficiently,
e.g., addition and subtraction may then be performed with
limited carry propagation and hence in constant time.
The following lemma provides a condition for the presence
of redundancy.
Lemma 2.6 If \Sigma is complete base fi for the ring R, and
j\Sigmaj ? jR=hfiij, then \Sigma is redundant base fi.
that d 1 j d 2 (mod fi) thus 9k
Since \Sigma is complete base fi, there exists a polynomial P 2
I
with we conclude that \Sigma is redundant
base fi. 2
The difference between two congruent digits is a multiple
of the radix, if the factor is in the digit-set or is representable
then the digit-set is redundant. Thus redundancy
can also occur in non-complete digit-sets. For instance if
expressing the same element of the ring Z, thus \Sigma is re-
dundant. On the other hand \Sigma is not complete since no
negative integer can be expressed.
Lemma 2.7 If j\Sigmaj ? and the number of
elements from R that can be represented with radix polynomials
of degree at most n is bounded by \Phi n C \Delta k n
then \Sigma is redundant base fi.
Proof: Let ng be the
set of radix polynomials of degree at most n. The number
of such polynomials is jQn . The
ratio:
has a limit value of zero as n tends towards infinity, thus
there will be more polynomials than elements to represent,
e.g., \Sigma is redundant base fi. 2
Theorem 2.8 For the ring of integers (i.e.,
j\Sigmaj ? jZ=hfiij then \Sigma is redundant base fi.
Proof: Consider the ring of integers
we have
the largest numerical value that can be represented by a
radix polynomial of degree at most n is given by
thus the number of integers that can be represented is
bounded by
As demonstrated the condition of Lemma 2.7 is satisfied,
thus j\Sigmaj ? jZ=hfiij implies that \Sigma is redundant base fi. 2
A similar result can be proven for the ring of Gaussian
integers (see Lemma 5.1), in fact we have been unable to
find rings where j\Sigmaj ? jR=hfiij does not imply that \Sigma is re-
dundant, thus it seems likely that the following conjecture
holds.
Conjecture 2.9 If j\Sigmaj ? jR=hfiij, then \Sigma is redundant
base fi.
Lemma 2.10 If there exists no digits
belonging to the same residue class modulo fi (i.e., j\Sigmaj
then \Sigma is non-redundant base fi for the ring R.
Proof: Assume that
l
the smallest index such that p k 6= q k then
and consequently p k j q k (mod fi), a contradiction. 2
As stated above, the amount of redundancy is closely
related to the size of the digit-set, so we define the redundancy
index of a digit-set \Sigma, as
From Lemma 2.4 we note that a negative redundancy
index implies that \Sigma cannot be complete, and for rings
satisfying Conjecture 2.9, that a positive index implies that
the digit-set is redundant, and finally from Lemma 2.10
that an index less than or equal to zero implies that the
digit-set is non-redundant.
If R is an integral domain, R is said to be ordered if and
only if R contains a non-empty subset R + such that
1.
2. Each element of R belongs to exactly one of the sets
g.
The set R + is termed the positive elements of R. As an
example one easily checks that the integers are ordered,
since they can be divided into three sets, namely Z
0g.
Definition 2.11 If R is ordered, a digit-set \Sigma is termed
semi-complete base fi for the ring R, if and only if \Sigma is
complete base fi for the positive elements in the sense
that
8r
If a digit-set is semi-complete for a ring R, then by definition
all the positive elements of the ring can be represented,
thus if an element of R is represented by its magnitude (i.e.,
a positive element), along with a sign indicating whether
the element belongs to R
of the ring can be represented. Historically these representations
are referred to as sign-magnitude representations.
III. Determining a Radix Representation
This section covers the problem of determining a radix
representation of a ring element, given a base and a finite
digit-set. It will generally be assumed that the ring R is an
integral domain, and that the ring is normed, in the sense
that there exists a norm N We will assume
that the norm satisfies 8a; b 2 R:
1.
2.
3.
Furthermore we will assume that given a real number k 2
exists only a finite number of elements in R that
has at most norm k, i.e.,
This assumption is needed for the termination of algo-
rithms, essentially assuring that the ring is "discrete", not
having condensation points.
If \Sigma is a complete residue system modulo fi, for any element
r 2 R the following algorithm terminates after a
finite number of steps. The correctness follows from arguments
similar to those of the proof below of Theorem
3.2.
Algorithm 3.1 DGT-Algorithm
Stimulus: A base fi, a digit-set \Sigma that is a complete
residue system modulo fi, and an element r 2
R.
Response:
with
or
while r l 6= 0 and OK do
h find d l
r l+1 / (r l \Gamma d l )=fi
l
Example: Consider the ring of Gaussian integers
\Gamma1, and the number system
1g. Using the DGT-algorithm we will
determine a radix polynomial representing the Gaussian
integer
thus d
fi , and the DGT-algorithm
proceeds as indicated in the following table, and as depicted
in
Figure
1.
l
r l 3
Thus the radix polynomial
is a representation of
r 4
r 6
Fig. 1. DGT-algorithm example: Conversion of 3
polynomial from P I 1g]. The black dots represents the
ideal
The following theorem is based on the DGT-algorithm,
and provides a test for the completeness of a digit-set,
showing that it is sufficient to check the representability
of a small set of ring elements.
Theorem 3.2 Let R be a ring, and N
Let \Sigma be a digit-set containing a complete residue system
modulo fi, then \Sigma is complete base fi for the ring R if and
only if
8r
\Sigma 0 is a complete residue system modulo fi.
Proof: If \Sigma is complete then by definition 8r
I thus assume (6) holds. Choose any
r 2 R, and in analogy with the DGT-algorithm choose a
sequence of digits d :, from the remainders
(this is possible since \Sigma 0 contains a complete residue system
modulo fi). Form the subsequent remainders as:
Notice that fi divides r
From the properties of the norm N we deduce:
thus
Since there exists only a finite number of elements of
norm at most N (r 0 ), after a finite number of steps we arrive
at some remainder r k , such that
By assumption there exists a polynomial P 2 P I [fi; \Sigma] with
and by the recurrence (7) we have
thus the polynomial P
with value kP r is a representation of r, and \Sigma is
complete base fi. 2
Corollary 3.3 Let R be an ordered ring, with positive elements
be a digit-set
containing a complete residue system modulo fi, then \Sigma is
semi-complete if and only if
8r
contains
a complete residue system modulo fi.
Theorem 3.2 together with the DGT-algorithm can be
used to establish the completeness of a digit-set.
Example: In Section II it was claimed that the digit-
set
proper subset is complete. By Theorem 3.2 it is sufficient
to check the representability of the members of the
set can be chosen, using
the absolute value as the norm. Here it turns out that 2
is most difficult to represent, needing the digit \Gamma13 as well
as \Gamma19 in its string representation 1 0 0 19 19 13, whereas
there is no finite representation if only one of these digits
is available. 2
As an example, employing the absolute value as a norm
on the integer ring, Theorem 3.2 can be used to check when
a traditional, contiguous digit-set for the integers is com-
plete, e.g., it can be used in the proof of the following
lemma.
Lemma 3.4 The digit-set
complete for the ring of integers Zif
(rs
and non-complete otherwise.
Whenever \Sigma is complete for base fi, by definition any
a 2 R has a radix polynomial representation, but Algorithm
3.1 (DGT) can only be applied when \Sigma is non-redundant
(\Sigma is a complete residue system modulo fi).
Theorem 3.2 provides a clue to a modified algorithm which
can be applied also for redundant digit-sets. The problem
here is that there may be infinitely many radix polynomial
representations of a given a 2 R, and to assure termination
a finite representation must be enforced. But for any
a satisfying (6) we can choose a finite, shortest, canonical
representation R a 2 P[fi; D], kR a a, and these representations
can then be tabulated. We may thus formulate
a modified DGT-algorithm as follows:
Algorithm 3.5 DGT-Algorithm for complete digit-sets
Stimulus: A base fi, a digit-set \Sigma that is complete for
[fi], and an element r 2 R.
Response: A radix polynomial
with
while r l 6= 0 do
if N (r l )
then
h choose d l as the least significant digit
of the canonical representation of r l i
else
choose some d l 2 \Sigma such that
r l+1 / (r l \Gamma d l )=fi
l
The algorithm thus needs a table containing for each r
with N (r)
, the value of the least significant
digit of the canonical polynomial of value r. If there is
more than one polynomial of the lowest possible degree
representing r, any one can be chosen as the canonical
representation, except when the degree is zero where the
canonical representation has to be chosen as a digit in the
digit-set to assure termination.
IV. Conversion between Radix Systems
In practice, the most likely situation is that an element of
a ring is given in some radix representation (the source sys-
tem), but a conversion is needed into some other radix system
(the target system) over the same ring. Algorithm 3.5
is sequential, and hence does not exploit the possibilities
of parallelism available if the digit-set of the target system
is redundant.
In general a conversion may be needed between systems
with different radices (a base conversion), which can be
performed by evaluating the source polynomial, and then
applying one of the above DGT algorithms mapping the
element into the target system.
However in some cases it is possible two use a parallel
procedure. In general this is the case if some power of the
source base equals some power of the target base.
Definition 4.1 The radices fi S and fi T both elements of
the same ring, are termed compatible if there exists integers
1g such that
For instance are compatible bases, since
are compatible
since 8. Note that if fi S and fi T are compatible bases,
then A fi suggesting that if \Sigma T is complete base fi T ,
then for all there exists at least one radix
polynomial
To convert a polynomial P 2 into a polynomial
in a representation using a compatible base fi T , we
might proceed as follows (assuming fi p
1. Convert from (fi S ; \Sigma S ) into (ffifi p
2. Convert from (fi q
The first step of the conversion amounts to grouping digits
and evaluating these groups as digits in an intermediate
digit-set
The second step is a bit more complicated, since this step
involves splitting digits from the set \Sigma 0 into groups of q digits
from a final digit-set \Sigma 00 . The final digit-set \Sigma 00 should
be chosen such that for each digit d 0 2 \Sigma 0 , there should
exist a q-digit radix polynomial in P I [fi representing
this digit. The existence of such a digit-set is shown in the
following lemma [14].
Lemma 4.2 If fi 2 R, \Sigma 0 ae R and q ? 0 then there exists
a digit-set \Sigma 00 ae R such that
furthermore if \Sigma 0 is redundant base fi q , then \Sigma 00 is necessarily
redundant base fi.
Example: Let the source system be defined by fi
and and the target base be fi
i. The bases are compatible since fi 2
The intermediate digit-set is calculated from (13) as
This digit-set is redundant base ffi fi p
\Gamma4, thus from
Lemma 4.2 the final digit-set \Sigma 00 must also be redundant. It
is easily shown that the redundant digit-set \Sigma
satisfies (14).
A table of q-digit polynomials from P I [fi representing
the digits in \Sigma 0 can now be constructed. When
changing the base of a polynomial P 2
of digits are used to generate intermediate digits
from \Sigma 0 . From each of these digits groups of
from \Sigma 00 are generated.
As an example, let us convert the
from the source system into a polynomial from the target
system, using these steps.
-z
z -
z -
z -
Thus the polynomial
is a representation
of kPk in the target system
When converting a polynomial from the set P[fi; \Sigma S ] to
a polynomial in the P[fi; \Sigma T ] we will assume, that the two
systems are related in such a way, that there exists a p 1
and a carry set C ' R, such that for all c 2 C and d 2 \Sigma S
there exists c 0 2 C and e 0 2 \Sigma T such that
It is useful in certain contexts with p ? 1, e.g., when \Sigma S
and \Sigma T are subsets of some subring S of R, and there
exists a p such that fi p 2 S so that also C ae S. E.g., when
2i so
We may then define a conversion mapping
that for \Sigma T non-redundant ff p is unique, while for \Sigma T redundant
there are several possible mappings ff p . Also observe
that the computation of carries in a conversion can
take place as p parallel processes, each process taking care
of one "carry-chain".
Provided that the carry-set C is finite, a table of the
conversion mapping may be constructed. Each entry of
the table contains a pair c; e where c 2 C; e 2 \Sigma T . Let C
initially contain the zero element of R, and let d 2 \Sigma S be
a digit from the source system. Since the target system is
complete there exists
that kP 0 is to
be included in C. Repeat this for all d in \Sigma S , this takes
care of the zero element in C. Now repeat this process for
any c 2 C, mapping c+d into a pair c 0 ; e 0 such that
possibly adding new elements to
C and thus the need for new rows in the table. In this way
the complete carry set is deduced while constructing the
conversion mapping. Under the same conditions on R and
its norm as for the DGT-Algorithms (3.1 and 3.5) we can
now show termination of the construction:
Theorem 4.3 For any P[fi; \Sigma S ] and P[fi; \Sigma T ] in a normed
ring R with \Sigma T complete for fi, there exist a conversion
mapping,
carry-set C ae R.
Proof: Let C g.
Relating to the algorithm described above, define C
then easy to show that
hence N (c 0 S) must remain bounded, so for some
and the algorithm can terminate with a finite carry-
set
Example: Consider conversion from the source digit-set
into the target digit-set \Sigma
the base 1. Note that \Sigma S is redundant
base fi, whereas \Sigma T is non-redundant. The following table
shows the conversion mapping deduced while constructing
the complete carry-set.
1Employing the conversion mapping, it is possible to convert
in linear time, starting at the least significant posi-
tion, forwarding carries in the usual way. But using parallelism
it is possible to convert faster. For simplicity of
notation in the following, we shall now assume that
the generalization is trivial. Given a conversion mapping
define a set of carry-transfer
functions ffl d g d2\Sigma S ,
and another set of functions f d g d2\Sigma S ,
digit-mapping functions:
Note that fl d (c) is a function describing the mapping of an
incoming carry value (c) into its outgoing carry value
when "passing through" a particular digit value (d) being
converted. We can now immediately generalize carry-
transfer functions to strings of digits:
or
where ffi denotes functional composition. The function fl dk
thus describes the carry transfer through the digit
string d k d functional composition is asso-
ciative, the carries into all positions can be computed in
logarithmic time using parallel prefix computation.
This is optimal when \Sigma T is non-redundant, but with \Sigma T
redundant we expect to be able to perform the conversion
in parallel and in constant time. The idea in the multi-level
conversion is to perform it through (possibly several) con-
versions, each rewriting digits in parallel and moving carries
one position forward, where the carries are absorbed.
Note that the rewriting of a digit in each of these conversions
is independent of the rewriting of its neighbors. Thus
there is only a limited carry propagation, corresponding to
the fixed number of levels. This is the well-known technique
used in redundant addition, which is a special case
of conversion, converting from the digit-set consisting of
digits formed as sums of two digits, back into the original
digit-set. This type of addition can be performed in two or
possibly three levels of conversions for standard digit-sets.
Each conversion converts a digit d into some digit e and
a carry c so that adds an incoming
carry c 0 to e generating a digit e e. Introducing the
following notation for set operations in R
each conversion maps a digit-set \Sigma S ' \Sigma into the
digit-set employing some intermediate digit-
set \Sigma and carry-set C 6= f0g. The set \Sigma must be a complete
residue set, but as the target digit-set \Sigma T is redundant
there may be several choices for \Sigma where \Sigma ae \Sigma T .
Observation 4.4 Redundancy and completeness of the target
digit-set are necessary but not sufficient conditions for
the multi-level parallel, constant time conversion or addi-
tion. The target digit-set has to be of the form \Sigma
where the set \Sigma must contain a complete residue system
modulo fi and C 6= f0g.
This condition is trivially satisfied for the ordinary digit-
sets consisting of a contiguous set of integers, e.g., for
base 2 with \Sigma it is possible to choose either
time conversion is
possible from any subset of f\Gamma1; 0g+2f0;
respectively 1g. But the
redundant and complete digit-set f\Gamma1; 0; 1; 4g for base 3
does not allow such a splitting \Sigma
time conversion into it is not possible, e.g., the number
into itself or various digit strings
where 11 is substituted by 04. However, adding an extra
digit 5 makes constant time conversion possible from
any subset of the digit-set f\Gamma3; \Gamma2; 0;
3f\Gamma1; 0g, since f\Gamma1; 0;
agation. Observe that there is also an alternative splitting
of the digit-set f\Gamma1; 0; allowing
parallel conversion from subsets of f\Gamma1; 0; 2; 3; 4; 7g.
Obviously, in the ring Zthere are no particular reasons
not to use contiguous digit-sets of the form
rs easily seen to satisfy the
above conditions. Constant-time conversions into such redundant
digit-sets were shown possible in [10]. But the
conditions on the structure of digit-sets as expressed by
Observation 4.4 are of interest when investigating digit-
sets from more general rings, e.g., in the complex domain,
and in particular when the ring does not have a lattice
structure.
V. Representing Complex Numbers
Using the formal framework developed in Sections II and
III, we shall investigate possible radix representations of
the complex numbers. We will attempt to do this using
two different approaches, the first being by examining the
Gaussian integers, the second by examining a similar ring
that we will refer to as Eisenstein integers.
A. Representing the Gaussian Integers
The Gaussian integers is a lattice on the field of complex
numbers, defined as the set:
\Gamma1. It is a natural extension of the ring of
integers, and as such the number systems for the two rings
exhibit many common characteristics. Designing a number
system for complex numbers, involves facing a larger number
of decisions, than when designing a number system for
the integers, e.g., there is the possibility of choosing complex
or integer valued digit-sets and a complex or integer
base.
Initially we will examine a more general ring of algebraic
integers defined as:
with d 2 Z;d 1. Note that if ring is
the ring of Gaussian integers. Furthermore observe that
the function N : Z[
defined as N (a
is a norm on Z[
\Gammad], and that the set
is a complete residue system modulo fi,
\Gammad and d 2 if the cardinality of C satisfies
Lemma 5.1 For the ring
\Gammad], d 2 Z;d 1, if
\Gammad]=hfii
then \Sigma is redundant base
\Gammadfl.
Proof: The number of distinct residue classes is given by
\Gammadfl]
as can be derived
from classical results in algebraic number theory [18, pages
62 and 121]. Let and Qn be the
set of radix polynomials in P I [fi; \Sigma] with degree at most n.
The polynomials in Qn represent elements of Z[
\Gammad] that
have norms bounded by:
Since the norm of the base is N
the number
of elements that can be represented by radix polynomials
of degree at most n, is bounded by:
Thus by Lemma 2.7 the lemma is proven. 2
B. Complex Number Systems with an Integer Radix
The straightforward approach for representing the elements
of Z[i], is to choose an integer base and a complex
digit-set, e.g., fi 2 Zand
g. It is evident that if \Sigma r and \Sigma i are complete
digit-sets base fi for the integers, then
complete base fi for the Gaussian integers, furthermore if
\Sigma r or \Sigma i is a redundant digit-set base fi for the integers,
then \Sigma is redundant base fi for the Gaussian integers.
Example: The following base, digit-set combinations are
examples of the large number of possible number systems
that can be constructed combining two integer digit-sets.
1. Binary. 1g. Non-redundant
and non-complete.
2. Borrow-save. 1g.
Redundant and complete.
3. Carry-Borrow-save. 1g.
Redundant and non-complete but semi-complete. 2
These number systems are constructed such that the real
and imaginary parts of a number are written using respectively
the real and imaginary parts of the digit-set. This
has some obvious advantages since arithmetic can be based
on the conventional integer arithmetic algorithms [19]. Furthermore
converting from a conventional representation to
a complex representation and computing the complex conjugate
are fairly simple tasks.
C. Imaginary Radix Number systems
Instead of using an integer radix, we could alternatively
use a purely imaginary radix.
Lemma 5.2 The digit-set
complete base
\Gammad, d 2 Z;d ? 1 for the ring Z[
\Gammad]
if and only if \Sigma is complete base \Gammad for the integers.
If we allow a single extra digit immediately to the right
of the radix point in the definition of completeness, it is
in some cases possible to define number systems that are
not only complete for the ring Z[
\Gammad] but also for the
Gaussian integers.
Define the set of radix polynomials with one fractional
digit as
Definition 5.3 A digit-set \Sigma is fraction-complete base fi
for the ring R if and only if
8r
Lemma 5.4 If
\Gammad, d 2 Z;d ? 1 and
base fi for
\Gammad] then \Sigma is fraction-complete base fi for the Gaussian
integers if and only if
d is of the form
for some k 2 Z; jkj ? 1).
Proof: Assume that contains
a complete residue system modulo k 2 , there exists a
set
such that \Sigma 0 ae \Sigma and the set \Sigma is a complete
residue system modulo k.
Thus for any z 2 Zthere exists b 2 Z;d
such that z
Since \Sigma is complete base fi for Z[
\Gammak 2 ], for any a+
there exists a radix polynomial
that
Forming the polynomial P
value
conclude that \Sigma is fraction complete.
Assume
d is an irrational number. In
order to represent
\Gammad=
d we will implicitly have to
d using an extended radix polynomial with a
finite number of digits from P[\Gammad; \Sigma], this is obviously not
possible since 1=
d is an irrational number. 2
Classifying a number of different digit-sets, from Lemmas
3.4, 5.2 and 5.4 we derive the properties displayed in
Table
1.
Example: Imaginary Radix, Complex Number Representations
1. Binary.
\Gamma2].
Standard digit-set, non-redundant and complete.
2. Quarter-Imaginary.
3g.
Standard digit-set, non-redundant, complete and frac-
tion-complete (this number system was proposed by
Knuth in [9]).
3. Borrow-Save (Quarter-Imaginary).
3g. Maximally redundant digit-set, complete,
fraction-complete (addition in Redundant number systems
of this form has been examined in [13]). 2
D. Complex Radix Number Systems
As suggested in [16], we could use a fully complex base,
e.g., We will only here examine
number systems for which the digit-set contains exclusively
integer digits. Observe that the set
complete residue system modulo
and
Lemma 5.5 The digit-set
s A 2 and
for the Gaussian integers.
Proof: The proof is a slight generalization of the one
given in [8]. 2
Lemma 5.6 The symmetric digit-set
and for the Gaussian integers.
Proof: If we have that \Sigma
is complete base fi, thus for any a + ib 2 Z[i] there exists
a radix polynomial
such that
If conversely by forming the polynomial:
with value kP we conclude that \Sigma is also
complete base
The case analogous to the above. 2
The set of elements that can be represented using the
standard digit set \Sigma is a somewhat non-symmetric
set, whereas the set of elements that can be
represented using a symmetric redundant digit-set has a
higher degree of symmetry (see Figure 2).
(a) (b) (c)
Fig. 2. The Gaussian integers representable with radix polynomials
of degree 7, using the number systems (a):
in (c) the elements representable
using polynomials of degree 5 with the redundant number system
1g. The elements that lie within the circles all
have norms less than
1), thus from
Theorem 3.2 we immediately conclude that the number systems (a)
and (c) are complete.
Example: The following base, digit-set combinations are
examples of complete number systems.
1. Binary. 1g. Standard digit-set,
non-redundant and complete.
2. Borrow-save. 1g. Minimally
and maximally redundant digit-set, complete.
Digit-set r, s j\Sigmaj Redundant j
Extended
Balanced
Min. Red. \Gammad r 0 s d d
Max. Red.
Table
1. Classification of digit-sets for Z[ p \Gammad].
Representing Eisenstein Integers
This section is devoted to the study of the ring
2 (i.e., the third
complex root of unity). This ring is a lattice on the complex
field (
Figure
3), it is similar to the Gaussian integers, but
as will be shown it exhibits some interesting properties.
r
1+r
Fig. 3. The ring Z[ae] and the digit-set
Note that both Z[i] and Z[ae] are algebraic integers, since
i and ae are roots of the polynomials i 2 respectively
rational coefficients. Furthermore,
observe that the set
1g is a complete residue system modulo fi with fi 2 Zand
jfij ? 1.
Lemma 5.7 For the ring Z[ae], if j\Sigmaj ? jZ[ae]=hfiij then \Sigma
is redundant base fi 2 Z, jfij ? 1.
Proof: Analogous to the one given for Lemma 5.1. 2
From Lemma 5.7, we conclude that, if a digit-set has more
than digits, then the digit-set is redundant. It is easy
to see that the ring
is isomorphic to the ring Z[ae]. Thus we conclude
that since Z[ae] and W are essentially equivalent, we may
instead study W as a valid representation of the elements in
Z[ae]. For convenience we introduce the following notation:
Theorem 5.8 The digit-set
base Zfor the ring Z[ae] if fi ! \Gamma1.
Proof: Take any r = a 0 +a 1 ae 2 Z[ae]. In Section III it was
shown that the digit-set \Sigma
for the integers. Thus there exist polynomials P
I [fi; \Sigma z ] representing the (possibly negative) integers a 0
respectively a 1 . Forming the polynomial P
I [fi; \Sigma] with value kP we conclude that \Sigma is
As shown in [4] it can be beneficial to represent the ring
Z[ae] using the redundant set
N[f0gg. It was also shown that addition in H
is possible, using the following theorem, here reformulated
in our terminology:
Theorem 5.9 If a satisfies 2
a is
complete base fi 2 Zwith fi ? 1 for the set W, and
The proof in [4] is based on geometrical considerations of
set coverings. By similar arguments it is possible to show
the following additional set inclusion under the same conditions
on a and fi:
and complete for radix fi we expect to be able to perform
addition and conversion into this digit-set in parallel and
constant time. In particular the binary digit-set H
is complete and redundant base
2. The digits of this number system were depicted in
Figure
3, and addition in this digit-set was shown possible
in [4] by grouping three base 2 digits into digits of base 8.
As an illustration of the use of Observation 4.4 we shall
extend that result on addition in radix 2. The analysis
in [4] was not sufficient to assure addition for fi 6, but
using the set inclusion (21) and Observation 4.4 we shall
now show that conversion into, and addition in, H fi\Gamma1 is
possible for
Observation 4.4 cannot be applied here for
by Theorem 5.9 then the only possible value for a is 1, and
not a proper subset of H fi\Gamma1 .
For choosing
4.4 can be applied, and constant
time conversion from H 1
possible. But neither of the inclusions (20) and (21) are
really applicable here, in fact the largest set H t satisfying
However, for fi 4 it is possible to chose a 2. By
Observation 4.4 it is possible to convert from
base digit-set ring compl. redund. closed fal bpd eff
\Gamma2 f0; 1g Z[
\Gamma2] true false true
\Gamma2 f\Gamma1; 0; 1g Z[
\Gamma2] true true true
Table
2. Properties of low-radix systems for representing complex numbers
into
using also (21). But since a another conversion
can be converted into H fi\Gamma2
. Thus we have proved the following:
Lemma 5.10 For fi 4 conversion from the digit-set
into the digit-
set \Sigma can be performed in
constant time, with carries propagating at most two posi-
tions. In particular, addition in the set W can be realized
in constant time using the digit-set \Sigma T for
conversion into radix 4.
Since the ary numbers written over Z[i], respectively
W, are not identical, conversion between elements from the
two extended rings cannot be exact. This is due to the fact
that since ae = 1+i
2 and
2 is an irrational number, there
does not exist a finite extended radix polynomial in P[fi; \Sigma]
with fi 2 Zand \Sigma ae Z, that represents
.
VI. Conclusion
A summary of some properties of various low radix number
systems for representing complex numbers has been
compiled in the form of Table 2.
The last two columns of the table deal with the efficiency
of representation, bpd is the number of bits needed
to encode the digits, and eff is a measure of efficiency of
the combined representation and digit encoding, defined as
follows:
Thus eff is the asymptotic ratio between the number of bits
needed to encode the values representable by radix polyno-
mials, and the number of bits needed to represent the digits
of the polynomial, using a minimal binary encoding of the
digits. Note that the actual encoding of digits can only
influence the implementation logic of the primitives, and
thus only change the area and timing by constant factors.
In order to evaluate the relative merits of these number
systems, we will now turn our attention to arithmetic operations
performed in these systems. As for storage, and if
digit serial arithmetic is an application, an encoding using
few bits per digit (bpd ) will be desirable, since this will minimize
module size and inter module wiring. If fast addition
is needed the system should be redundant, and need as few
levels of logic as possible (fal is the number of full-adder
levels needed for parallel, constant time addition). Furthermore
if a digit-set is closed under multiplication (i.e., the
product of two arbitrary digits is again a digit), performing
division and multiplication on radix polynomials written
over the digit-set is simpler than if the digit-set is not
closed under multiplication. In the latter case, when forming
the product of a single digit and a number, the individual
digit by digit products will introduce a carry effect into
the neighboring positions. As an example the binary integer
system, e.g., closed group
under multiplication. For the Gaussian integers, the number
system does not share
this property, since for instance (1
thus \Sigma is not closed under multiplication. But the systems
seems very
promising.
However performing digit by register multiplication, as
required in various multiplication and division algorithms,
might also be relatively easy if the partial products can
be generated by a shifting process possibly combined with
negation. This is the reason why modified Booth recoding,
e.g., conversion from the non redundant system (4; f0;
into the redundant system (4; f\Gamma2; is popular in
multiplier design. It can be shown that with proper encod-
ing, partial products can be formed trivially in the system
using a simple shifting rule.
--R
"Automatic Maps in Exotic Number Systems"
"Real/Complex Reconfigurable Arithmetic Using Redundant Complex Number Systems"
"Signed-Digit Number Representations for Fast Parallel Arithmetic"
"New Representation of Complex Numbers and Vectors"
"Radix Representations of Quadratic Fields"
"Number Systems in Imaginary Quadratic Fields"
"Canonical Number Systems in Imaginary Quadratic Fields"
"Canonical Number Systems for Complex Integers"
"An Imaginary Number System"
"Digit-Set Conversions: Generalizations and Applications"
"Radix Arithmetic: Digital Algorithms for Computer Architecture"
"Basic Digit Sets for Radix Repre- sentation"
"Borrow-Save Adders for Real and Complex Number Systems"
"Generalized Base and Digit-Set Conversion"
"On Radix Representation of Rings"
"A 'Binary' System for Complex Num- bers"
"On the Implementation of Arithmetic Support Functions for Generalized Signed Digit Number Systems"
"Algebraic Number The- ory"
"A Complex-Number Multiplier using Radix-4 Digits"
--TR
--CTR
Dhananjeuy S. Phatak , Tom Goff , Israe Koren, Constant-Time Addition and Simultaneous Format Conversion Based on Redundant Binary Representations, IEEE Transactions on Computers, v.50 n.11, p.1267-1278, November 2001
Marc Daumas , David W. Matula, Further Reducing the Redundancy of a Notation Over a Minimally Redundant Digit Set, Journal of VLSI Signal Processing Systems, v.33 n.1, p.7-12, January-February | number system conversion;radix representation of rings;redundancy;computer arithmetic;integer and computer radix number systems |
323812 | Active Management of Data Caches by Exploiting Reuse Information. | AbstractAs microprocessor speeds continue to outpace memory subsystems in speed, minimizing average data access time grows in importance. Multilateral caches afford an opportunity to reduce the average data access time by active management of block allocation and replacement decisions. We evaluate and compare the performance of traditional caches and multilateral caches with three active block allocation schemes: MAT, NTS, and PCS. We also compare the performance of NTS and PCS to multilateral caches with a near-optimal, but nonimplementable policy, pseudo-opt, that employs future knowledge to achieve both active allocation and active replacement. NTS and PCS are evaluated relative to pseudo-opt with respect to miss ratio, accuracy of predicting reference locality, actual usage accuracy, and tour lengths of blocks in the cache. Results show the multilateral schemes do outperform traditional cache management schemes, but fall short of pseudo-opt; increasing their prediction accuracy and incorporating active replacement decisions would allow them to more closely approach pseudo-opt performance. | Introduction
Minimizing the average data access time is of paramount importance when designing high-performance
machines. Unfortunately, access time to o-chip memory (measured in processor
clock cycles) has increased dramatically as the disparity between main memory access
Work done while at the University of Michigan.
times and processor clock speeds widen. The eect of this disparity is further compounded as
multiple-issue processors continue to increase the number of instructions that can be issued
each cycle. There are many approaches to minimizing the average data access time. The
most common solution is to incorporate multiple levels of cache memory on-chip, but still
allocate and replace their blocks in a manner that is essentially the same as when caches
rst appeared three decades ago.
Recent studies [23][16][10][8][18] have explored better ways to congure and manage a
resource as precious as the rst-level (L1) cache. Active cache management (active block
allocation and replacement) can improve the performance of a given size cache by maintaining
more useful blocks in the cache; active management retains reuse information from
previous tours of blocks and uses it to manage block allocations and/or replacements in
subsequent tours 1 . In order to partition the allocation of blocks within a cache structure,
several proposed schemes [16][10][8][18] incorporate an additional data store within the L1
cache structure and intelligently manage the state of the resulting multi-lateral 2 [17] cache
by exploiting reuse pattern information. These structures perform active block allocation,
but still relegate block replacement decisions to simple hardware replacement algorithms.
While processor designers typically design for the largest possible caches that can still t
on the ever growing processor die, multi-lateral designs have been shown to perform as well
as or better than larger, single structure caches while requiring less die area [17][22]. For a
given die size, reducing the die requirements to attain a given rate of data supply can free
that space for other resources { for example, to dedicate more space to branch prediction,
data forwarding, instruction supply, and the instruction reorder buer.
In this paper we evaluate the performance of three proposed cache schemes that perform
active block allocation and compare their performance to one another and to traditional
single-structure caches. We implement the MAT [10], NTS [16], and PCS [18] schemes using
1 A tour of a cache block is the time interval between an allocation of the block in cache and its subsequent
eviction. A given memory block can have many tours through the cache.
We use the term multi-lateral to refer to a level of cache that contains two or more data stores that have
disjoint contents and operate in parallel.
hardware that is as similar as possible in order to do a fair comparison of the block allocation
algorithms that each uses. Our experiments show that making placement decisions based
on eective address-based block reuse, as in the NTS scheme, outperforms the macroblock-
based and PC-based approaches of MAT and PCS, respectively. All three schemes perform
comparably to larger direct-mapped caches and better than associative caches of similar size.
We then examine the performance of optimal and near-optimal multi-lateral caches to
determine the performance potential of multi-lateral schemes. Optimal and near-optimal
schemes excel in block replacement decisions, while their block allocation decisions are a
direct consequence of the replacement decision. We compare the performance of two implemented
multi-lateral schemes to the near-optimal scheme to determine the reason for their
performance. For the implemented schemes to perform better, improvements need to be
made in their block allocation and replacement choices.
The rest of this paper is organized as follows. Section 2 discusses techniques that aid in
reducing the average data access time. Section 3 discusses active cache management in detail
and presents past eorts to perform active block allocation. Section 4 presents our simulation
methodology and Section 5 evaluates the performance of the three multi-lateral schemes. In
Section 6, we present the performance of optimal and near-optimal multi-lateral schemes,
which perform (near-)optimal replacement of blocks, and compare the decisions made in the
near-optimal scheme to those made in two of the implementable schemes. Conclusions are
given in Section 7.
Background
There are many techniques for reducing or tolerating the average memory access time.
Prominent among these are: 1) store buers, used to delay writes until bus idle cycles in order
to reduce bus contention; 2) non-blocking caches, which overlap multiple load misses while
fullling other requests that hit in the cache [19][12]; hardware and software prefetching
methodologies that attempt to preload data from memory to the cache before it is needed
[5][1][4][6][15]; and caching [11], which improves the performance of direct mapped
caches through the addition of a small, fully associative cache between the L1 cache and
the next level in the hierarchy. While these schemes do contribute to reducing average data
access time, this paper approaches the problem from the premise that the average data access
time can be reduced by exploiting reuse pattern information to actively manage the state of
the L1 cache. This approach can be used with these other techniques to reduce the average
data access time further.
3 Active Cache Management
Active cache management can be used to improve the performance of a given size cache
structure by controlling the data placement and management in the cache to keep the active
working set resident, even in the presence of transient references. Active management of
caches consists of two parts: allocation of blocks within the cache structure on a demand
miss 3 and replacement of blocks currently resident in the cache structure 4 . Block allocation
in today's caches is passive and straightforward: blocks that are demand fetched are placed
into their corresponding set within the cache structure. However, this decision does not take
into consideration the block's usefulness or usage characteristics. Examining the past history
of a given block is one method of aiding in the future allocations of the block. Decisions
can range from simply not caching the target block (bypassing) to placing the block in a
particular portion of the cache structure, with the hope of making best use of the target
cache block and the blocks that remain in the cache.
Simple block replacement policies are used to choose a block for eviction in today's caches,
and this choice is often suboptimal. For a multi-lateral cache, the allocation and replacement
problems are coupled. In particular, the blocks that are available for replacement are a direct
consequence of the allocation of a demand-missed block.
Recently, several approaches to more ecient management of the L1 data cache via block
3 Allocation decisions for blocks not loaded on a demand miss, e.g. prefetched blocks in a streaming
buer scheme as proposed in [11], and bypassing schemes are not considered here. However, schemes that
make proper bypass decisions and allocation decisions for prefetched data can further improve upon the
performance of the schemes evaluated herein.
4 We consider only write-allocate caches in this paper. Write no-allocate caches follow a subset of these
rules, where writes are not subject to these allocation decisions.
allocation decisions have emerged in the literature: NTS [16], MAT [10], Dual/Selective [8],
and PCS [18]. However, none of these approaches makes sophisticated block replacement
decisions, and instead relegates these decisions to their respective cache substructures.
3.1 The NTS Model
The NTS (nontemporal streaming) cache [16] is a location-sensitive cache management
scheme that uses hardware to dynamically partition cache blocks into two groups, temporal
(T) and nontemporal (NT), based on their reuse behavior during a past tour. A block is
considered NT if during a tour in L1, no word in that block is reused. Blocks classied as
NT are subsequently allocated in a separate small cache placed in parallel with the main L1
cache; all other blocks (those marked T and those for which no prior information is available)
are handled in the \main" cache. Data placement is decided by using reuse information
that is associated with the eective address of the requested block. The eectiveness of
NTS in reducing the miss ratio, memory trac, and the average access penalty has been
demonstrated primarily with mostly numeric programs.
3.2 The MAT Model
The MAT (memory address table) cache [10] is another scheme based on the use of eective
addresses; however, it dynamically partitions cache data blocks into two groups based on their
frequency of reuse. Blocks become tagged as either Frequently or Infrequently Accessed. A
memory address table is used to keep track of reuse information. The granularity for grouping
is a macroblock, dened as a contiguous group of memory blocks considered to have the same
usage pattern characteristics. Blocks that are determined to be Infrequently Accessed are
allocated in a separate small cache. This scheme has shown signicant speedups over generic
caches due to improved miss ratios, reduced bus trac, and a resulting reduction in the
average data access latency.
3.3 The Dual Cache/Selective Cache Model
The Dual Cache [8] has two independent cache structures, a spatial cache and a temporal
cache. Cache blocks are dynamically tagged as either temporal or spatial. A locality prediction
table is used to maintain information about the most recently executed load/store
instruction. The blocks that are tagged neither spatial nor temporal do not nd a place in the
cache and bypass the cache. This method is more useful in handling vector operations which
have random access patterns or very large strides and introduce self interference. However,
its two caches do not necessarily maintain disjoint contents. The temporal cache is designed
to have a smaller line size compared to the spatial cache. If the required data is found in
both caches it is read from the temporal cache or written into both in parallel. In order to
overcome this replication and coherence problem, the authors proposed a simplied version
of the Dual Cache, called the Selective Cache. The Selective Cache has only one memory
unit like a conventional cache, but incurs more hardware cost due to its locality prediction
table, as in the Dual Cache. Only data exhibiting spatial locality or temporal locality that
is not self-interfering is cached. For most of the benchmarks in their study, this scheme was
shown to perform better than a conventional cache of the same size. The Selective Cache
itself is an improvement of the Bypass Cache [7], which relies on compiler hints to decide
whether a block is to be cached or bypassed.
3.4 The PCS Model
The PCS (program counter selective) cache [18] is a multi-lateral cache design that evolved
from the CNA cache scheme [23]. The PCS cache decides on the data placement of a block
based on the program counter value of the memory instruction causing the current miss,
rather than on the eective address of the block as in the NTS cache. Thus, in PCS, the tour
performance of blocks recently brought to cache by this memory accessing instruction, rather
than the recent tour performance of the current block being brought to the cache, is used
to determine the placement of this block. The performance of PCS is best for programs in
which the reference behavior of a given datum is well-correlated with the memory referencing
instruction that brings the block to cache.
3.5 Other Multi-Lateral Cache Schemes
Several other cache schemes can be considered multi-lateral caches, such as the Assist
cache [13] used in the HP PA-7200, and the Victim cache [11]. However, neither of these
schemes actively manage their cache structures using reuse information obtained dynamically
during program execution. The Assist cache uses a small data store as a staging area for
data entering the L1 and potentially prevents data from entering the L1 when indicated by a
compiler hint. The Victim cache excels in performance when a majority of the cache misses
are con
ict misses which result from the limited associativity of the main cache; the buer in
the Victim scheme serves to dynamically increase the associativity of a few hot spots in the
(typically direct-mapped) main cache. While both schemes have been shown to perform well
[22], they each require a costly data path between the two data stores to perform the data
migrations they require. Without the inter-cache data path present, these schemes cannot
operate, as they use no previous tour information for actively deciding which data store to
allocate a block to. While the Victim cache has been shown to perform well relative to the
above actively-managed schemes when the main cache is direct-mapped [18], in this paper
we evaluate only multi-lateral schemes that use dynamic information to allocate data among
two data stores with no direct data path between them.
4 Simulation Methodology
A simulator and a set of benchmark programs were used to compare the performance of the
multi-lateral cache strategies. This section describes the dynamic superscalar processor and
memory simulators used to evaluate these cache memory structures, the system conguration
used, and the methods, metrics, and benchmarks that constitute the simulation environment.
4.1 Processor and Memory Subsystem
The processor modeled in this study is a modication of the sim-outorder simulator in the
SimpleScalar [3] toolset. The simulator performs out-of-order (OOO) issue, execution, and
completion on a derivative of the MIPS instruction set architecture. A schematic diagram
of the targeted processor and memory subsystem is shown in Figure 1, with a summary of
the chosen parameters and architectural assumptions.
The memory subsystem, modeled by the mlcache tool discussed below, consists of a separate
instruction and data cache and a perfect secondary data cache or main memory. The
instruction cache is perfect and responds in a single cycle. The data cache is modeled as
Fetch Mechanism fetches up to 16 instructions in program
order per cycle
Branch Predictor perfect branch prediction
Issue Mechanism out-of-order issue of up to 16 operations
per cycle, 256 entry instruction re-order
buffer (RUU), 128 entry load/store queue
loads may execute when all prior
store addresses are known
Functional Units
MULT/DIV, 8 FP MULT/DIV, 8 L/S units
F. U. Latency
INT ALU:1/1, INT MULT:3/1, INT
FP DIV:12/12, L/S:1/1
Instruction Cache perfect cache, 1 cycle latency
Data Cache Multi-Lateral L1
write-allocate,
latency, latency, non-block-
ing, 8 memory ports
Processor
I
Cache
Data
Cache
Secondary Cache /
Main Memory
Figure
1: Processor and memory subsystem characteristics.
a conventional data cache split into two subcaches disjoint contents) and
placed in parallel within L1. In this multi-lateral cache, each subcache is unique with its
own conguration: size, set-associativity, replacement policy, etc. The A and B caches are
probed in parallel, and are equidistant from the CPU. Both A and B are non-blocking with
32-byte lines and single cycle access times. A standard (single-structured) data cache model
would simply congure cache A to the desired parameters and set the B cache size to zero.
The L2 cache access latency is bus between L1 and L2 has
bytes/cycle data bandwidth. L1 to L2 access is fully pipelined; a miss request can be sent
on the L1-L2 bus every cycle for up to 100 pending requests. The L2 cache is modeled as a
perfect cache in order to focus this study on the management strategies for the L1.
4.2 The mlcache Simulation Tool
mlcache [22] is an event-driven, timing-sensitive cache simulator based on the Latency
Eects (LE) cache timing model, discussed in depth in [21]. It can be easily congured to
model various single and multi-lateral cache structures by using its library of cache state and
data movement routines. For interactions not modeled in the library routines, users can write
Support Routine Description
check_for_cache_hit() check to see if an accessed block is present in the cache
update() place an accessed block into the cache
move_over() move an accessed block from one cache to another
do_swap() move an accessed block from cache1 to cache2 and move the
evicted block to cache1
do_swap_with_inclusion() place an accessed block into both cache1 and cache2 and move the
evicted block from cache2 to cache1
do_save_evicted() move the block evicted from cache1 to cache2
find_and_remove() remove a block from a cache
check_for_reuse() determine if a block exhibits temporal behavior (word reuse)
Table
1: The basic support routines provided with the mlcache simulator. The user can call these routines
from a conguration le to control the cache state and interactions.
their own management routines and call them from the simulator. The tool can be easily
joined to a wide range of event-driven processor simulators. As described above, our processor
model in this work is based on the SimpleScalar toolset. Together, a combined processor-and-
cache simulator, such as SimpleScalar+mlcache, can provide detailed evaluations of multiple
cache designs running target workloads on proposed processor/cache congurations.
mlcache is easily retargetable due to the provision of a library of routines that a user can
choose from to perform the actions that should take place in the cache in each situation. The
routines are accessed from a single C le, named cong.c. The user simply modies cong.c to
describe all of the desired interactions between the caches, processor, and memory. The user
also controls when the actions occur via the delayed update mechanism built into the cache
simulator. Delayed update is used to allow a behavioral cache simulator, such as DineroIII
[9], to account for latency for latency- or latency-adding eects. The use of delayed update
causes the eects of an access, i.e. an access' placement into the cache, the removal of the
replaced block, etc. to occur only after the calculated latency of the access has passed. Table
1 shows the routines provided and a brief description of each. If more interactions are needed
than these, additional library routines can be added. However, from these brief examples it
is easy to see that this modular, library-based simulator already allows a signicant range
of cache congurations to be examined.
We evaluate the performance of three of the multi-lateral schemes, MAT, NTS, and PCS,
and compare their performance to three traditional, single-structure caches: a 16K direct-mapped
cache, a 16K 2-way associative cache, and a 32K direct-mapped cache. The cong-
urations of the evaluated caches are shown in Table 2.
4.3 Simulated Cache Schemes
Performing a realistic comparison among the program counter and eective address schemes
requires detailed memory simulators for the MAT, NTS, and PCS cache management schemes
described above. We chose to omit the Selective Cache, as its block allocation decisions are
similar to those made by PCS, while PCS' hardware implementation is simpler. To ensure
a fair comparison and evaluation, we placed all the management schemes on the same
platform within a uniform multi-lateral environment, using the mlcache tool. Each of the
congurations includes a 32-entry structure that stores reuse information, as described for
each scheme.
The following subsections describe our implementations of the MAT, NTS, and PCS cache
management schemes. The main cache is labeled cache A, the auxiliary buer is labeled cache
B, and both caches are placed equidistant from the CPU. The three schemes are congured
to be as similar as possible to one another so that their performance dierences can be
attributed primarily to dierences among the block allocation decisions that they make.
4.3.1 Structure and Operation of the NTS Cache
The NTS cache, using the model in [18], which was adapted from the scheme proposed
in [16], actively allocates data within L1 based on each block's usage characteristics. In
particular, blocks known to have exhibited only nontemporal reuse are placed in B, while
the others (presumably temporal blocks) are sent to A. This is done in the hope of allowing
temporal data to remain in the larger A cache for longer periods of time, while shorter
lifetime nontemporal data can for a short while be quickly accessed from the small, but more
associative B cache.
On a memory access, if the desired data is found in either A or B, the data is returned
to the processor with 0 added latency, and the block remains in the cache in which it is
Single MAT NTS PCS
Cache A A B A B A B
Size 16K/16K/32K 16K 2K 16K 2K 16K 2K
Associativity 1/2/1 1 full 1 full 1 full
Replacement policy -/LRU/- LRU - LRU - LRU
latency to next level
Table
2: Characteristics of the four congurations studied. Times/latencies are in cycles.
found. On a miss, the block entering L1 is checked to see if it has an entry in the Detection
Unit (DU). The DU contains temporality information about blocks recently evicted from
L1 and is managed as follows. Each entry of the DU describes one block and contains a
block address (for matching) and a T/NT bit (to indicate the temporality of its most recent
tour). On eviction, a block is checked to see if it exhibited temporal reuse (i.e. if some word
in the block was referenced at least twice) during this just-completed tour in the L1 cache
structure, and its T/NT bit is set accordingly in the DU. If no corresponding DU entry is
found for the evicted block, a new DU entry is created and made MRU in the DU structure.
On a miss, if the new (missed) block address matches an entry in the DU, the T/NT bit
of that entry is checked and the block is placed in A if it indicates temporal, and B if not.
The DU entry is then made MRU in the DU so that it has a better chance of remaining in
the DU for future allocation predictions. Thus, each creation or access of an entry in the
DU is treated as a \use" and the DU (with 32 entries, in these simulations) is maintained
with LRU replacement. If no matching DU entry is found, the missed block is assumed to
be temporal and placed in A.
4.3.2 Structure and Operation of the PCS cache
The PCS cache [18] decides on data placement based on the program counter value of
the memory instruction causing the current miss, rather than on the eective address of the
block as in the NTS cache. Thus, the performance of blocks missed by individual memory
accessing instructions, rather than individual data blocks, determines the placement of data
in the PCS scheme.
The PCS cache structure modeled is similar to the NTS cache. The DU is indexed by the
memory accessing instruction's program counter, but is updated in a manner similar to the
NTS scheme. When a block is replaced, the temporality bit of the entry associated with the
PC of the memory accessing instruction that brought the block to cache at the beginning of
this tour is set according to the block's reuse characteristics during this just-completed tour
of the cache. If no DU entry matches that PC value, one is created and replaces the LRU
entry in the DU. If that instruction subsequently misses, the loaded block is placed in B if
the instruction's PC hits in the DU and the prediction bit indicates NT; otherwise the block
is placed in A. If the instruction misses in the DU, the data is placed in A.
4.3.3 Structure and Operation of the MAT cache
The MAT cache [10] structure has a Memory Address Table (MAT) for keeping track of
reuse information and for guiding data block placement into the A or B cache of the L1
structure. In this implementation, the MAT is a 32-entry fully associative structure (like the
DU in NTS and PCS). Note, however, that the original implementation of MAT (reported
in [10]) used a 1K entry direct mapped table. Each MAT entry consists of a macroblock
address and an n-bit saturating counter. An 8-bit counter and a 1KB macroblock size is
used here, as in the original study.
On a memory access, caches A and B are checked in parallel for the requested data.
At the same time, the counter in the corresponding MAT entry for the accessed block is
incremented; if there is no corresponding entry, one is created, its counter is set to 0, and the
LRU entry in the MAT is replaced. This counter serves as an indicator of the \usefulness"
of a given macroblock, and is used to decide whether a block in that macroblock should be
placed in the A or B cache during its next tour.
On a cache miss, the macroblock address of the incoming block is used as an index into
the MAT. If an entry exists, its counter value is incremented and compared against the
decremented counter of the macroblock corresponding to the block that would be replaced
if the incoming block were to be placed in the A cache. The counter is decremented to
ensure that that data can eventually be replaced; the counter of the resident data will
continue to decrease if it is not reaccessed often enough and it continues to con
ict with
more recently accessed blocks. If the counter value of the incoming block is higher than
that of the con
icting block currently in cache A, the incoming block replaces this block in
the A cache. This situation indicates that the incoming block is in a macroblock that has
shown more \usefulness" in earlier tours than the macroblock in which the con
icting block
resides, and should thus be given a higher priority for residing in the larger main cache. If
the counter value of the incoming block is less than that of the current resident block, the
incoming block is placed in the smaller B cache.
Finally, if no entry corresponds to the incoming block, the block is placed in the A cache
by default and a new entry is created for it in the MAT, with its counter initialized to zero.
If no entry corresponds to the con
icting block currently in cache A, its counter value is
assumed to be 0, permitting the new block to replace it easily. When no entry is found in
the MAT for a resident block in cache A, another macroblock that maps to the same set
in the MAT must have been accessed more recently, and the current block is therefore less
likely to be used in the near future.
As with the NTS and PCS schemes, there is no direct data path between the A and B
caches. Unlike those schemes, however, the MAT structure is updated for every access to
the cache instead of only on replacements.
4.4 Benchmarks
Table
3 shows the 5 integer and 3
oating point programs from the SPEC95 benchmark
suite used in this study. These programs have varying memory requirements, and the simulations
were done using the training data sets. Each program was run to completion (with
the exception of perl, which was terminated after the rst 1.5 billion instructions).
4.5 The Relative Cache Eects Ratio
An important metric for evaluating any cache management scheme is the cache hit/miss
ratio. However, in OOO processors with multi-ported non-blocking caches, eective memory
latencies (as seen by the processor) vary according to the number of outstanding miss
requests. Since the main focus of this study is to evaluate the eectiveness of the L1 cache
Instruction
Count
Memory References
(millions)
Perfect Memory
Performance
Program (millions) Loads Stores Cycle Count
(millions) IPC
SPEC95 Integer Benchmarks
Compress 35.68 7.37 5.99 5.35 6.6644
Gcc 263.85 61.15 36.24 43.50 6.0648
Go 548.13 115.79 41.40 91.33 6.0049
Perl 1,500.00 396.82 269.83 232.89 6.4408
Floating Point Benchmarks
Hydro2d 974.50 196.11 60.90 127.63 7.6353
Su2cor 1,054.09 262.20 84.74 152.34 6.9192
Table
3: The eight benchmarks and their memory characteristics.
structure using special management techniques, the Relative Cache Eects Ratio (RCR) was
developed [17]. The RCR for a given processor running cache conguration X relative to
cache conguration base, is given by:
CycleCount base CycleCount P erfectCache
(1)
where CycleCount P erfectCache is the total number of cycles needed to execute the same
program on the same processor with a perfect cache conguration. RCR, a normalized
metric between the base cache and the perfect cache, is 1 for the base cache conguration
and 0 for the perfect cache conguration. Cache congurations that perform better than
the base have RCR between 0 and 1, with lower RCR being better. A cache conguration
that performs worse than the base has RCR > 1. RCR gives an indication of the nite
cache penalty reduction obtained when using a given cache conguration. RCR mirrors the
performance indicated by speedup numbers, but isolates the cache penalty cycles from total
run time and rescales them as a fraction of the penalty of a traditional (base) cache. It
thus gives a direct indication of how well the memory subsystem performs relative to an
ideal (perfect) cache. In addition to overall performance speedup, this metric will be used
to measure the relative performance gains of each cache management approach in Section 5.
4.6 The Block Tour and Reuse Concept
The eectiveness of a cache management scheme can also be measured by its ability to
minimize the cumulative number of block tours during a program run. Individual cache
block tours are monitored and classied based on the reuse patterns they exhibit. A tour
that sees a reuse of any word of the block is considered dynamic temporal; a tour that sees
no word reuse is dynamic nontemporal. Both dynamic temporal and dynamic nontemporal
tours can be further classied as either spatial (when more than one word was used) or
nonspatial (when no more than one word was used). This allows us to classify each tour into
one of four data reuse groups: 1) nontemporal nonspatial (NTNS), 2) nontemporal spatial
(NTS), nonspatial (TNS), and management
schemes should result in fewer (longer) tours, and a consequently higher percentage of data
references to blocks making TS tours. NTNS and NTS tours are problematic; more frequent
references to such data are likely to cause more cache pollution. To minimize the impact of
bad tours, a good multi-lateral cache management scheme should utilize an accurate block
behavior prediction mechanism for data allocation decisions.
5 Experimental Results
Miss ratio is often used to rank the performance benefits of particular cache schemes.
However, miss ratio is only weakly correlated with the performance of latency-masking processors
with non-blocking caches. Furthermore, it fails to capture the latency-adding effect
of delayed hits on overall performance. Delayed hits, discussed in [22][21], are accesses to
data that are currently returning to the cache on behalf of an earlier miss to that cache block.
Delayed hits incur latencies larger than cache hits, but generally less than a full cache miss,
as the requested data is already in transit from the next level of memory. Two programs
exhibiting similar miss ratios may thus have quite different overall execution times due to
differing numbers of delayed hits and the extent of latency masking, as shown in [22].
To avoid oversimplifying a cache scheme's impact on overall performance, we instead
concentrate on two metrics from timing-sensitive experiments: overall speedup relative to a
base cache and the Relative Cache Effects Ratio, presented above.

[Table 4: Miss ratios of the 6 cache schemes running the 8 benchmarks: Compress, Gcc, Go, Hydro2d, Li, Perl, Su2cor, Swim.]
5.1 Miss Ratio
Table 4 shows the miss ratios for each of the six cache configurations when running the
eight benchmarks, and Figure 2 shows the corresponding speedup relative to a 16K direct-mapped
cache. Naively, we might assume that when comparing two configurations for a
particular application, a higher miss ratio would imply a lower speedup, and that (since
cache stalls account for only a portion of the run time) the relative speedup would be less
than the relative miss ratio. However, a comparison of Table 4 and Figure 2 shows that this
assumption is not valid. In compress, for example, the miss ratio of PCS is about 1.01x that
of NTS, but its run time is about 1.04x longer. In gcc, the 32K direct-mapped cache actually
has a higher miss ratio, but less run time than the 16K 2-way associative cache. In swim,
NTS has a higher miss ratio, but less run time than the MAT, 16K and 32K direct-mapped,
and 16K 2-way caches. Thus, relative miss ratio alone is an inadequate indicator of relative
performance; latency masking, miss latency overlap, and delayed hits must be incorporated
in a timing model to get an accurate performance assessment. Therefore, we concentrate
our performance analysis on latency-sensitive metrics, such as speedup and RCR.
5.2 Speedup
The speedup achieved by each scheme for each program is shown in Figure 2, where the
single direct-mapped (16k:1w) cache is taken as the base. Overall, the speedup obtained
by using the multi-lateral cache schemes ranges from virtually none in hydro2d to just over
16% in go with NTS. Clearly, some of the benchmarks tested do not benefit from any of
the improvements offered by the cache schemes evaluated, i.e. better management of the L1
data store by the multi-lateral schemes, increased associativity of a single cache (16k:2w),
or a larger cache (32k:1w).

[Figure 2: Overall execution time speedup for the five evaluated cache schemes, relative to a single direct-mapped 16K cache.]

In benchmarks where there is appreciable performance gain over
the base cache, the multi-lateral schemes often perform as well as or better than either a
higher-associative single cache or a larger direct-mapped cache. In compress and gcc, the
benchmarks' larger working sets benefit from the larger overall cache space provided by the
direct-mapped structure, although even for these benchmarks the multi-lateral schemes
are able to obtain a significant part of the performance boost via their better management
of the cache. Despite their smaller size, the multi-lateral caches generally perform well
compared to the larger direct-mapped cache and are generally faster than the 2-way
associative cache. In the multi-lateral schemes, the larger direct-mapped A cache offers fast
access, and the smaller, more associative B cache can still be accessed quickly due to its
small size. Our experiments show that using an 8-way associative B cache instead of a fully
associative B cache would reduce performance by less than 1%.
Among the multi-lateral schemes, we see that the NTS scheme provides the greatest
speedup in all benchmarks except for li (where MAT performs best), su2cor, and swim
(where PCS performs best), but the best multi-lateral cache speedups are only on the order
of 1% for these three benchmarks. Both the MAT and PCS schemes can perform well
when groups of blocks exhibit similar reuse behavior on consecutive tours through the cache.
The NTS scheme may, however, fail to detect these reuse patterns because it correlates its
reuse information to individual cache blocks, as opposed to macroblock memory regions in
MAT or to memory accessing instructions in PCS. Thus, the MAT and PCS schemes can
perform well if programs exhibit SIMD (single-instruction, multiple-data) behavior, where
the reference behavior of nearby memory blocks or blocks referenced by the same memory
accessing instruction may be a better indicator of reuse behavior than the usage of an
individual block during its last tour. However, the NTS scheme is still competitive in these
three benchmarks, and thus gives the best overall performance of these schemes over the full
suite of benchmarks.
5.3 RCR Performance
Figure 3 shows the RCR performance of MAT, NTS, PCS, and the two single-structure
caches, where the 16K direct-mapped cache serves as the base for comparison. We see
here that NTS and PCS eliminate more than 50% of the finite cache penalty experienced
by go and perl. In compress, gcc, and li, the 32K single-structure direct-mapped cache
performs best. However, the difference in RCR between it and the best-performing multi-lateral
scheme is not very large, except for li, where it reduces the finite cache penalty
more than twice as much as MAT, the best multi-lateral scheme for this benchmark. None
of the caches show a significant improvement in RCR for the remaining three benchmarks
(hydro2d, su2cor, and swim).
In some instances, a multi-lateral scheme can experience poor performance, e.g. MAT
in the perl benchmark, relative to the other multi-lateral schemes. The block allocation
scheme of MAT is not well matched to the characteristics of the perl benchmark. If many of
the blocks are short-lived, but frequently accessed, those blocks will be placed in the smaller,
fully-associative cache. If this behavior continues, many blocks will contend for space
in the smaller 2K fully associative cache while the larger, 16K cache is badly underutilized.
Such a phenomenon can occur in any of these multi-lateral cache schemes; in the extreme,
the multi-lateral scheme's performance may degrade to that of the B cache by itself. This
performance degradation can be addressed by improved block allocation mechanisms, as
discussed in Section 6.

[Figure 3: RCR performance of the evaluated configurations running the eight benchmarks. RCRs near 1.0 have performance similar to the base 16K direct-mapped cache while RCRs closer to 0 approach the performance of a perfect cache.]
5.4 Performance Differences and Their Causes
Though MAT and NTS both make block allocation decisions based on the effective address
of the block being accessed, their performance differs. MAT may make poor allocation
decisions when either the missed block or the block it would replace in cache A has a
markedly different desirability than the perceived desirability of the macroblock in which it
resides. When such a disparity in desirability occurs, a block-based desirability mechanism,
such as that used in NTS, will perform better. Table 5 presents a tour analysis for go and
su2cor. The performance of NTS relative to MAT is well demonstrated in go. Not only does
NTS appear to manage the tours in this program better, but it actually reduces the number
of tours that MAT experiences by 32%. In the case of su2cor, where the performance
difference between MAT and NTS is small (in terms of RCR), it is clear that the application
itself has much more nontemporal spatial data (> 17%), and neither scheme reduces the
tours seen by a 16KB direct-mapped cache by more than 6%.
The tour analysis for the two single-structure caches is also shown for these two widely
disparate benchmarks. The analyses used to compare MAT and NTS can also be used to
compare multi-lateral caches against single-structure caches.

[Table 5: Tour analysis for Go (top) and Su2cor (bottom), listing for each cache management scheme the total number of tours, the reduction in tours, and the total percentage of references to each tour group.]

In go, the 32K direct-mapped
cache has the best single-structure cache performance; though it has a slightly lower percentage
of TS tours than the NTS cache, it has substantially more overall tours, and thus
worse overall performance. As with MAT and NTS in su2cor, the 2-way associative and
direct-mapped caches fail to significantly improve performance over the base cache due
to the high percentage of NTS data accessed.
NTS performs better than PCS overall in both speedup and RCR. The different basis
for decision making used by PCS and NTS (PC and effective address, respectively) results
in different performance. The PCS scheme may place a block suboptimally in the cache
since its placement is influenced by other blocks previously referenced by the requesting
PC. For example, a set of blocks may be brought into the cache by one instruction at the
beginning of a large routine. These blocks may be reused in different ways during different
parts of program execution (e.g. temporal during an initialization phase and nontemporal
during the main program's execution). All of these blocks' usage characteristics may be
attributed to a single entry in the DU, tied to the PC of the instruction that brought the
blocks to the cache. When each of these tours ends, the instruction's entry in the DU is
updated with that particular tour's behavior, directly affecting the placement of the next
block requested by this instruction. In effect, the allocation decisions of PCS are influenced
by the most recently replaced block that is associated with the load instruction in question;
if the characteristics of those most recently replaced blocks are not persistent, as discussed in
Section 6.3.4, the allocation decisions made by PCS for this load instruction will vary often,
potentially degrading performance.
However, program counter management schemes may be good if a given instruction loads
data whose usage is strongly biased [14] in one direction, i.e. if these tours are almost all
temporal or almost all nontemporal. In this case, accurate behavior predictions for future
tours will result in good block placement for that instruction. However, if the data blocks
loaded by the instruction have differing usage characteristics (i.e. weakly biased [14]), then
placement decisions of its blocks will be poor.
Block usage history (T/NT) is kept in a single bit (NTS and PCS); macroblock access
frequency is kept in an n-bit counter (MAT). Reducing the counter size in MAT generally
leads to decreased performance [10]. However, keeping tour history using a 2- or 3-bit counter
in NTS and PCS showed virtually no performance benefit over the 1-bit scheme.
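For concreteness, the two kinds of state can be sketched as follows; the class shapes are our own illustration, and the counter update rules are assumptions rather than the schemes' documented hardware:

class OneBitHistory:
    # Last tour's T/NT behavior for a block (NTS) or a load PC (PCS).
    def __init__(self):
        self.temporal = True          # unknown entries default to the A cache

    def update(self, tour_was_temporal):
        self.temporal = tour_was_temporal

class SaturatingCounter:
    # n-bit saturating counter, as MAT keeps per macroblock to track
    # access frequency; the increment/decrement policy here is illustrative.
    def __init__(self, bits=3):
        self.max = (1 << bits) - 1
        self.value = 0

    def increment(self):
        self.value = min(self.value + 1, self.max)

    def decrement(self):
        self.value = max(self.value - 1, 0)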
6 Using Reuse Information in Data Cache Management
Each of the multi-lateral schemes operates on the assumption that reuse information is
useful in actively managing the cache. In this section, we assess the value of reuse information
for making placement decisions in multi-lateral L1 cache structures. We first examine optimal
cache structures to determine how they exploit reuse information in cache management. We
then outline the experiments performed to validate the use of reuse information and compare
the performance of multi-lateral schemes to the performance of near-optimally managed
caches of the same size.
6.1 Optimally and Near-Optimally Managed Caches
Belady's MIN [2] is an optimal replacement algorithm for a single-structure cache, i.e. it
results in the fewest misses 5 . While it is interesting to see how reuse information is used
5 Note that we do not consider timing models in this section, for which MIN is not an optimal algorithm.
to manage a single cache, we are interested in determining how an optimal replacement
algorithm for a multi-lateral cache makes replacement decisions, and how reuse information
might best be exploited. However, no direct extension of MIN to multi-lateral caches is
known. The only exception is where both the A and B caches of a multi-lateral configuration
are fully associative and blocks are free to move between the two caches as necessary in order
to retain the most useful blocks in the cache structure; this multi-lateral cache is, however,
degenerate as it reduces to a single fully associative cache of size equal to the total of cache
A plus cache B. In this case, MIN can be used to optimally manage the hardware-partitioned
fully associative single cache.
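The rule itself is compact; the following Python sketch renders Belady's MIN for a single fully associative cache (an illustration of the policy, not the simulator used in this study):

def min_misses(trace, capacity):
    # Belady's MIN: on a miss with a full cache, evict the resident block
    # whose next reference is farthest in the future ('never referenced
    # again' counts as infinitely far).  Returns the miss count.
    cache, misses = set(), 0
    for t, block in enumerate(trace):
        if block in cache:
            continue
        misses += 1
        if len(cache) >= capacity:
            def next_use(b):
                for u in range(t + 1, len(trace)):
                    if trace[u] == b:
                        return u
                return float('inf')
            cache.remove(max(cache, key=next_use))
        cache.add(block)
    return misses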
We refer to Belady's MIN algorithm, when applied to the dual-fully associative caches, as
opt. While opt gives an upper bound on the performance of a multi-lateral cache of a given
size and associativity, comparing opt to the implementable schemes does not yield a direct
comparison of replacement decisions based on reuse information. Since multi-lateral caches
typically have caches A and B of differing associativity, the performance difference between
the implementable schemes and opt may be due not only to replacement decisions, but also to
mapping restrictions placed on the implementable schemes by their limited associativity. We
would instead like to compare the performance of the implementable schemes to an optimally
managed multi-lateral configuration where the A and B caches are of differing associativity
in order to better attribute the differences in placement and replacement decisions to the
management policy itself, rather than to the associativity of the configuration.
Pseudo-opt [20] is a multi-lateral cache management scheme for configurations where the
associativity of A is no greater than that of B. As in the configuration for opt, free movement
of blocks between A and B is allowed in this scheme, provided that the contents of A and B
are disjoint. Management is adapted from Belady's MIN algorithm, as follows. On a miss,
the incoming block fills an empty slot in the corresponding set of cache A or cache B, if one
exists. If no such empty slot exists, then for each set of cache A, an extended set of blocks
is defined, consisting of all blocks in cache A and cache B that map to this set in cache A.
[Figure 4: An example showing why pseudo-opt is suboptimal. Reference sequence: b, c, F, D, b, E, D, F, c, with hit/miss outcomes and cache contents after each access.]
For each extended set that includes any block currently resident in cache B, i.e. sets whose
extended set is larger than the associativity of cache A, the block of the extended set whose
next reference is farthest in the future is found. If this block does not currently reside in cache
B, it is swapped with one of the cache B blocks of this extended set. The incoming block is
placed in cache A and the block in that set next referenced farthest in the future is moved
to cache B, overfilling it by one block. The cache B block next referenced farthest in the
future is then replaced. This choice is not optimal in all cases, as illustrated by the following
example.
For the reference pattern in Figure 4, consider a design with a direct-mapped cache A of
size 2 blocks (2 sets) and a one block cache B. The references shown in the figure are block
addresses. Upper case letters map to set 0 and lower case letters map to set 1 of cache A.
The figure shows the contents of set 0 and set 1 in cache A and cache B after each memory
access. Using pseudo-opt we incur 7 misses. The first 3 compulsory misses fill empty blocks.
D replaces c at time 4, since c is next referenced further in the future than F or b. At time
6, E cannot replace b (which maps to the other set) and replaces F rather than D. Finally, F
and c miss. However, the minimum possible number of misses is 6. This can be achieved by
replacing F (in set 0 of cache A) instead of c (in cache B) at time 4. E then replaces b at
time 6 after swapping b and c, and finally F misses, but c hits.
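A simplified rendering of the pseudo-opt miss handling is sketched below. It omits the extended-set swap described above (on a conflict the incoming block takes the A slot, the displaced block spills into B, and the B block next referenced farthest in the future is evicted), yet it reproduces the 7 misses of the Figure 4 example; this is our reading of the scheme, not the authors' code:

def pseudo_opt_misses(trace, b_capacity, set_of):
    A = {}        # set index -> resident block (direct-mapped A cache)
    B = set()     # fully associative B cache
    misses = 0
    for t, blk in enumerate(trace):
        s = set_of(blk)
        if A.get(s) == blk or blk in B:
            continue                      # hit in A or B
        misses += 1
        if s not in A:
            A[s] = blk                    # empty A slot
        elif len(B) < b_capacity:
            B.add(blk)                    # empty B slot
        else:
            A[s], spilled = blk, A[s]     # displace the A resident into B,
            B.add(spilled)                # overfilling B by one block...
            def next_use(b):
                return next((u for u in range(t + 1, len(trace))
                             if trace[u] == b), float('inf'))
            B.remove(max(B, key=next_use))  # ...then evict the farthest-used
    return misses

trace = list('bcFDbEDFc')
assert pseudo_opt_misses(trace, 1, lambda b: 0 if b.isupper() else 1) == 7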
In cases where the A and B caches are fully associative, pseudo-opt reduces to opt. Although
pseudo-opt is also not an implementable policy, its performance, as seen in Table 6, is
close to that of opt. Furthermore, pseudo-opt's performance is much better than the implementable
multi-lateral schemes' performance. Most of the performance difference between
the implementable schemes and opt is thus due to non-optimal allocation and replacement
decisions; only a small portion of the performance difference (no more than the difference
between opt and pseudo-opt) is due to the restricted associativity of the implementable
schemes. We therefore use pseudo-opt for comparison to the implementable schemes in order
to eliminate the associativity difference from the evaluations and give a better idea of
the realizable performance of a limited-associativity multi-lateral cache. The differences in
placement and replacement decisions seen in the implementable schemes provide insights
into their performance relative to a near-optimal scheme.
6.2 Simulation Environment
To evaluate the opt and pseudo-opt schemes, we collected memory reference traces generated
from the SimpleScalar processor environment [3]. Each trace entry contains the
(effective) address accessed, the type of access (load or store), and the program counter of
the instruction responsible for the access.
We skipped the first 100 million instructions (to avoid initialization effects) and analyzed
the subsequent 25 million memory references. We limited the number of memory references
evaluated due to the space and processing time required to perform the opt and pseudo-opt
cache evaluations. These experiments use five SPEC95 integer benchmarks (compress,
gcc, go, li, and perl), since sampling such a small portion of a floating point program
will likely generate references that form part of a regular loop, resulting in very low miss
rates. However, the sampled traces for the integer programs do reasonably mirror the actual
memory reference behavior of the complete program execution, as shown in Section 5.
The traces were annotated to include the information necessary to perform the opt and
pseudo-opt replacement decisions and counters for many useful statistics. These included
the outcome of an access (hit or miss) as it would have occurred in an opt/pseudo-opt
configuration, the usage information for each block tour (as seen for each scheme), and the
number of blocks in each reuse category that are in the cache at each instant in time (i.e. the
number of NTNS, NTS, TNS, and TS blocks resident in the cache). From these statistics,
we gathered information regarding the performance of the opt and pseudo-opt management
schemes and compared the performance of the implementable schemes to that of the optimal
schemes on an access-by-access basis.
Due to the smaller size of the input sets in this section, compared to the full program
executions performed in Section 4, we chose a direct-mapped, 8KB A cache and a fully
associative, 1KB B cache, each with a 32B blocksize, for the pseudo-opt configuration. The
opt configuration is simply a 9K fully associative cache. Using the larger, (16+2)K caches
of Section 4 for these evaluations would not have been useful in determining each scheme's
performance, as the SPEC benchmarks already have small to moderately sized working sets
[5]; these relatively short traces would show little performance benefit from active management
in a large cache, whereas the benefits of active management are highlighted when using
smaller caches. The mlcache simulator used in this section only deals with the cache and
memory, not the processor. Without processor effects, timing is of little significance and
mlcache is used here only as a behavioral-level simulator.
6.3 Results
6.3.1 Analysis of opt vs. pseudo-opt
We analyzed the annotated traces produced from the opt and pseudo-opt runs to determine
their relative performance based on miss ratio measurement and block usage information.
In particular, we counted the number of tours that show each reuse pattern, the number of
each type of block resident in the cache at any given instant, and how often a block with a
prior reuse characteristic changes its usage pattern in a subsequent tour.
6.3.2 Miss Ratio
As the miss ratios in Table 6 show, the performance of pseudo-opt is relatively close to
that of opt, except for go. Note that miss ratio in these experiments is a straightforward
performance metric, as the simulations are behavioral and do not include access latencies
or processor latency-masking effects.

[Table 6: Miss ratios for the five (8+1)KB and the 16KB cache configurations on the trace inputs (Compress, Gcc, Go, Li, Perl). PONS is the pseudo-opt no-swap scheme.]

In go, the performance disparity between opt and
pseudo-opt is due to the limitations on the associativity of the A cache, and possibly also
to the suboptimal replacement policy of pseudo-opt. On each replacement, any number of
swaps can be done in opt to rearrange the cache contents so that the least desired cache
block is replaced. However, in pseudo-opt, the movement choices are limited by the mapping
requirements of the direct-mapped A cache: at most one block, mapping to the set in B
that is associated with the incoming block, need be swapped on each replacement. Despite
its more limited choice of blocks to replace, the performance of pseudo-opt is still very close
to opt except for go, and the difference in performance between pseudo-opt and opt is always
much smaller than the difference between the implementable configurations and pseudo-opt.
The actual performance of the implementable schemes is discussed in Section 6.4.
In addition to the advantage of future knowledge, pseudo-opt differs from the implementable
schemes by freely allowing blocks to move between the A and B caches in order to
obtain the best block for replacement. However, this movement actually accounts for very
little of the performance difference seen between pseudo-opt and the implementable schemes.
To verify this claim, we created a version of pseudo-opt, called PONS (pseudo-opt no-
swap), that disallows data movement between the caches. The management scheme used by
PONS is the same as for pseudo-opt, except that no blocks are swapped between cache A
and B at any time. The incoming block replaces the block in its A or B set that is next
referenced farthest in the future. In PONS, it is thus possible that a replaced block in cache
B is referenced sooner in the future than a block in some other set in A that extends into
that set of cache B and could have been replaced if swaps were allowed.
[Figure 5: Dynamic cache block occupation in opt and pseudo-opt (denoted po), grouped into NTNS, NTS, TNS, and TS usage patterns, per benchmark.]

However, we see in Table 6 that the miss ratios of pseudo-opt and PONS are actually
very close, indicating that the omission of inter-cache data movement has a small effect on
near-optimum performance. In particular, the performance difference between pseudo-opt
and PONS is much smaller than the difference between pseudo-opt and the implementable
schemes, showing that the major advantage of pseudo-opt comes from its management using
future knowledge as opposed to its ability to move blocks between caches. Since some multi-lateral
schemes do allow data movement between the A and B caches, we use pseudo-opt as
the basis for comparison to the implementable multi-lateral cache schemes.
6.3.3 Cache Block Locality Analysis
We examined the locality of cache blocks that are resident in the cache structure at any
given time by counting the number of blocks in each category at the time of each miss and
taking an average over the duration of the program. The locality of the cache blocks for the
opt and pseudo-opt configurations is shown in Figure 5.
For opt and pseudo-opt managed caches, as expected, data that exhibits temporal reuse
occupies a large portion of the cache space; nontemporal data occupies at most 23% of the
cache (compress under pseudo-opt). Furthermore, the vast majority of the blocks in L1
are both temporal and spatial. Though the blocks are relatively small in size (32 bytes), we
find that spatial reuse can be exploited well if the cache is managed properly. As expected,
the pseudo-opt configuration (vs. the opt configuration) generally holds fewer TS blocks in
the cache, due to the pseudo-opt configuration's limited placement options and sub-optimal
replacement decisions.

[Figure 6: Tour usage persistence (NT Persistence and T Persistence) within opt and pseudo-opt.]

However, perl is an exception to this observation. In both opt and
pseudo-opt, all of perl's data in the cache is spatial (TS or NTS), as seen in Figure 5;
keeping an NTS block in cache long enough so that it obtains TS status, as done by pseudo-
opt, slightly increases the miss ratio of the perl benchmark above that of opt, indicating
that maximizing the number of TS blocks in cache is not always the best policy.
6.3.4 Block Usage Persistence
While the presence of certain types of blocks in the cache shows the potential benefit of
managing the cache with reuse information, management based on this information is not
straightforward. Blocks can have different usage characteristics during different portions
of program execution, making block usage hard to predict. The prediction of a particular
block's usage pattern is similar to the branch prediction problem. However, branch outcomes
are easier to predict than optimal block usage characteristics for aiding placement decisions.
To assess the value of reuse information, we examined the persistence of cache block behavior
in the opt and pseudo-opt schemes, i.e. once a block exhibits a given usage characteristic
in a current tour, how likely is it to maintain that characteristic in its next tour? If there
is a high correlation between past and future use (i.e. the block's usage characteristic is
persistent), prediction of future usage behavior will be easier. Block persistence in terms of
same and not-same is therefore analogous to the same-direction terminology used in branch
prediction studies [14] to predict the path a specic branch will take given behavior history.
Instead of determining block persistence in terms of the four usage patterns NTNS, NTS,
TNS, and TS, we decided to examine only the persistence of T and NT patterns. This
coarser granularity grouping is more directly relevant to our placement decisions for 2-unit
cache structures.
Figure 6 presents data for block usage persistence in successive tours. In general, blocks
that exhibit NT usage behavior in prior tours have a strong likelihood of exhibiting NT
behavior again in future tours. In the opt scheme, this likelihood ranges from 63% (perl)
to 95% (compress) for the evaluated benchmarks. However, the pseudo-opt scheme shows
somewhat less persistence, ranging from 57% (go) to 87% (compress) for NT blocks. Over-
all, the persistence of NT blocks in both opt and pseudo-opt schemes is well over 50%. Blocks
that exhibit T usage behavior are less persistent, and thus less predictable. T block persistence
in the opt scheme ranges from 40% (go) to 97% (li), and is similar for pseudo-opt:
45% (compress) to 92% (li).
li skews these numbers somewhat; regardless of the type of usage characteristics that a
block exhibited in a tour, its next tour is highly likely to exhibit temporal reuse. While temporal
blocks are persistent in li, they are much less persistent for the other three benchmarks.
As compress, gcc, go, and perl represent a wider range of program execution than li alone,
we see that future tours of temporal blocks are harder to predict than nontemporal blocks,
i.e. temporally tagged blocks are only weakly biased toward exhibiting T usage patterns in
their next tour.
6.4 Multi-Lateral Scheme Performance
Given the performance and reuse information of the opt and pseudo-opt configurations, we
can determine how the implementable schemes perform as a result. We restrict our evaluation
of the implementable schemes to NTS and PCS, the two multi-lateral configurations that
actively place data within L1 based on individual block reuse information.
6.4.1 Miss Ratio Performance of NTS and PCS
To compare with the pseudo-opt configuration used, we configured the NTS and PCS
structures to have an 8KB direct-mapped A cache, a 1KB fully associative, LRU-managed
B cache, 32B block size, and a 32-entry detection unit (DU).
The miss ratio performance of the two configurations is shown in Table 6 along with the
performance of opt, pseudo-opt, PONS, and a direct-mapped 16KB single structure cache. In
concurrence with results of earlier analyses of these multi-lateral configurations [16][17][22],
the NTS and PCS caches each perform about as well as a direct-mapped cache of nearly
twice the size. However, as the performance of pseudo-opt indicates, there is still room for
further improvement.
6.4.2 Prediction Accuracy
NTS makes placement decisions based on a given block's usage during its most recent past
tour: if the block exhibited nontemporal reuse, it is placed in the smaller B cache for its next
tour, otherwise it is placed in the A cache. PCS makes placement decisions based on the
reuse of blocks loaded by the memory accessing instruction. If the most recently replaced
block loaded by a particular PC exhibited nontemporal reuse, the next block loaded by that
PC is predicted to do the same and is placed in the smaller B cache, otherwise it is placed
in the larger A cache. In both schemes, accessed blocks with no matching entry in the DU
are placed in the A cache by default.
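The shared decision logic can be sketched as follows, with the DU modeled as a plain dict keyed by block address (NTS) or load PC (PCS); a real DU is a small finite table, so this is an idealization:

def choose_cache(du, key):
    # du maps key -> True if the last completed tour recorded for this
    # key was temporal.  Predicted-temporal (and unknown) blocks go to
    # the larger A cache; predicted-nontemporal blocks go to B.
    return 'A' if du.get(key, True) else 'B'

def record_tour_end(du, key, tour_was_temporal):
    # On eviction, remember the finished tour's T/NT behavior (one bit).
    du[key] = tour_was_temporal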
The accuracy of these predictions can be determined based on the actual usage of the
blocks in the pseudo-opt scheme. For instance, a prediction of T behavior for a block is
classified as correct if the actual usage of that block in the pseudo-opt scheme is T. As noted
in Section 6.2, the annotated traces fed to our trace-driven cache simulator, a modified version
of mlcache, contain the actual block usage information for the tours seen in the pseudo-opt
scheme. The simulator provides information on the selected scheme's block prediction and
actual usage accuracy. Given this information, it is easy to determine how well each of the
configurations predicts a block's usage, and consequently, whether it is properly placed within
the L1 cache structure.

Table 7: Tour prediction and actual usage accuracy for NTS and PCS, broken into NT, T, and overall
(total) accuracy. Accuracies are relative to actual block usage in pseudo-opt.

                    Tour prediction accuracy   Actual usage accuracy
                    NTS      PCS               NTS      PCS
Compress   T        0.240    0.210             0.402    0.412
Gcc        Total    0.527    0.595             0.696    0.675
Go         T        0.588    0.621             0.646    0.652
           Total    0.561    0.596             0.624    0.609
Li         Total    0.786    0.723             0.787    0.758
Perl       T        0.326    0.382             0.727    0.738
           Total    0.327    0.510             0.861    0.875
Table 7 shows the prediction accuracy of NTS and PCS for the benchmarks examined.
In general, ignoring li due to its excessively high temporal reuse, the prediction accuracy is
relatively low, ranging from 25% (compress with NTS) to 60% (go with PCS), despite the
larger granularity of block typing (two categories, T and NT, rather than the complete four
category breakdown). Both the NTS and PCS schemes show poor prediction accuracies,
directly impacting their block allocation decisions and their resulting overall performance.
Improved block usage prediction for these schemes may result in better block placement and
higher performance.
6.4.3 Actual Usage Accuracy
Regardless of the prediction accuracy, a given block will exhibit reuse characteristics based
on the duration and time of its tour through the cache. We examined the actual usage of
each block as it was evicted from each of the caches in the implementable schemes to see how
that usage compared to the same blocks' usage in the pseudo-opt scheme. This comparison
sheds some light on the effect of eliminating the movement of blocks between the caches
during an L1 tour.
Table 8: Average tour lengths for pseudo-opt, NTS, PCS, and a direct-mapped single cache. Tour lengths
measure the number of accesses handled while the block is in cache. Tours are broken into NT and T.

            Compress          Gcc               Go                Li                Perl
            NT       T        NT       T        NT       T        NT       T        NT     T
pseudo-opt  1293.3   26695    1731.2   25946    1177.8   22297.3  4400.4   34221.5  957    54749
NTS         4.1      44.1     2.0      52.0     1.4      26.1     3.3      66.3     7.7    130.4
PCS         3.7      48.9     2.1      49.8     1.3      24.1     3.2      66.1     7.9    134.0
8K:1w       3.6      35.4     2.0      35.7     1.2      14.0     2.6      48.0     3.7    35.9
We see in Table 7 that relative to the low prediction accuracy we saw in Section 6.4.2, the
actual usage of the blocks in the cache is closer to the actual usage of the blocks in pseudo-
opt. Thus, despite our placement decisions, some blocks still exhibited reuse behavior akin
to that seen in the pseudo-opt conguration. Furthermore, although NTS exhibited lower
tour prediction accuracy than PCS (in all the benchmarks except for li), it exhibited higher
actual usage accuracy (in all the benchmarks except for perl).
6.4.4 Tour Lengths
While these accuracies are interesting, the block tour lengths in the implementable schemes
are not directly related to the tour lengths in the pseudo-opt scheme, as pseudo-opt has a
more elaborate replacement policy. Disparate tour lengths can affect these comparisons
in two ways. First, for tours that are shorter in the pseudo-opt configuration than in the
implementable schemes, the implementable schemes may keep a block in L1 longer than
necessary and permit it to exhibit seemingly more beneficial usage patterns (i.e. TNS or
TS). While this makes a particular block seem more useful, the longer presence of that block
in L1 in the implementable schemes may unduly shorten the tours of a significant number
of other blocks, leading to their misclassification and precluding their optimal placement.
Conversely, where tours are longer in pseudo-opt than in the implementable schemes, the
corresponding blocks in the implementable schemes may be replaced before they can exhibit
their optimal usage characteristics, causing poor usage accuracy, poor prediction accuracy,
and poor placement into the small B cache, and hence shorter tour lengths.
Table 8 shows the average tour length for blocks showing T and NT usage characteristics.
As expected, NT tours are much shorter than T tours, and tours in pseudo-opt are on
average much longer than tours in either NTS or PCS, due to its future knowledge and
greater flexibility in management of data once it has entered the L1 cache structure. In NTS
and PCS, once a block is placed in L1, it remains in the same cache until it is replaced, and
is thus subject to the replacement policy inherent in the corresponding cache. For instance,
if a block is deemed to be T in either NTS or PCS, it is placed in the direct-mapped A
cache. If a subsequent block maps to the same set in A and is also marked T, the earlier
block will be evicted, possibly before it can exhibit its optimum usage characteristics.
In pseudo-opt, if it is deemed desirable at that time, the earlier block would simply be moved to
the B cache. As a result, the tour length of the block would tend to be much longer under
pseudo-opt than under NTS or PCS.
We see that there is a clear gap between the average tour length of blocks in pseudo-opt
vs. the implementable schemes. This difference in tour lengths is, however, much larger
than the difference in miss ratio for the implementable vs. pseudo-opt schemes. The performance
difference is much smaller because the average tour length in pseudo-opt has increased
greatly, but the tour lengths of conflicting blocks may not be helped as much by the better
management scheme. Some blocks may be chosen to remain in cache for nearly the entire
program's execution, as they are accessed regularly (though not necessarily frequently), and
would thus have very long tour lengths. These long-lived blocks greatly increase the average
tour lengths seen for each benchmark, though they may only reduce the overall number of
misses by a small amount.
From this study we see that by improving upon the prediction of the usage characteristics
of a block and the management of those blocks once they are placed in the L1 cache structure,
we might improve the performance of the NTS and PCS schemes to reduce the performance
gap that exists between these schemes and pseudo-opt. We see from Table 8 that each of
the multi-lateral schemes does increase the tour lengths relative to the single, direct-mapped
cache structure of nearly the same size, indicating that these schemes are making decisions
that improve data usage and performance relative to a conventional cache.
Conclusions
In this paper we have evaluated three different implementable methodologies (MAT, NTS,
and PCS) for managing an on-chip data cache based on active block allocation via capturing
and exploiting reuse information. In general, an actively managed cache structure significantly
improves upon the performance of a traditional, passively-managed cache structure
of similar size and competes with one of nearly twice the size. Further, the individual effective
address reuse history scheme used in NTS generally gives better performance than the
macroblock effective address-based MAT or the PC-based PCS approaches.
We compared the performance of the PCS and NTS schemes to the performance of a
near-optimally managed cache structure (pseudo-opt). The difference in performance, block
usage prediction, actual block usage, and tour lengths between the implementable schemes
and pseudo-opt shows much room for improvement for these actively-managed caches.
Thus, multi-lateral cache structures that actively place data within the cache show promise
of improving cache space usage. However, the prediction strategies used in these current
schemes are too simple. Improving the prediction algorithms, as well as actively managing
blocks once they are placed in the L1 cache structure (active replacement), can help improve
the performance of the implementable schemes and may enable them to approach optimal
cache space usage.
Acknowledgments
This research was supported by the National Science Foundation grant MIP 9734023 and
by a gift from IBM. The simulation facility was provided through an Intel Technology for
Education 2000 grant.
--R
A study of replacement algorithms for a virtual storage computer.
Evaluating future microprocessors: the simplescalar tool set.
Prefetching and memory system behavior of the spec95 benchmark suite.
Reducing memory latency via non-blocking and prefetching caches
Improving cache performance by selective cache bypass.
Data cache with multiple caching strategies tuned to different types of locality
documentation.
Improving direct-mapped cache performance by the addition of a small fully associative cache and prefetch buffers
Reducing conflicts in direct-mapped caches with a temporality-based design
Utilizing reuse information in data cache management.
Improving performance of an l1 cache with an associated buffer
Early design cycle timing simulation of caches.
mlcache: A flexible multi-lateral cache simulator
--TR
--CTR
Prabhat Jain , Srinivas Devadas , Daniel Engels , Larry Rudolph, Software-assisted cache replacement mechanisms for embedded systems, Proceedings of the 2001 IEEE/ACM international conference on Computer-aided design, November 04-08, 2001, San Jose, California
Salvador Petit , Julio Sahuquillo , Jose M. Such , David Kaeli, Exploiting temporal locality in drowsy cache policies, Proceedings of the 2nd conference on Computing frontiers, May 04-06, 2005, Ischia, Italy
Mark Brehob , Richard Enbody , Eric Torng , Stephen Wagner, On-line restricted caching, Journal of Scheduling, v.6 n.2, p.149-166, March/April
Mark Brehob , Stephen Wagner , Eric Torng , Richard Enbody, Optimal Replacement Is NP-Hardfor Nonstandard Caches, IEEE Transactions on Computers, v.53 n.1, p.73-76, January 2004
Mark Brehob , Richard Enbody , Eric Torng , Stephen Wagner, On-line restricted caching, Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms, p.374-383, January 07-09, 2001, Washington, D.C., United States
Wang , Nelson Passos, Improving cache hit ratio by extended referencing cache lines, Journal of Computing Sciences in Colleges, v.18 n.4, p.118-123, April
Martin Kampe , Per Stenstrom , Michel Dubois, Self-correcting LRU replacement policies, Proceedings of the 1st conference on Computing frontiers, April 14-16, 2004, Ischia, Italy
J. Sahuquillo , S. Petit , A. Pont , V. Milutinovic, Exploring the performance of split data cache schemes on superscalar processors and symmetric multiprocessors, Journal of Systems Architecture: the EUROMICRO Journal, v.51 n.8, p.451-469, August 2005
Zhigang Hu , Stefanos Kaxiras , Margaret Martonosi, Timekeeping in the memory system: predicting and optimizing memory behavior, ACM SIGARCH Computer Architecture News, v.30 n.2, May 2002
Youfeng Wu , Ryan Rakvic , Li-Ling Chen , Chyi-Chang Miao , George Chrysos , Jesse Fang, Compiler managed micro-cache bypassing for high performance EPIC processors, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey | reuse information;active management;multilateral cache |
324139 | Purely functional, real-time deques with catenation. | We describe an efficient, purely functional implementation of deques with catenation. In addition to being an intriguing problem in its own right, finding a purely functional implementation of catenable deques is required to add certain sophisticated programming constructs to functional programming languages. Our solution has a worst-case running time of O(1) for each push, pop, inject, eject and catenation. The best previously known solution has an O(log*k) time bound for the kth deque operation. Our solution is not only faster but simpler. A key idea used in our result is an algorithmic technique related to the redundant digital representations used to avoid carry propagation in binary counting. | Introduction
A persistent data structure is one in which a change to the structure can be made without destroying
the old version, so that all versions of the structure persist and can at least be accessed (the structure
is said to be partially persistent) or even modified (the structure is said to be fully persistent). In the
functional programming literature, fully persistent structures are often called immutable. Purely
functional 1 programming, without side effects, has the property that every structure created is
automatically fully-persistent. Persistent data structures arise not only in functional programming
but also in text, program, and file editing and maintenance; computational geometry; and other
algorithmic application areas. (See [6, 10, 11, 12, 13, 14, 15, 16, 24, 37, 38, 39, 40, 41].)

* AT&T Laboratories, Florham Park, NJ. Some work done at Princeton University, supported by the Office of
Naval Research, Contract No. N00014-91-J-1463, the NSF, Grants No. CCR-8920505 and CCR-9626862, and a United
States-Israel Educational Foundation (USIEF) Fulbright Grant. hkl@research.att.com.
† Department of Computer Science, Princeton University, Princeton, NJ 08544 USA and InterTrust Technologies,
Sunnyvale, CA. Research at Princeton University partially supported by the NSF, Grants No. CCR-8920505 and
CCR-9626862, and the Office of Naval Research, Contract No. N00014-91-J-1463. ret@cs.princeton.edu.
A number of papers have discussed ways of making specific data structures, such as search
trees, persistent. A smaller number have proposed methods for adding persistence to general data
structures without incurring the huge time and space costs of the obvious method, which is to
copy the entire structure whenever a change is made. In particular, Driscoll, Sarnak, Sleator, and
Tarjan [14] described how to make pointer-based structures persistent using a technique called
node-splitting, which is related to fractional cascading [7] in a way that is not yet fully understood.
Dietz [11] described a method for making array-based structures persistent. Additional references
on persistence can be found in the Driscoll et al. and Dietz papers.
These general techniques fail to work on data structures that can be combined with each other
rather than just be changed locally. Driscoll, Sleator and Tarjan [13] coined the term "confluently
persistent" to refer to a persistent structure in which some update operations can combine two
different versions. Perhaps the simplest and probably the most important example of combining
data structures is catenation (appending) of lists. Confluently persistent lists with catenation are
surprisingly powerful. For example, by using self-catenation one can build a list of exponential size
in linear time.
This paper deals with the problem of making persistent list catenation efficient. We consider
the following operations for manipulating lists:
makelist(x): return a new list consisting of the singleton element x.

push(x; L): return the list that is formed by adding element x to the front of list L.

pop(L): return the pair consisting of the first element of list L and the list consisting of the second
through last elements of L.

inject(L; x): return the list that is formed by adding element x to the back of list L.

eject(L): return the pair consisting of the last element on list L and the list consisting of the first
through next-to-last elements of L.

catenate(K; L): return the list formed by catenating K and L, with K first.

1 For the purposes of this paper, a "purely functional" data structure is one built using only the LISP functions car,
cons, cdr. Though we do not state our constructions explicitly in terms of these functions, it is routine to verify that
our structures are purely functional. Our definition of purely functional is extremely strict; we do not, for example,
allow techniques such as memoization. This contrasts our work with, for example, that of Okasaki [33, 34, 35, 36].
For more discussion of this issue, see Sections 2 and 7.
Observe that push and inject are special cases of catenate. It will be convenient for us to treat them
as separate operations, however. In accordance with convention, we call a list subject only to push
and pop (or inject and eject) a stack and a list subject only to inject and pop (or push and eject) a
queue. Adopting the terminology of Knuth [29], we call a list subject to all four operations push,
pop, inject, and eject a double-ended queue, abbreviated deque (pronounced "deck"). In a departure
from existing terminology, we call a list subject only to push, pop, and inject a stack-ended queue,
or steque (pronounced "steck"). Knuth called steques output-restricted deques, but "stack-ended
queue" is both easy to shorten and evokes the idea that a steque combines the functionalities
of a stack and a queue. Steques with catenation are the same as stacks with catenation, since
catenation makes inject (and push, for that matter) redundant. We call a data structure with
constant worst-case time bounds for all operations a real-time structure.
Our main result is a real-time, purely functional (and hence confluently persistent) implementation
of deques with catenation. Our data structure is both more efficient and simpler than
previously proposed structures [4, 13]. In addition to being an interesting problem in its own right,
our data structure provides a way to add fast catenation to list-based programming languages
such as scheme, and to implement sophisticated programming constructs based on continuations
in functional programming languages. See [15, 16]. A key ingredient in our result is an algorithmic
technique related to the redundant digital representations devised to avoid carry propagation in
binary counting.
The remainder of this paper consists of six sections. Section 2 surveys previous work dealing
with problems related to that of making lists persistent and adding catenation as an efficient list
operation. Section 3 motivates our approach. Section 4 describes how to make deques without
catenation purely functional, thereby illustrating our ideas in a simple setting. Section 5 describes
how to make stacks (or steques) with catenation purely functional, illustrating the additional ideas
needed to handle catenation in the comparatively simple setting of stacks. Section 6 presents our
most general result, an implementation of deques with catenation. This result uses an additional
idea needed to handle an underlying tree-like recursive structure in place of a linear structure.
Section 7 mentions additional related results and open problems.
A preliminary version of part of our work was presented at the 27 th Annual ACM Symposium
on Theory of Computing [27].
Previous Work
Work related to ours can be found in three branches of computer science: data structures; functional
programming; and, perhaps surprisingly, Turing machine complexity. We shall describe this work
approximately in chronological order and in some detail, in an attempt to sort out a somewhat
tangled history.
Let us put aside catenation for the moment and consider the problem of making noncatenable
lists fully persistent. It is easy to make stacks persistent: we represent a stack by a pointer to a
singly-linked list of its elements, the top element on the stack being the first element on the list.
To push an element onto a stack, we create a new node containing the new element and a pointer
to the node containing the previously first element on the stack. To pop a stack, we retrieve the
first element and a pointer to the node containing the previously second element. This is just the
standard LISP representation of a list.
A collection of persistent stacks represented in this way consists of a collection of trees, with
a pointer from each child to its parent. Two stacks with common suffixes can share one list
representing the common suffix. (Having common suffixes does not guarantee this sharing, however,
since two stacks identical in content can be built by two separate sequences of push and pop
operations. Maximum sharing of suffixes can be achieved by using a "hashed consing" technique
in which a new node is created only if it corresponds to a distinct new stack. See [1, 42].)
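In code, this standard representation is just an immutable cons cell; the sketch below (ours, in Python rather than LISP) shows how every version remains valid after updates:

class Cell:
    # An immutable cons cell; a stack is a Cell or None (empty).
    __slots__ = ('head', 'tail')
    def __init__(self, head, tail):
        self.head, self.tail = head, tail

def push(x, s):
    return Cell(x, s)          # the old version s is untouched

def pop(s):
    return s.head, s.tail      # no mutation anywhere

s1 = push(1, None)
s2 = push(2, s1)               # s1 and s2 share the suffix [1]
x, s3 = pop(s2)
assert x == 2 and pop(s1)[0] == 1 and pop(s3)[0] == 1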
Making a queue, steque, or deque persistent is not so simple. One approach, which has the
advantage of giving a purely functional solution, is to represent such a data structure by a fixed
number of stacks so that each operation becomes a fixed number of stack operations. That is, we
seek a real-time simulation of a queue, steque, or deque by a fixed number of stacks. The problem
of giving a real-time simulation of a deque by a fixed number of stacks is closely related to an old
problem in Turing machine complexity, that of giving a real-time simulation of a (one-dimensional)
multihead tape unit by a fixed number of (one-dimensional) one-head tape units. The two problems
can be reduced to one another by noting that a deque can be simulated by a two-head tape unit,
and a one-head tape unit can be simulated by two stacks; thus the deque problem can be reduced
to the tape problem. Conversely, a k-head tape unit can be simulated by deques and two
stacks, and a stack can be simulated by a one-head tape; thus the tape problem can be reduced
to the deque problem. There are two gaps in these reductions. The first is that a deque element
can potentially be chosen from an infinite universe, whereas the universe of tape symbols is always
finite. This allows the possibility of solving the tape problem using some clever symbol encoding
that might not be applicable to the deque problem. But none of the known solutions to the tape
problem exploits this possibility; they all give solutions to the deque problem by the reduction
above. The second gap is that the reductions do not necessarily minimize the numbers of stacks or
one-head tapes in the simulation; if this is the goal, the deque or tape problem must be addressed
directly.
The first step toward solving the tape simulation problem was taken by Stoss [43], who produced
a linear-time simulation of a multihead tape by a fixed number of one-head tapes. Shortly thereafter,
Fisher, Meyer, and Rosenberg [17] gave a real-time simulation of a multihead tape by a fixed number
of one-head tapes. The latter simulation uses a tape-folding technique not directly related to the
method of Stoss. Later, Leong and Seiferas [32] gave a real-time, multihead-tape simulation using
fewer tapes by cleverly augmenting Stoss's idea. Their approach also works for multidimensional
tapes, which is apparently not true of the tape-folding idea.
Because of the reduction described above, the deque simulation problem had already been solved
(by two different methods!) by the time work on the problem began appearing in the data structure
and functional programming literature. Nevertheless, the latter work is important because it deals
with the deque simulation problem directly, which leads to a more efficient and conceptually simpler
solution. Although there are several works [5, 8, 19, 20, 21, 22, 23, 34, 39] dealing with the deque
simulation problem, they all describe essentially the same solution. This solution is based on two
key ideas, which mimic the ideas of Stoss and Leong and Seiferas.
The first idea is that a deque can be represented by a pair of stacks, one representing the
front part of the deque and the other representing the rear part. When one stack becomes empty
because of too many pop or eject operations, the deque, now all on one stack, is copied into two
stacks each containing half of the deque elements. This fifty-fifty split guarantees that such copying,
even though expensive, happens infrequently. A simple amortization argument using a potential
function equal to the absolute value of the difference in stack sizes shows that this gives a linear-time
simulation of a deque by a constant number of stacks: k deque operations starting from an
empty deque are simulated by O(k) stack operations. (See [44] for a discussion of amortization
and potential functions.) This simple idea is the essence of Stoss's tape simulation. The idea of
representing a queue by two stacks in this way appears in [5, 20, 22]; this representation of a deque
appears in [19, 21, 23, 39].
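The amortized (non-persistent) form of this representation is easy to state in code; the sketch below is an ephemeral illustration of the fifty-fifty split, not the real-time or purely functional versions discussed next:

class TwoStackDeque:
    # Deque as a front stack and a rear stack; stack tops are list ends.
    # Amortized O(1) per operation, with potential |len(front) - len(rear)|.
    def __init__(self):
        self.front, self.rear = [], []

    def push(self, x):
        self.front.append(x)

    def inject(self, x):
        self.rear.append(x)

    def pop(self):
        if not self.front:                    # deque lives entirely in rear:
            k = (len(self.rear) + 1) // 2     # move its first half to front,
            self.front = self.rear[:k][::-1]  # front element on top
            self.rear = self.rear[k:]
        return self.front.pop()

    def eject(self):
        if not self.rear:
            k = (len(self.front) + 1) // 2
            self.rear = self.front[:k][::-1]
            self.front = self.front[k:]
        return self.rear.pop()

d = TwoStackDeque()
for i in range(4):
    d.inject(i)                               # deque is 0 1 2 3
assert d.pop() == 0 and d.eject() == 3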
The second idea is to use incremental copying to convert this linear-time simulation into a real-time
simulation: as soon as the two stacks become sufficiently unbalanced, recopying to create two
balanced stacks begins. Because the recopying must proceed concurrently with deque operations,
which among other things causes the size of the deque to be a moving target, the details of this
simulation are a little complicated. Hood and Melville [22] first spelled out the details of this
method for the case of a queue; Hood's thesis [21] describes the simulation for a deque. See also
[19, 39]. Chuang and Goldberg [8] give a particularly nice description of the deque simulation.
Okasaki [34] gives a variation of this simulation that uses "memoization" to avoid some of the
explicit stack-to-stack copying; his solution gives persistence but is not strictly functional since
memoization is a side effect.
A completely different way to make a deque persistent is to apply the general mechanism of
Driscoll, et al. [14], but this solution, too, is not strictly functional, and the constant time bound
per deque operation is amortized, not worst-case.
Once catenation is added as an operation, the problem of making stacks or deques persistent
becomes much harder; all the methods mentioned above fail. Kosaraju has obtained a couple of
intriguing results that deserve mention, although they do not solve the problem we consider here.
First [30], he gave a real-time simulation of catenable deques by non-catenable deques. Unfor-
tunately, this solution does not support confluent persistence; in particular, Kosaraju explicitly
disallows self-catenation. His solution is also real-time only for a fixed number of deques; the time
per deque operation increases at least linearly with the number of deques. Second [31], he gave a
real-time, random-access implementation of catenable deques with the "find minimum" operation,
a problem discussed in Section 7. This solution is real-time for a variable number of deques, but it
does not support confluent persistence. Indeed, Kosaraju [31] states, "These ideas might be helpful
in making mindeques confluently persistent."
There are, however, some previous solutions to the problem of making catenable deques fully
persistent. A straightforward use of balanced trees gives a representation of persistent catenable
deques in which an operation on a deque or deques of total size n takes O(log n) time. Driscoll,
Sleator, and Tarjan [13] combined a tree representation with several additional ideas to obtain an
implementation of persistent catenable stacks in which the k-th operation takes O(log log k) time.
Buchsbaum and Tarjan [4] used a recursive decomposition of trees to obtain two implementations of
persistent catenable deques. The first has a time bound of 2^{O(log* k)} and the second a time bound
of O(log* k) for the k-th operation, where log* k is the iterated logarithm, defined by
log^(1) k = log_2 k, log^(i) k = log log^(i-1) k for i > 1, and log* k = min{i | log^(i) k <= 1}. This work motivated
ours.
3 Recursive Slow-down
In this section we describe the key insight that led to our result. Although this insight is not explicit
in our ultimate construction and is not needed to understand it, the idea may be helpful in making
progress on other problems, and for that reason we offer it here.
The spark for our work was an observation concerning the recurrence that gives the time bounds
for the Buchsbaum-Tarjan data structures. This recurrence has the following form:
T(n) = O(1) + c·T(log n),
where c is a constant. An operation on a structure of size n takes a constant amount of time
plus a fixed number of operations on recursive substructures of size log n. In the first version of
the Buchsbaum-Tarjan structure, c is a fixed constant greater than one, and the recurrence gives
the time bound T(n) = 2^{O(log* n)}. In the second version of the structure, c equals one, and the
recurrence gives the time bound T(n) = O(log* n).
But suppose that we could design a structure in which the constant c were less than one. Then
the recurrence would give the bound T(n) = O(1). Indeed, the recurrence T(n) = O(1) + c·T(log n)
gives the bound T(n) = O(1) for any constant c < 1, such as c = 1/2. (Frederickson used
a similar observation to improve the time bound for selection in a min-heap from O(k·2^{log* k}) to
O(k).) Thus we can obtain an O(1) time bound for operations on a data structure if each operation
requires O(1) time plus half an operation on a smaller recursive substructure. We can achieve the
same effect if our data structure requires only one operation on a recursive substructure for every
two operations on the top-level structure. We call this idea recursive slow-down.
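The arithmetic behind this observation is a one-line geometric series: unrolling the recurrence, the work at each successively deeper level is discounted by another factor of c, so if each level costs at most a constant a and 0 <= c < 1, then

    T(n) = a + c·T(log n) <= a + c·a + c^2·a + ··· <= a/(1 − c) = O(1).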
The main new feature in our data structure is the mechanism for implementing recursive slow-
down. Stated abstractly, the basic problem is to allocate work cycles to the levels of a linear
recursion so that the top level gets half the cycles, the second level gets one quarter of the cycles,
the third level gets one eighth of the cycles, and so on. This is exactly what happens in binary
counting. Specifically, if we begin with zero and repeatedly add one in binary, each addition of one
causes a unique bit position to change from zero to one. In every second addition this position is
the one's bit, in every fourth addition it is the two's bit, in every eighth addition it is the four's
bit, and so on.
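The position that flips from 0 to 1 on a given increment is simply the number of trailing one bits of the old counter value. A small Haskell illustration (our own, not part of the simulation itself):

    -- Position of the bit that changes from 0 to 1 when k is incremented:
    -- the number of trailing 1 bits of k. It is 0 for every second
    -- increment, 1 for every fourth, 2 for every eighth, and so on.
    flipPosition :: Int -> Int
    flipPosition k
      | even k    = 0
      | otherwise = 1 + flipPosition (k `div` 2)

Level i of the recursion receives a work cycle exactly on those increments k with flipPosition k == i.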
Of course, in binary counting, each addition of one can change many bits to zero. To obtain
real-time performance, this additional work must be avoided. One can do this by using a redundant
digital representation, in which numbers have more than one representation and a single digit
change is all that is needed to add one. Clancy and Knuth [9] used this idea in an implementation
of finger search trees. Descriptions of such redundant representations as well as other applications
can be found in [2, 9, 28]. The Clancy-Knuth method represents numbers in base two but using
three digits, 0,1, and 2. A redundant binary representation (RBR) of a non-negative number x is
a sequence of digits d_n, d_{n-1}, ..., d_0, each equal to 0, 1, or 2, such that
x = d_n·2^n + d_{n-1}·2^{n-1} + ... + d_0. Such a representation
is in general not unique. We call an RBR regular if, for every j such that d_j = 2, there exists an
i < j such that d_i = 0 and d_k ≠ 2 for i < k < j. In other words, while scanning the digits from
most significant to least significant, after finding a 2 we must find a 0 before finding another 2 or
running out of digits. This implies in particular that d_0 ≠ 2.
To add 1 to a number x represented by a regular RBR, we first add 1 to d_0. The result is an
RBR for x + 1, but one which may not be regular. We restore regularity by finding the least significant
digit d_i that is not a 1; if d_i = 2, we set d_i = 0 and add 1 to d_{i+1}. (If d_i = 0 or there is no such d_i,
the RBR is already regular.)
It is straightforward to show that this method correctly adds 1, and it does so while changing
only a constant number of digits, thus avoiding explicit carry propagation.
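As a sketch of this addition procedure, the following Haskell code performs the two steps just described on digit lists stored least significant first (the encoding and names are our own illustration). The scan over leading 1s is what the data structure later shortcuts with its stack of substacks.

    type RBR = [Int]  -- digits 0, 1, or 2, least significant first

    -- Add 1 at digit position 0 (regularity guarantees d_0 is 0 or 1).
    add1 :: RBR -> RBR
    add1 []       = [1]
    add1 (d : ds) = (d + 1) : ds

    -- Add 1 to a regular RBR: increment the low digit, then fix the
    -- least significant non-1 digit if it is a 2. Only a constant
    -- number of digits change.
    inc :: RBR -> RBR
    inc = fixup . add1
      where
        fixup (1 : ds) = 1 : fixup ds
        fixup (2 : ds) = 0 : add1 ds
        fixup ds       = ds  -- first non-1 digit is 0, or no such digit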
Our work allocation mechanism for lists uses a three-state system, corresponding to the three
digits (0, 1, 2) of the Clancy-Knuth number representation. Instead of digits, we use colors. Each
level of the recursive data structure is green, yellow, or red, with the color based on the state of
the structure at that level. A red structure is bad but can be converted to a green structure at
the cost of degrading the structure one level deeper, from green to yellow or from yellow to red.
We maintain the invariant on the levels that any two red levels are separated by at least one green
level, ignoring intervening yellow levels. The green-yellow-red mechanism applied to an underlying
linear structure suffices to add constant-time catenation to stacks. To handle deques, we must
extend the mechanism to apply to an underlying tree structure. This involves adding another
color, orange. Whereas the green-yellow-red system is a very close analogue of the Clancy-Knuth
number representation, the extended system is more distantly related. We postpone a discussion
of this extension to Section 6, where it is used.
4 Deques without Catenation
In this section we present a real-time, purely functional implementation of deques without cate-
nation. This example illustrates our ideas in a simple setting, and provides an alternative to the
implementation based on a pair of incrementally copied stacks, which was described in Section 2.
In Section 5 we modify the structure to support stacks with catenation. (We add catenate as
an operation but remove eject.) Finally, in Section 6 we modify the structure to support all the
catenable deque operations. This last step involves extending the work allocation mechanism as
mentioned at the end of Section 3. Recall that the operations possible on a deque d are push(x,d),
pop(d), inject(x,d), and eject(d). Here and in subsequent sections we say that a data structure is
over a set A if it stores elements from A.
4.1 Representation
We represent a deque by a recursive structure that is built from bounded-size deques called buffers.
Each buffer can hold up to five elements. Buffers are of two kinds: prefixes and suffixes. A non-empty
deque d over a set A is represented by an ordered triple consisting of a prefix prefix(d) of
elements of A, a child deque child(d) whose elements are ordered pairs of elements of A, and a suffix
suffix(d) of elements of A. The order of elements within d is the one consistent with the orders
of all of its component parts. The child deque child(d), if non-empty, is represented in the same
way. Thus the structure is recursive and unwinds linearly. We define the descendants child^i(d)
of deque d in the standard way, namely child^0(d) = d and child^{i+1}(d) = child(child^i(d))
if child^i(d) is non-empty.
Observe that the elements of d are just elements of A, the elements of child(d) are pairs of
elements of A, the elements of child(child(d)) are pairs of pairs of elements of A, and so on. One
can think of each element of child^i(d) as being a complete binary tree of depth i, with elements
of A at its 2^i leaves. One can also think of the entire structure representing d as a stack (of d
and its descendants), each element of which is a prefix-suffix pair. All the elements of d are stored
in the prefixes and suffixes at the various levels of this structure, grouped into binary trees of the
appropriate depths: level i contains the prefix and suffix of child^i(d). See Figure 4.1.
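The shape of this representation can be written down as a nested Haskell data type (a sketch of our own; the size bounds on buffers and the substack machinery described below are elided). Note how the element type doubles at each level: a deque of elements of A has a child deque whose elements are pairs.

    -- A buffer holds zero to five elements; the size invariant is elided.
    type Buffer a = [a]

    -- A non-empty deque has a prefix, a child deque of pairs, and a suffix.
    data Deque a = Empty
                 | Deque { prefix :: Buffer a
                         , child  :: Deque (a, a)  -- pairs of elements
                         , suffix :: Buffer a }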
Because of the pairing, we can bring two elements up to level i by doing one pop or eject at level
i + 1. Similarly, we can move two elements down from level i by doing one push or inject at level
i + 1. This two-for-one payoff gives the recursive slow-down that leads to real-time performance.
To obtain this real-time performance, we must guarantee that each top-level deque operation
requires changes to only a constant number of levels in the recursive structure. For this reason we
impose a regularity constraint on the structure. We assign each buffer, and each deque, a color,
either green, yellow, or red. A buffer is green if it has two or three elements, yellow if one or four,
and red if zero or five. Observe that we can add one element to or delete one element from a green
or yellow buffer without violating its size constraint: a green buffer stays green or becomes yellow,
a yellow buffer becomes green or red.
We order the colors red < yellow < green; red is bad, green is good. A "higher" buffer color
(The depth of a complete binary tree is the number of edges on a root-to-leaf path.)
Figure 4.1: Representation of a deque. Square brackets denote the deque and its descendant deques;
parentheses denote buffers. Curly brackets denote expansion of a deque into its component parts.
Numbers denote levels of deques. Triangles at level three denote pairs of pairs of pairs (equivalently,
complete binary trees of depth three).
indicates that more insertions or deletions on the buffer are possible before its size is outside the
allowed range. We define the color of a non-empty deque to be the minimum of the colors of its
prefix and suffix, unless its child and one of its buffers are empty, in which case the color of the
deque is the color of its nonempty buffer.
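In terms of the hypothetical types sketched above, the color rules read directly as a pair of functions; again this is only an illustration:

    data Color = Red | Yellow | Green deriving (Eq, Ord, Show)

    -- Red < Yellow < Green, matching the ordering used in the text.
    bufferColor :: Buffer a -> Color
    bufferColor b = case length b of
      0 -> Red
      1 -> Yellow
      4 -> Yellow
      5 -> Red
      _ -> Green  -- two or three elements

    -- Defined for non-empty deques only.
    dequeColor :: Deque a -> Color
    dequeColor (Deque p Empty s)
      | null p = bufferColor s  -- child and one buffer empty:
      | null s = bufferColor p  -- color of the nonempty buffer
    dequeColor (Deque p _ s) = min (bufferColor p) (bufferColor s)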
Our regularity constraint on a deque d is a constraint on the colors of the sequence of descendant
deques d, child(d), child^2(d), .... We call d semi-regular if between any two red deques in this
sequence there is a green deque, ignoring intervening yellows. More formally, d is semi-regular if,
for any two red deques child^i(d) and child^j(d) with i < j, there is a k with i < k < j such that
child^k(d) is green. We call d regular if d is semi-regular and if, in addition, the first non-yellow
deque (if any) in the sequence is green. Observe that if d is regular or semi-regular, then child(d),
and indeed child^i(d) for i > 0, is semi-regular. Furthermore, if d is semi-regular and red, then
child(d) is regular.
Our strategy for obtaining real-time performance is to maintain the constraint that any top-level
deque is regular, except possibly in the middle of a deque operation, when the deque can
temporarily become semi-regular. A regular deque has a top level that is green or yellow, which
means that any deque operation can be performed by operating on the appropriate top-level buffer.
This may change the top level from green to yellow or from yellow to red. In either of these cases
the deque may no longer be regular but only semi-regular; it will be semi-regular if the topmost
non-yellow descendant deque is now red. We restore regularity by changing such a red deque to
green, in the process possibly changing its own child deque from green to yellow or from yellow to
red or green. Observe that such color changes, if we can effect them, restore regularity. This process
corresponds to addition of 1 in the redundant binary numbering system discussed in Section 3.
In the process of changing a red deque to green, we will not change the elements it contains or
their order; we merely move elements between its buffers and the buffers of its child. Thus, after
making such a change, we can obtain a top-level regular deque merely by restoring the levels on
top of the changed deque.
The topmost red deque may be arbitrarily deep in the recursive structure, since it can be
separated from the top level by many yellow deques. To achieve real-time performance, we need
constant-time access to the topmost red deque. For this reason we do not represent a deque in the
obvious way, as a stack of prefix-suffix pairs. Instead, we break this stack up into substacks. There
is one substack for the top-level deque and one for each non-yellow descendant deque not at the top
level. Each substack consists of a top-level or non-yellow deque and all consecutive yellow proper
descendant deques. We represent the entire deque by a stack of substacks of prefix-suffix pairs
using this partition into substacks. An equivalent pointer-based representation is to use a node
with four pointers for each non-empty descendant deque d. Two of the pointers are to the prefix
and suffix at the corresponding level. One pointer is to the node for the child deque if this deque
is non-empty and yellow. One pointer is to the node for the nearest non-yellow proper descendant
deque, if such a deque exists and d itself is non-yellow or top-level. See Figure 4.2.
A single deque operation will require access to at most the top three substacks, and to at most
the top two elements in any such substack. The color changes caused by a deque operation produce
only minor changes to the stack partition into substacks, changes that can be made in constant
time. In particular, changing the color of the top-level deque does not affect the partition into
Figure 4.2: Pointer representation of the stack of substacks structure. Horizontal lines denote buffers.
Letters indicate deque colors. Left pointers link elements within substacks; right pointers link tops
of substacks. Null pointers are denoted by ∅.
substacks. Changing the topmost red deque to green and its child from yellow to non-yellow splits
one substack into its first element, now a new substack, and the rest. This is just a substack pop
operation. Changing the topmost red deque to green and its child from green to yellow merges a
singleton substack with the substack under it. This is just a substack push operation.
4.2 Deque Operations
All that remains is to describe the details of the buffer manipulations and verify that they produce
the claimed color changes. To perform a push or pop, we push or pop the appropriate element
onto or off the top-level prefix, unless this prefix and the child deque are empty, in which case we
do the same to the top-level suffix. Inject and eject are symmetric. Because the original deque is
regular, the top level is originally green or yellow, and any such operation can be performed without
overflowing or underflowing the buffer (unless we try to pop or eject from an already empty deque).
The top level may change from green to yellow, or from yellow to red, which may make the new
deque semi-regular.
We restore a semi-regular deque (that is not regular) to regular as follows. Let i be the topmost
red level. Let P_i, P_{i+1}, S_{i+1}, and S_i be the i-th and (i+1)-st level prefixes and the (i+1)-st and i-th level
suffixes, respectively. Viewing elements from the perspective of level i, we call the elements of P_{i+1}
and S_{i+1} pairs, since each is a pair of level-i elements. Note that if either P_{i+1} or S_{i+1} is empty,
then so is the deque at level i+2, since level i+1 cannot be red. Apply the appropriate one of the following three
cases:

Two-Buffer Case: |P_{i+1}| + |S_{i+1}| ≥ 2. If P_{i+1} is empty, pop a pair from S_{i+1} and inject it into
P_{i+1}; if S_{i+1} is empty, eject a pair from P_{i+1} and push it onto S_{i+1}. If |P_i| ≥ 4, eject two elements
from P_i, pair them, and push the pair onto P_{i+1}. If |S_i| ≥ 4, pop two elements from S_i, pair
them, and inject the pair into S_{i+1}. If |P_i| ≤ 1, pop a pair from P_{i+1} and inject its two elements
individually into P_i. If |S_i| ≤ 1, eject a pair from S_{i+1} and push its two elements individually onto S_i.
If level i+1 is the bottom-most level and P_{i+1} and S_{i+1} are both now empty, eliminate level i+1.

One-Buffer Case: |P_{i+1}| + |S_{i+1}| ≤ 1 and |P_i| + |S_i| ≥ 2. If level i is the bottom-most level,
create a new, empty level i+1. If |S_{i+1}| = 1, pop the pair from S_{i+1} and inject it into P_{i+1}. If
|P_i| ≥ 4, eject two elements from P_i, pair them, and push the pair onto P_{i+1}. If |S_i| ≥ 4, pop two
elements from S_i, pair them, and inject the pair into P_{i+1}. If |P_i| ≤ 1, pop a pair from P_{i+1} and
inject its two elements into P_i. If |S_i| ≤ 1, eject a pair from P_{i+1} and push its two elements onto
S_i. If P_{i+1} and S_{i+1} are both now empty, eliminate level i+1.

No-Buffer Case: |P_{i+1}| + |S_{i+1}| ≤ 1 and |P_i| + |S_i| ≤ 1. Levels i and i+1 together
contain 2 or 3 level-i elements, two of which are paired in P_{i+1} or S_{i+1}. Move all these elements to
P_i, and eliminate level i+1 if it exists.
Note: Even though each deque operation is only on one end of the deque, the regularization
procedure operates on both ends of the descendant deques concurrently.
Theorem 4.1 Given a regular deque, the method described above will perform a push, pop, inject,
or eject operation in O(1) time, resulting in a regular deque.
Proof. The only non-trivial part of the proof is to verify that the regularization procedure is
correct; it is then straightforward to verify that each deque operation is performed correctly and
that the time bound is O(1), given the stack-of-substacks representation.
If the two-buffer case occurs, both P_{i+1} and S_{i+1} are non-empty and level i+1 is green or
yellow after the first two steps. (Level i+1 starts green or yellow by semi-regularity, and making
both P_{i+1} and S_{i+1} non-empty cannot make level i+1 red.) The remaining steps make level i
green and change the sizes of P_{i+1} and S_{i+1} by at most one each. The only situation in which
level i+1 can start green and end red is when |P_{i+1}| + |S_{i+1}| = 2 initially
and |P_{i+1}| + |S_{i+1}| = 0 finally. But in this case level i+1 must be the bottom-most level, and
it is eliminated at the end of the case. Thus this case makes the color changes needed to restore
regularity.

If the one-buffer case occurs, then since level i+1 cannot initially be red, it or level i must be
the bottom-most level. This case makes level i green and makes level i+1 green, yellow, or empty,
in which case it is eliminated. Thus this case, also, makes the color changes needed to restore
regularity.
If the no-buffer case occurs, P_{i+1} or S_{i+1} must contain a pair, because otherwise level i+1 would
be empty, hence non-existent, and level i would be yellow if non-empty, which contradicts the fact
that level i is the topmost red level. Also at most one of P_i and S_i can contain an element. It
follows that this case, too, restores regularity. 2
The data structure described above can be simplified if only a subset of the four operations push,
pop, inject, eject is allowed. For example, if push is not allowed, then prefixes can be restricted to
be of size 0 to 3, with 0 being red, 1 yellow, and 2 or 3 green. Similarly, if eject is not allowed, then
suffixes can be restricted to be of size 0 to 3, with 0 or 1 being green, 2 yellow, and 3 red. Thus
we can represent a queue (inject and pop only) with all buffers of size at most 3. Alternatively, we
can represent a steque by a pair consisting of a stack and a queue. All pushes are onto the stack
and all injects into the queue. A pop is from the stack unless the stack is empty, in which case it
is from the queue.
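A sketch of this last observation, reusing the two-list Queue from the example in Section 2 (names again ours):

    -- A steque (no eject) as a stack plus a queue: pushes go to the
    -- stack, injects to the queue, and pops come from the stack unless
    -- it is empty.
    data Steque a = Steque [a] (Queue a)

    pushS :: a -> Steque a -> Steque a
    pushS x (Steque st q) = Steque (x : st) q

    injectS :: a -> Steque a -> Steque a
    injectS x (Steque st q) = Steque st (injectQ x q)

    popS :: Steque a -> Maybe (a, Steque a)
    popS (Steque (x : st) q) = Just (x, Steque st q)
    popS (Steque [] q)       = fmap (\(x, q') -> (x, Steque [] q')) (popQ q)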
5 Real-Time Catenation
Our next goal is a deque structure that supports fast catenation. Since catenable steques (deques
without eject) are easier to implement than catenable deques, we discuss catenable steques here, and
delay our discussion of a structure that supports the full set of operations to Section 6. Throughout
the rest of the paper we refer to a catenable steque simply as a steque.
5.1 Representation
Our representation of steques is like the structure of Section 4, with two major differences in the
component parts. As in Section 4, we use buffers of two different kinds, prefixes and suffixes. Unlike
Section 4, each buffer is a noncatenable steque with no upper bound on its size. Such a steque
can be implemented using either the method of Section 4 or the stack-reversing method sketched
in Section 2. As a possible efficiency enhancement, we can store with each buffer its size, although
this is not in fact necessary to obtain constant-time operations. We require each prefix to contain
at least two elements. There is no lower bound on the size of a suffix, and indeed a suffix can be
empty.
The second difference is in the components of the pairs stored in the child steque. We define a
pair over a set A recursively as follows: a pair over A consists of a prefix of elements of A and a
possibly empty steque of pairs over A. We represent a nonempty steque s over A either as a suffix
suffix(s) of elements of A, or as a triple consisting of a prefix prefix(s) of elements of A, a child steque
child(s) of pairs over A, and a suffix suffix(s) of elements of A. The child steque, if non-empty, is
represented in the same way, as is each non-empty steque in one of the pairs in child(s). The order
of elements within a steque is the one consistent with the order in each of the component parts.
See Figure 5.1.
Figure 5.1: Partial expansion of the representation of a steque. Square brackets denote catenable
steques; horizontal lines denote buffers. Curly brackets denote expansion of a steque into its
component parts. Arrows denote membership. Circles denote elements of the base set. Numbers
denote levels of steques.
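The mutual recursion between steques and pairs can again be captured by a nested Haskell type (a sketch under the same conventions as the Section 4 example, reusing its Buffer synonym; buffer bounds elided):

    -- A pair over A: a prefix of elements of A and a possibly empty
    -- steque of pairs over A.
    data SPair a = SPair (Buffer a) (CatSteque (SPair a))

    -- A non-empty steque is a suffix alone, or a prefix, a child steque
    -- of pairs over A, and a suffix.
    data CatSteque a = EmptyS
                     | SuffixOnly (Buffer a)
                     | STriple (Buffer a) (CatSteque (SPair a)) (Buffer a)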
This structure is doubly recursive; each steque in the structure is either a top-level steque, the child
of another steque, or the second component of a pair that is stored in another steque. We define
the level of a steque in this structure as follows. A top-level steque has level 0. A steque has level
it is the child of a level-i steque or it is in a pair that is stored in a level-(i
Observe that every level-i steque has the same type of elements. Namely, the elements of level-0
steques are elements of A, the elements of level-1 steques are pairs over A, the elements of level-2
steques are pairs over pairs over A, and so on. Steques to be catenated need to have the same level;
otherwise, their elements have different types.
Because of the extra kind of recursion as compared to the structure of Section 4, there is not
just one sequence of descendent steques, but many: the top-level steque, and each steque stored in
a pair in the structure, begins such a sequence, consisting of a steque s, its child, its grandchild,
and so on. Among these descendants, the only one that can be represented by a suffix only (instead
of a prefix, child, suffix triple) is the last one.
We may order the steque operations in terms of their implementation complexity as follows:
push or inject is simplest, catenate next-simplest, and pop most-complicated. Each push or inject
is a simple operation on a single buffer, because buffers can grow arbitrarily large, which means
that overflow is not a problem. We can perform a catenate operation as just a few push or inject
operations, because of the extra kind of recursion. A pop is the most complicated operation. It can
require a catenate, and it may also threaten buffer underflow, which we prevent by a mechanism
like that used in Section 4.
Each prefix has a color, red if the prefix contains two elements, yellow if three, and green if four
or more. Each nonempty steque in the structure also has a color, which is the color of its prefix if
it has one, and otherwise green. We call a steque s semi-regular if, between any pair of red steques
in a descendent sequence within s, there is a green steque, ignoring intervening yellows. We call a
steque s regular if it is semi-regular and if, in addition, the first non-yellow steque in the sequence
s, child(s), child^2(s), ..., if any, is green. As in Section 4, we maintain the invariant that any
top-level steque is regular, except possibly in the middle of a steque operation, when it may be
temporarily semi-regular. Observe that if s is regular, then child(s) is semi-regular, and that if s is
semi-regular, a steque having a green prefix and s as its child steque is regular.
Our representation of steques corresponds to that in Section 4. Namely, we represent each
descendent sequence as a stack of substacks by breaking the descendent sequence into subsequences,
each beginning with the first steque or a non-yellow steque and containing all consecutively following
yellow steques. Each element of a substack is a pair consisting of the prefix and suffix of the
corresponding steque (with a null indicator for a nonexistent prefix). Each element of a prefix or
suffix is an element of the base set if the prefix or suffix is at level 0, or a pair of the appropriate
type if the prefix or suffix is deeper in the structure. See Figure 5.2.
Figure 5.2: Pointer representation of the substack decomposition of part of the partially expanded
steque in Figure 5.1. The sequences of descendants are shown. Letters denote steque colors. Left
pointers link the elements within substacks; right pointers link the tops of substacks. Null pointers
are denoted by ∅.
5.2 Steque Operations
As noted above, push and inject operations are the simplest steque operations to implement: each
changes only a single buffer, increasing it in size by one. Specifically, to inject an element x into a
steque s, we inject x into suffix(s). To push an element x onto a steque s, we push x onto prefix(s)
unless s has no prefix, in which case we push x onto suffix(s). A push may change the color of
the top-level steque from red to yellow or from yellow to green, but this only helps the regularity
constraint and it does not change the substack decomposition.
A catenate operation is somewhat more complicated but consists of only a few push and inject
operations. Specifically, to form the catenation s 3 of two steques s 1 and s 2 , we apply the appropriate
one of the following three cases:
Case 1: s_1 is a triple. If suffix(s_1) contains at least two elements, inject the pair (suffix(s_1), ∅)
into child(s_1). (This converts suffix(s_1) into a prefix.) Otherwise, if suffix(s_1) contains one element,
push this element onto s_2. If s_2 is a triple, inject the pair (prefix(s_2), child(s_2)) into child(s_1). Let
s_3 be the triple (prefix(s_1), child(s_1), suffix(s_2)).

Case 2: s_1 is a suffix only and s_2 is a triple. If |suffix(s_1)| ≥ 4, push the pair (prefix(s_2), ∅) onto
child(s_2), and let the result s_3 be the triple (suffix(s_1), child(s_2), suffix(s_2)). (This makes suffix(s_1)
into a green prefix.) Otherwise, pop the at most three elements on suffix(s_1), push them in the
opposite order onto prefix(s_2), and let s_3 be (prefix(s_2), child(s_2), suffix(s_2)).

Case 3: Both s_1 and s_2 are suffixes only. If |suffix(s_1)| ≥ 4, let s_3 be (suffix(s_1), ∅, suffix(s_2)).
(This makes suffix(s_1) into a green prefix.) Otherwise, pop the at most three elements on suffix(s_1),
push them in the opposite order onto suffix(s_2), and let s_3 be suffix(s_2).
Lemma 5.1 If s 1 and s 2 are semi-regular, then s 3 is semi-regular. If in addition s 1 is regular,
then s 3 is regular.
Proof. In Case 3, the only steque in s 3 is the top-level one, which is green. Thus s 3 is regular.
In Case 2, the push onto child(s 2 ), if it happens, preserves the semi-regularity of child(s 2 ), and the
prefix of the result steque s 3 is green. Thus s 3 is regular. In Case 1, both child(s 1 ) and child(s 2 )
are semi-regular. The injections into child(s_1) preserve its semi-regularity. Steque s_3 has the same
prefix as s_1 and the same child steque as s_1, save possibly for one or two injects. Thus s_3 is
semi-regular if s_1 is, and is regular if s_1 is. 2
A pop is the most complicated steque operation. To pop a steque that is a suffix only, we merely
pop the suffix. To pop a steque that is a triple, we pop the prefix. This may result in a steque
that is no longer regular, but only semi-regular. We restore regularity by modifying the nearest
red descendant steque, say s 1 , of the top-level steque, as follows. If child(s 1 ) is empty, pop the
two elements on prefix(s 1 ), push them in the opposite order onto suffix(s 1 ), and represent s 1 by its
suffix only. Otherwise, pop a pair, say (p, s_2), from child(s_1); pop the two elements on prefix(s_1)
and push them in the opposite order onto p; catenate s_2 and child(s_1) to form s_3; and replace s_1
by the triple (p, s_3, suffix(s_1)).
Lemma 5.2 The restoration method described above converts a semi-regular steque s to regular.
Thus the implementation of pop is correct.
Proof. Let s 1 be the nearest red descendant steque of s. If child(s 1 ) is empty, s 1 is replaced by
a green steque with no child, and the result is a regular steque. Suppose child(s_1) is non-empty.
Steque child(s_1) before the pop is regular, because it is semi-regular since s_1 is semi-regular, and since
s_1 is red the nearest non-yellow descendant of child(s_1) must be green. Hence child(s_1) is at least
semi-regular after a pop. The triple (p, s_3, suffix(s_1)) replacing s_1 has p green and s_3 semi-regular,
which means the resulting steque is regular. 2
Theorem 5.1 A push, pop, or inject on a regular steque takes O(1) time and results in a regular
steque. A catenation of two regular steques takes O(1) time and results in a regular steque.
Proof. The O(1) time bound per steque operation is obvious if the stack of substacks representation
is used. Regularity is obvious for push and inject, is true for catenate by Lemma 5.1, and for pop
by Lemma 5.2. 2
For an alternative way to build real-time catenable steques using noncatenable stacks as buffers,
see [25].
6 Catenable Deques
Finally, we extend the ideas presented in the previous two sections to obtain a data structure that
supports the full set of deque operations, namely push, pop, inject, eject, and catenate, each in O(1)
time. We omit certain definitions that are obvious extensions of those in previous sections.
A common feature of the two data structures presented so far is an underlying linear skeleton
(the sequence of descendants). Our structure for catenable deques replaces this linear skeleton by
a binary-tree skeleton. This seems to be required to efficiently handle both pop and eject. The
branching skeleton in turn requires a change in the work-allocation mechanism, which must funnel
computation cycles to all branches of the tree. We add one color, orange, to the color scheme,
and replace the two-beat rhythm of the green-yellow-red mechanism by a three-beat rhythm. We
obtain an O(1) time bound per deque operation essentially because 2/3 < 1; the "2" corresponds
to the branching factor of the tree structure, and the "3" corresponds to the rhythm of the work
cycle. The connection to redundant numbering systems is much looser than for the green-yellow-
red scheme used in Sections 4 and 5. Nevertheless, we are able to show directly that the extended
mechanism solves our problem.
6.1 Representation
Our representation of deques uses two kinds of buffers: prefixes and suffixes. Each buffer is a
non-catenable deque. We can implement the buffers either as described in Section 4 or by using
the incremental stack-reversing method outlined in Section 2. Henceforth by "deque" we mean a
catenable deque unless we explicitly state otherwise. As in Section 5, we can optionally store with
each buffer its size, which may provide a constant-factor speedup.
We define a triple over a set A recursively as a prefix of elements of A, a possibly empty deque
of triples over A, and a suffix of elements of A. Each triple in the deque we call a stored triple. We
represent a non-empty deque d over A either by one triple over A, called an only triple, or by an
ordered pair of triples over A, the left triple and the right triple. The deques within each triple are
represented recursively in the same way. The order of elements within a deque is the one consistent
with the order in each of the component parts.
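The doubly recursive shape here can likewise be rendered as a nested Haskell type (our sketch, reusing the Buffer synonym; the size constraints below are elided), with a deque represented by one or two triples and each triple carrying a deque of stored triples:

    -- A triple over A: a prefix, a possibly empty deque of stored
    -- triples over A, and a suffix.
    data Triple3 a = Triple3 (Buffer a) (CatDeque (Triple3 a)) (Buffer a)

    -- A non-empty deque is represented by one (only) triple or by a
    -- left and a right triple.
    data CatDeque a = EmptyD
                    | Only (Triple3 a)
                    | TwoTriples (Triple3 a) (Triple3 a)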
We define a parent-child relation on the triples as follows. If t = (prefix, deque, suffix) is a
triple with deque ≠ ∅, the children of t are the one or two triples that make up deque. We define
ancestors and descendants in the standard way. Under this relation, the triples group into trees,
each of whose nodes is unary or binary. Each top-level triple and each stored triple is the root of
such a tree, and a deque is represented by the one or two such trees rooted at the top-level triples.
See Figure 6.1.
Figure 6.1: Partial expansion of the representation of a catenable deque. Conventions are as in
Figure 5.1, with two triples comprising a deque separated by a comma.
There are four different kinds of triples: stored triples, only triples, left triples, and right triples.
We impose size constraints on the buffers of a triple depending upon what kind it is. If t = (p, d, s)
is a stored triple, we require that both p and s contain at least three elements unless d = ∅ and one
of the buffers is empty, in which case the other buffer must contain at least three elements. If t is
an only triple, we require that both p and s contain at least five elements, unless d = ∅ and one of the
buffers is empty, in which case the other buffer can contain any non-zero number of elements. If t
is a left triple, we require that p contain at least five elements and s exactly two. Symmetrically, if
t is a right triple, we require that s contain at least five elements and p exactly two.
We assign colors to the triples based on their types and their buffer sizes, as follows. Let
t = (p, d, s) be a triple. If t is a stored triple or if d = ∅, t is green. If t is a left triple and d ≠ ∅, t
is green if p contains at least eight elements, yellow if p contains seven, orange if six, and red if five.
Symmetrically, if t is a right triple and d ≠ ∅, t is green if s contains at least eight elements, yellow
if seven, orange if six, and red if five. If t is an only triple with d ≠ ∅, t is green if both p and s
contain at least eight elements, yellow if one contains seven and the other at least seven, orange if
one contains six and the other at least six, and red if one contains five and the other at least five.
The triples are grouped into trees by the parent-child relation. We partition these trees into
paths as follows. Each yellow or orange triple has a preferred child, which is its left child or only
child if the triple is yellow and its right child or only child if the triple is orange. The preferred
children define preferred paths, each starting at a triple that is not a preferred child and passing
through successive preferred children until reaching a triple without a preferred child. Thus each
preferred path consists of a sequence of zero or more yellow or orange triples followed by a green
or red triple. (Every triple with no children is green.) We assign each preferred path a color, green
or red, according to the color of its last triple.
We impose a regularity constraint on the structure, like those in Sections 4 and 5 but a little
more complicated. We call a deque semi-regular if both of the following conditions hold:
(1) Every preferred path that starts at a child of a red triple is a green path.
(2) Every preferred path that starts at a non-preferred child of an orange triple is a green path.
This definition implies that if a deque is semi-regular, then all the deques in its constituent
triples are semi-regular. We call a deque regular if it is semi-regular and if, in addition, each
preferred path that starts at a top-level triple (one of the one or two representing the entire deque)
is a green path. We maintain the invariant that any top-level deque is regular, except possibly
in the middle of a deque operation, when it may temporarily be only semi-regular. Note that an
empty deque is regular.
We need a representation of the trees of triples that allows us to shortcut preferred paths. To
this end, we introduce the notions of an adopted child and its adoptive parent. Every green or red
triple that is on a preferred path of at least three triples is an adopted child of the first triple on this
path, which is its adoptive parent. That is, there is an adoptive parent-adopted child relationship
between the first and last triples on each preferred path containing at least three triples.
We define the compressed forest by the parent-child relation on triples, except that each adopted
child is a child of its adoptive parent instead of its natural parent. In the compressed forest, each
triple has at most three children, one of which may be adopted. We represent a deque by its
compressed forest, with a node for each triple containing the prefix and suffix of the triple and
pointers to the nodes representing its child triples. See Figure 6.2.
The operations that we describe in the next section rely on the following property of the compressed
forest representation. Given the node of a triple t = (p, d, s), we can extract in constant
time a pointer to a compressed forest representation for d when t is a top-level triple, a stored
triple, or the color of t is either red or green.
6.2 Deque Operations
The simplest deque operations are push and inject. Next is catenate, which may require a push or
an inject or both. The most complicated operations are pop and eject, which can violate regularity
and may force a repair deep in the forest of triples (but shallow in the compressed forest).
We begin by describing push; inject is symmetric. Let d be a deque onto which we wish to push
an element. If d is empty, we create a new triple t to represent the new deque, with one nonempty
buffer containing the pushed element. If d is nonempty, let t = (p_1, d_1, s_1) be its left triple or
its only triple. If p_1 is nonempty, we push the new element onto p_1; otherwise, we push the new
element onto s_1.
Figure 6.2: Top-level trees in the compressed forest representation of a deque. Letters denote
triples of the corresponding colors. Dashed arrows denote adoptive parent-adopted child relationships
that replace the natural parent-child relationships marked by hatched arrows. The complete
compressed forest representation (not shown) would include the buffers of the triples and the lower-level
compressed trees rooted at the stored triples.
Lemma 6.1 A push onto a semi-regular deque produces a semi-regular deque; a push onto a regular
deque produces a regular deque.
Proof. If the push does not change the color of t, the lemma is immediate. If the push does
change the color of t, it must be from yellow to green, from orange to yellow, or from red to orange.
(Red-to-orange can only happen if the original deque is semi-regular but not regular.) The yellow-
to-green case obviously preserves both semi-regularity and regularity. In the orange-to-yellow case,
let u be the non-preferred child of t before the push if t has a non-preferred child. If u exists,
semi-regularity implies that the preferred path containing u is a green path. The push adds t to
the front of this path. This means that the push preserves both semi-regularity and regularity.
If u does not exist, then the push does not change any of the preferred paths but only changes t
from orange to yellow. In this case also the push preserves both semi-regularity and regularity. In
the red-to-orange case, before the push every child of t starts a preferred path that is green, which
means that after the push the non-preferred child of t, if it exists, starts a preferred path that is
green. Thus the push preserves semi-regularity. 2
Note that the only effect a push has on the preferred path decomposition is to add t to or
delete t from the front of a preferred path (or both). This means that the compressed forest can
be updated in O(1) time during a push.
Next, we describe catenate. Let d and e be the two deques to be catenated. Assume both are
nonempty; otherwise the catenate is trivial. To catenate d and e, we apply the appropriate one of
the following four cases:
Case 1. All the buffers in the two, three, or four top-level triples of d and e are nonempty. The
new deque will consist of two triples t and u, with t formed from the top-level triple or triples of d,
and u formed from the top-level triple or triples of e. There are four subcases in the formation of t.
Subcase 1a. Deque d consists of two triples, t_1 = (p_1, d_1, s_1) with d_1 ≠ ∅, and t_2 = (p_2, d_2, s_2).
Combine s_1 and p_2 (each containing exactly two elements) into a single buffer p_3.
Eject the last two elements from s_2 and add them to a new buffer s_3; let s_2' be the rest of s_2.
Inject (p_3, d_2, s_2') into d_1 to form d_1'. Let t = (p_1, d_1', s_3).

Subcase 1b. Deque d consists of two triples t_1 = (p_1, d_1, s_1) and t_2 = (p_2, d_2, s_2), with d_1 = ∅. Inject
the elements in s_1 and p_2 into p_1 to form p_1'. Replace the representation of d by the only
triple (p_1', d_2, s_2), and apply Subcase 1c or 1d as appropriate.

Subcase 1c. Deque d consists of an only triple t_1 = (p_1, d_1, s_1) with d_1 ≠ ∅. Eject the
last two elements from s_1 and add them to a new buffer s_2. Let the remainder of s_1 be s_1'.
Form a new triple (∅, ∅, s_1') and inject it into d_1 to form d_1'. Let t = (p_1, d_1', s_2).

Subcase 1d. Deque d consists of an only triple t_1 = (p_1, ∅, s_1). If s_1 contains at most
eight elements, move all but the last two elements of s_1 to p_1 to form p_1', let the remaining
two elements of s_1 form s_1', and let t = (p_1', ∅, s_1'). Otherwise (s_1 contains more than eight
elements), move the first three elements on s_1 into p_1 to form p_1', move the last two elements
on s_1 into a new buffer s_2, and let the remainder of s_1 be s_1'. Push the triple (∅, ∅, s_1') onto
an empty deque to form the deque d_2. Let t = (p_1', d_2, s_2).

Operate symmetrically on e to form u.

Case 2. Deque d consists of an only triple t_1 = (p_1, d_1, s_1) with only one nonempty buffer, and all
the buffers in the top-level triple or triples of e are nonempty. Let t_2 = (p_2, d_2, s_2) be the left or
only triple of e. We combine t_1 and t_2 to form a new triple t, which is the left or only triple of the
new deque; the right triple of e, if it exists, is the right triple of the new deque. To form t, let p_3
be the nonempty one of p_1 and s_1. If p_3 contains fewer than eight elements, push all these elements
in the opposite order onto p_2, and let t be the resulting triple. Otherwise, form a triple (p_2, ∅, ∅), push it onto d_2 to
form d_2', and let t = (p_3, d_2', s_2).

Case 3. Deque e consists of an only triple with only one nonempty buffer, and all the buffers in
the top-level triple or triples of d are nonempty. This case is symmetric to Case 2.

Case 4. Deques d and e each consist of an only triple with a single nonempty buffer. Let p be
the nonempty buffer of d and s the nonempty buffer of e. If either p or s contains fewer than eight
elements, combine them into a single buffer b and let the new deque consist of the only triple with
b as its single nonempty buffer. Otherwise, let the new deque consist of the only triple (p, ∅, s).
Lemma 6.2 A catenation of two semi-regular deques produces a semi-regular deque. A catenation
of two regular deques produces a regular deque.
Proof. Consider Case 1. We shall show that, in each subcase, triple t and its descendants satisfy
the semi-regularity or regularity constraints as appropriate. The symmetric argument applies to u,
which gives the lemma for Case 1.
In Subcase 1d, triple t is green and either has a green child and no grandchildren or no child at
all. In either case t satisfies the regularity constraints. Consider Subcase 1c. Deque d_1' is formed
from a semi-regular deque d_1 by an injection and hence is semi-regular by Lemma 6.1. The color
of triple t is at least as good as the color of triple t_1, since the color of t
depends only on the size of p_1, whereas the color of t_1 depends on the minimum of the sizes of p_1
and s_1. We must consider several cases, depending on the color of t_1 and on whether we are trying
to verify regularity or only semi-regularity. If t_1 is green, t and its descendants satisfy the regularity
constraints. If t_1 is red, the semi-regularity of d implies that d_1 and hence d_1' is regular, and t and
its descendants satisfy the semi-regularity constraints. If t_1 is orange and d is regular, then d_1 and
hence d_1' must be regular, and t and its descendants satisfy the regularity constraints. If t_1 is orange
and d is only semi-regular, then the non-preferred child of t_1, if it exists, starts a green path. The
corresponding non-preferred child of t also starts a green path, by an argument like that in Lemma
6.1. This means that t and its descendants satisfy the semi-regularity constraints. If t_1 is yellow,
the semi-regularity of d_1' implies that t and its descendants satisfy the semi-regularity constraints.
Finally, if t_1 is yellow and d is regular, then the preferred child of t_1 is on a green path, as is the
corresponding child of t, again by an argument like that in Lemma 6.1. Thus t and its descendants
satisfy the regularity constraints.

Subcase 1b creates a one-triple representation of d that is semi-regular if the original representation
is and regular if the original one is. Subcase 1b is then followed by an application of Subcase 1c
or 1d as appropriate. In this case, too, triple t and its descendants satisfy the semi-regularity or
regularity constraints as appropriate.

The last subcase is Subcase 1a. As in Subcase 1c, the argument depends on the color of t_1 and on
whether we are trying to verify regularity or semi-regularity. In this case, t_1 and t have
exactly the same color. Deque d_1' is semi-regular by Lemma 6.1, since d_1
and d_2 are semi-regular. The remainder of the argument is exactly as in Subcase 1c.

Consider Case 2. If p_3 contains fewer than eight elements, then t is formed by doing up to seven
pushes onto t_2, so t satisfies regularity or semi-regularity by Lemma 6.1. Otherwise, deque d_2' is
formed from deque d_2 by doing a push, and triple t is either green or has the same color as triple
t_2. The remainder of the argument is exactly as in Subcase 1c.

Case 3 is symmetric to Case 2. Case 4 obviously preserves both semi-regularity and regularity. 2

A catenate changes the colors and compositions of triples in only a constant number of levels at
the top of the compressed forest structure. Hence this structure can be updated in constant time
during a catenate.
We come finally to the last two operations, pop and eject. We shall describe pop; eject is
symmetric. A pop consists of two parts. The first removes the element to be popped and the
second repairs the damage to regularity caused by this removal. Let t be the left or only triple of
the deque d to be popped. The first part of the pop consists of popping the prefix of t, or popping
the suffix if the prefix is empty, and replacing t in d by the triple t' resulting from this pop, forming
d'. As we shall see below, d' may not be regular but only semi-regular, because the preferred path
starting at t' may be red. In this case let u be the red triple at the end of this preferred path. Using
the compressed forest representation, we can access u in constant time. The second part of the pop
replaces u and its descendants by a tree of triples representing the same elements but which has a
green root v and satisfies the regularity constraints. This produces a regular representation of d'
and finishes the pop.
To repair u = (p_1, d_1, s_1), apply the appropriate one of the following cases. Since u is red,
d_1 ≠ ∅.

Case 1. Triple u is a left triple. Pop the first triple (p_2, d_2, s_2) from d_1; let d_1' be the rest of d_1.

Case 1a. Both p_2 and s_2 are nonempty. Push (∅, ∅, s_2) onto d_1' to form d_1''. Push the elements
on p_1 in the opposite order onto p_2, forming p_2'. Catenate deques d_2 and d_1'' to form d_3. Let
v = (p_2', d_3, s_1).

Case 1b. One of p_2 and s_2 is empty. Combine p_1, p_2, and s_2 into a single buffer p_3. Let
v = (p_3, d_1', s_1).

Case 2. Triple u is an only triple. Apply the appropriate one of the following three cases.

Case 2a. Suffix s_1 contains at least eight elements. Proceed as in Case 1, obtaining a green triple
v whose suffix s_1 contains at least eight elements.

Case 2b. Prefix p_1 contains at least eight elements. Proceed symmetrically to Case 1,
obtaining a green triple v whose prefix p_1 contains at least eight elements.

Case 2c. Both p_1 and s_1 contain at most seven elements. Pop the first triple (p_2, d_2, s_2)
from d_1 (without any repair); let d_1' be the rest of d_1. If d_1' = ∅, push the elements on p_1 in the
opposite order onto p_2 to form p_4, inject the elements on s_1 into s_2 to form s_4, and let
v = (p_4, d_2, s_4). Otherwise, eject the last triple (p_3, d_3, s_3)
from d_1' (without any repair); let d_1'' be the rest of d_1'. If one of p_2 and s_2 is empty,
combine p_1, p_2, and s_2 into a single buffer p_4 and let d_4 = d_1''. Otherwise, push (∅, ∅, s_2) onto
d_1'', forming d_1'''; push the elements on p_1 in the opposite order onto p_2, forming p_4; and catenate d_2 and d_1''' to form
d_4. Symmetrically, if one of p_3 and s_3 is empty, combine p_3, s_3, and s_1 into a single buffer s_4,
and let d_5 = d_4. Otherwise, inject (p_3, ∅, ∅) into d_4, forming d_4'; inject the elements on s_1 into
s_3, forming s_4; and catenate d_4' and d_3 to form d_5. Let v = (p_4, d_5, s_4).
Lemma 6.3 Removing the first element (from the first buffer) in a regular deque produces a semi-regular
deque whose only violation of the regularity constraint is that the preferred path containing
the left or only top-level triple may be red. Removing the first and last elements (from the first and
last buffers, respectively) in a regular deque produces a semi-regular deque.
Proof. Let d be a regular deque, and let t = (p_1, d_1, s_1) be its left or only triple. Let t' be formed
from t by popping p_1, and let d' be formed from d by replacing t by t'. If t is green, yellow, or
orange (t cannot be red by regularity), then t' can be yellow, orange, or red, respectively. (One of
these transitions will occur unless both t and t' are green, in which case d' is regular since d is.) In
each case it is easy to verify that the regularity of d implies that triple t' satisfies the appropriate
semi-regularity constraint; so do all other triples since their colors don't change. The only possible
violation of regularity is that the preferred path containing t' may be red. An analogous argument
shows that if the last element of d' is removed to form d'' then d'' will still be semi-regular: if t is
the only triple of d, the two removals can degrade its color by only one color; if t is a left triple, an
argument symmetric to that above applies to its sibling. 2
Lemma 6.4 Popping a regular deque produces a regular deque.
Proof. Let d be the deque to be popped, and let d' be the deque formed by removing the first
element from the first buffer of d. Let t' be the left or only triple of d'. By Lemma 6.3, d' is
semi-regular, and the only violation of regularity is that the preferred path containing t' may be
red. If this preferred path is green, then d' is regular, the pop is finished, and the lemma is true.
Suppose, on the other hand, that this preferred path is red. Let u = (p_1, d_1, s_1) be the red triple
on this path. Since d' is semi-regular and u is red, d_1 must be regular. We claim that the repair
described above in Cases 1 and 2 replaces u and its descendants by a tree of triples with a green
root satisfying the semi-regularity constraints, which implies that the deque d'' resulting from the
repair is regular, thus giving the lemma.
Consider Case 1 above. Since d_1 is regular, the deque d_1' formed from d_1 by popping the triple
(p_2, d_2, s_2) is semi-regular. In Case 1a, the push onto d_1' to form d_1''
leaves d_1'' semi-regular by Lemma 6.1. Deque d_2 is semi-regular since d_1 is regular, and by Lemma 6.2 the deque
d_3 formed by catenating d_2 and d_1'' is semi-regular. The triple v = (p_2', d_3, s_1) is green. This gives
the claim. In Case 1b, the triple v = (p_3, d_1', s_1) is green and d_1' is semi-regular, again giving the
claim.

Consider Case 2 above. The same argument as in Case 1 verifies the claim in Cases 2a and 2b.
In Case 2c, if d_1' = ∅, triple v is green and d_2 is semi-regular, which gives the claim. Otherwise, d_1'' is
semi-regular by Lemma 6.3, deque d_5 is semi-regular by appropriate applications of Lemmas 6.1
and 6.2, and v is green. Again the claim is true. 2
As with the other operations, a pop changes only a constant number of levels at the top of the
compressed forest and hence can be performed in constant time.
Theorem 6.1 Each of the deque operations takes O(1) time and preserves regularity.
Proof. It is straightforward to verify that the compressed forest representation allows each of
the deque operations to be performed as described in O(1) time. Lemmas 6.1, 6.2, and 6.4 give
preservation of regularity. 2
The deque representation we have presented is a hybrid of two alternative structures described
in [25], one based on pairs and quadruples and the other, suggested by Okasaki [34], based on triples
and quintuples. The present structure offers some conceptual simplifications over these alternatives.
The buffer size constraints in our representation can be reduced slightly, at the cost of making the
structure less symmetric. For example, the lower bounds on the suffix sizes of right triples and only
triples can be reduced by one, while modifying the definition of colors appropriately.
7 Further Results and Open Problems
We conclude in this section with some additional results and open problems. We begin with two
extensions of our structures, then mention some recent work, and finally give some open problems.
If the set A of elements to be stored in a deque has a total order, we can extend all the structures
described here to support an additional heap order based on the order on A. Specifically, we can
support the additional operation of finding the minimum element in a deque (but not deleting
it). Each operation remains constant-time, and the implementation remains purely functional. We
merely have to store with each buffer, each deque, and each pair the minimum element contained
in it. For related work see [3, 4, 19, 31].
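For example, a buffer can carry its cached minimum, which is cheap to maintain because buffers have constant size; a Haskell sketch with hypothetical names:

    -- A constant-size buffer paired with its cached minimum.
    data MinBuffer a = MinBuffer { cachedMin :: a, elems :: [a] }

    pushMin :: Ord a => a -> MinBuffer a -> MinBuffer a
    pushMin x (MinBuffer m es) = MinBuffer (min x m) (x : es)

    -- Precondition: the buffer is non-empty. Recomputing the minimum
    -- after a pop is O(1) because a buffer holds O(1) elements.
    popMin :: Ord a => MinBuffer a -> (a, Maybe (MinBuffer a))
    popMin (MinBuffer _ [x])      = (x, Nothing)
    popMin (MinBuffer _ (x : es)) = (x, Just (MinBuffer (minimum es) es))

The minimum of a whole deque is then the minimum of its buffer minima and the cached minimum of its child.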
We can also support a flip operation on deques, for each of the structures in Sections 4 and 6. A
flip operation reverses the linear order of the elements in the deque; the ith from the front becomes
the ith from the back and vice-versa. For the noncatenable deques of Section 4, we implement flip
by maintaining a reversal bit that is flipped by a flip operation. If the reversal bit is set, a push
becomes an inject, a pop becomes an eject, an inject becomes a push, and an eject becomes a pop.
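A sketch of the reversal bit for the noncatenable deques of Section 4, assuming operations pushD and injectD exist on the underlying deque type of that section (they are assumptions of this illustration, not definitions from the text):

    -- A flippable deque: a reversal bit over an underlying deque. When
    -- the bit is set, the two ends exchange roles. pushD and injectD
    -- are assumed primitives on the underlying deque of Section 4.
    data FlipDeque a = FlipDeque Bool (Deque a)

    flipD :: FlipDeque a -> FlipDeque a
    flipD (FlipDeque r d) = FlipDeque (not r) d

    pushF :: a -> FlipDeque a -> FlipDeque a
    pushF x (FlipDeque False d) = FlipDeque False (pushD x d)
    pushF x (FlipDeque True  d) = FlipDeque True  (injectD x d)
    -- popF, injectF, and ejectF dispatch symmetrically on the bit.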
To support catenation as well as flip requires a little more work. We need to symmetrize
the structure and add reversal bits at all levels. The only non-symmetry in the structure is in
the definition of preferred children: the preferred child of a yellow triple is its left child and the
preferred child of an orange triple is its right child. Flipping exchanges left and right, but we
do not want this operation to change preferred children; we want the partition of the compressed
forest into preferred paths to be unaffected by a flip. Thus when we create a brand-new triple we
designate its current left child to be its preferred child if it is yellow and its current right child to
be the preferred child if it is orange. When a triple changes from orange to yellow or yellow to
orange, we switch its preferred child, irrespective of current left and right.
To handle flipping, we add a reversal bit for every deque and every buffer in the structure. A
reversal bit set to 1 means that the entire deque or buffer is flipped. Reversal bits are cumulative
along paths of descendants in the compressed forest: for a given deque or buffer, it is reversed if an
odd number of its ancestors (including itself) have reversal bits set to 1. To flip an entire deque, we
flip its reversal bit. Whenever doing a deque operation, we push reversal bits down in the structure
so that each deque actually being manipulated is not reversed; for reversed buffers, push and inject,
and pop and eject, switch roles. The details are straightforward.
Now we turn to recent related work. In work independent of ours, Okasaki [33, 35] has devised
a confluently persistent implementation of catenable stacks (or steques). His implementation is not
real-time but gives constant amortized time bounds per operation. It is also not purely functional,
but uses memoization. Okasaki uses rooted trees to represent the stacks. Elements are popped
using a memoized version of the path reversal technique previously used in a data structure for
the disjoint set union problem [45]. Though Okasaki's solution is neither real-time nor purely
functional, it is simpler than ours. Extending Okasaki's method to the case of deques is an open
problem.
After seeing an early version of our work [27], Okasaki [35, 36] observed that if amortized time
bounds suffice and memoization is allowed, then all of our data structures can be considerably sim-
plified. The idea is to perform fixes in a lazy fashion, using memoization to record the results. This
avoids the need to maintain the "stack of stacks" structures in our representations, and also allows
the buffers to be shorter. Okasaki called the resulting general method "implicit recursive slow-
down". He argues that the standard techniques of amortized analysis [44] do not suffice in this case
because of the need to deal with persistence. His idea is in fact much more general than recursive
slow-down, however, and the standard techniques [44] do indeed suffice for an analysis. Working
with Okasaki, we have devised even simpler versions of our structures that need only constant-size
buffers and take O(1) amortized time per deque operation, using a replacement operation that
generalizes memoization [26].
Finally, we mention some open problems. As noted above, one is to extend Okasaki's path
reversal technique to deques. A second one is to modify the structure of Section 6 to use buffers
of bounded size. We know how to do this for the case of stacks, but the double-ended case has
unresolved technicalities. Of course, one solution is to plug the structure of Section 4 in-line into
the structure of Section 6 and simplify to the extent possible. But a more direct approach may
well work and lead to a simpler solution. Another open problem is to devise a version of the
structure in Section 6 that uses only one subdeque instead of two, thus leading to a linear recursive
structure. A final open problem is to devise a purely functional implementation of finger search
trees (random-access lists) with constant-time catenation. Our best solution to this problem has
O(log log n) catenation time [28].
Acknowledgements
We thank Adam Buchsbaum, David Wagner, Ian Munro, and Chris Okasaki for their vital contributions
to this paper. Adam Buchsbaum engaged in extensive and fruitful discussions concerning
our ideas. David Wagner suggested the idea of the color invariant as an alternative to the explicit
use of binary counting as a work allocation mechanism. Ian Munro, after seeing a presentation of
our ideas, pointed out the connection of the color invariant to the redundant binary representation
of [9]. Chris Okasaki provided valuable comments on drafts of our work. We also thank the referees
for their insightful and valuable suggestions.
--R
Anatomy of LISP.
Data structural bootstrapping
Confluently persistent deques via data structural bootstrapping.
An efficient functional implementation of FIFO queues.
How to search in history.
Fractional cascading: I.
A programming and problem-solving seminar
Fully persistent arrays.
Efficient uses of the past.
Fully persistent lists with catenation.
Making data structures persistent.
The theory and practice of first-class prompts
Abstract continuations: A mathematical semantics for handling full functional jumps.
Optimal selection in a min-heap
Deques with heap order.
The Science of Programming.
The efficient implementation of very-high-level programming language constructs
A symmetric set of efficient list operations.
Stores and partial continuations as first-class objects in a language and its environment
Purely Functional Lists.
Simple confluently persistent catenable lists (extended abstract).
Persistent lists with catenation via recursive slow-down
Purely functional representations of catenable sorted lists.
Fundamental Algorithms
An optimal RAM implementation of catenable min double-ended queues
New real-time simulations of multihead tape units
Amortization, lazy evaluation, and persistence: lists with catenation via lazy linking.
Simple and efficient purely functional queues and deques.
Purely Functional Data Structures.
Catenable double-ended queues
Searching in the past
Searching in the past
Persistent Data Structures.
Planar point location using persistent search trees.
Control delimiters and their hierarchies.
An example of hierarchical design and proof.
Amortized computational complexity.
Worst case analysis of set union algorithms.
--TR
Worst-case Analysis of Set Union Algorithms
How to search in history
Planar point location using persistent search trees
Searching and sorting similar lists
Deques with heap order
Abstract continuations: a mathematical semantics for handling full jumps
Making data structures persistent
Stores and partial continuations as first-class objects in a language and its environment
The theory and practice of first-class prompts
Control delimiters and their hierarchies
Real-time deques, multihead Turing machines, and purely functional programming
An optimal algorithm for selection in a min-heap
Fully persistent lists with catenation
Confluently persistent deques via data-structural bootstrapping
Data-Structural Bootstrapping, Linear Path Compression, and Catenable Heap-Ordered Double-Ended Queues
Persistent lists with catenation via recursive slow-down
Purely functional representations of catenable sorted lists
Catenable double-ended queues
Purely functional lists
Purely functional data structures
Worst-case efficient priority queues
An optimal RAM implementation of catenable min double-ended queues
Real-Time Simulation of Multihead Tape Units
New Real-Time Simulations of Multihead Tape Units
An example of hierarchical design and proof
The Art of Computer Programming Volumes 1-3 Boxed Set
The Science of Programming
Anatomy of LISP
Simple Confluently Persistent Catenable Lists (Extended Abstract)
Fully Persistent Arrays (Extended Abstract)
Amortization, lazy evaluation, and persistence
Real-time simulation of concatenable double-ended queues by double-ended queues (Preliminary Version)
A programming and problem-solving seminar
The efficient implementation of very-high-level programming language constructs
Persistent data structures
--CTR
Amos Fiat , Haim Kaplan, Making data structures confluently persistent, Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms, p.537-546, January 07-09, 2001, Washington, D.C., United States
Amos Fiat , Haim Kaplan, Making data structures confluently persistent, Journal of Algorithms, v.48 n.1, p.16-58, August
George Lagogiannis , Yannis Panagis , Spyros Sioutas , Athanasios Tsakalidis, A survey of persistent data structures, Proceedings of the 9th WSEAS International Conference on Computers, p.1-6, July 14-16, 2005, Athens, Greece | purely functional queues;queue;double-ended queue;purely functional data structures;data structures;stack;catenation;persistent data structures |
324179 | Linear hash functions. | Consider the set H of all linear (or affine) transformations between two vector spaces over a finite field F. We study how good H is as a class of hash functions, namely we consider hashing a set S of size n into a range having the same cardinality n by a randomly chosen function from H and look at the expected size of the largest hash bucket. H is a universal class of hash functions for any finite field, but with respect to our measure different fields behave differently. If the finite field F has n elements, then there is a bad set S ⊆ F^2 of size n with expected maximal bucket size Ω(n^{1/3}). If n is a perfect square, then there is even a bad set with largest bucket size always at least √n. (This is worst possible, since with respect to a universal class of hash functions every set of size n has expected largest bucket size below √n + 1/2.) If, however, we consider the field of two elements, then we get much better bounds. The best previously known upper bound on the expected size of the largest bucket for this class was O(2^{√(log n)}). We reduce this upper bound to O(log n log log n). Note that this is not far from the guarantee for a random function. There, the average largest bucket would be Θ(log n / log log n). In the course of our proof we develop a tool which may be of independent interest. Suppose we have a subset S of a vector space D over Z_2, and consider a random linear mapping of D to a smaller vector space R. If the cardinality of S is larger than c_ε |R| log |R|, then with probability 1 - ε, the image of S will cover all elements in the range. | Introduction
Consider distributing n balls in s buckets, randomly and independently. The resulting distribution
of the balls in the buckets is the object of occupancy theory.
∗ Dep. of Math., Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv, Israel and Institute for Advanced
Study, Princeton, NJ 08540. Research supported in part by a USA-Israeli BSF grant, by the Sloan Foundation grant
No. 96-6-2, by an NEC Research Institute grant and by the Hermann Minkowski Minerva Center for Geometry at
Tel Aviv University. E-mail: noga@math.tau.ac.il.
† Fakultät für Informatik und Automatisierung, Technische Universität Ilmenau, Postfach 100565, 98684 Ilmenau,
Germany. This work was done while the author was affiliated with the University of Dortmund. Partially supported
by DFG grant Di 412/5-1. E-mail: Martin.Dietzfelbinger@theoinf.tu-ilmenau.de.
‡ BRICS, Centre of the Danish National Research Foundation, University of Aarhus, Ny Munkegade, Aarhus,
Denmark. Supported by the ESPRIT Long Term Research Programme of the EU under project number 20244
(ALCOM-IT). E-mail: bromille@brics.dk. Part of this work was done while the author was at the University of
Toronto.
§ IBM Haifa Research Lab, MATAM, Haifa 31905, Israel. E-mail: erezp@haifa.vnet.ibm.com. Most of this work
was done while the author was visiting the University of Toronto.
¶ Rényi Institute of the Hungarian Academy of Sciences, Pf. 127, Budapest, H-1364 Hungary. Partially supported
by DIMACS Center and the grants OTKA T-020914, T-030059 and FKFP 0607/1999. E-mail: tardos@cs.elte.hu.
Part of this work was done while the author was visiting the University of Toronto and the Institute for Advanced
Study, Princeton.
In the theory of algorithms and in complexity theory, it is often necessary and useful to consider
putting balls in buckets without complete independence. More precisely, the following setting is
studied: A class H of hash functions, each mapping a universe U to {1, ..., s}, is fixed. A set
S ⊆ U to be hashed is given by an adversary, a member h ∈ H is chosen uniformly at random, S is
hashed using h, and the distribution of the multi-set {h(x) | x ∈ S} is studied. If the class H is the
class of all functions between U and {1, ..., s}, we get the classical occupancy problems. Carter
and Wegman defined a class H to be universal if for all x ≠ y in U,

Prob_{h ∈ H}[h(x) = h(y)] ≤ 1/s.

We remark that a stricter definition is often used in the complexity theory literature.
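As an aside, the universality of linear maps over Z_2, the class studied in this paper, is easy to check empirically. The following sketch (our own illustration, not from the paper) represents a random linear map Z_2^m → Z_2^k as k random row vectors and estimates the collision probability of two fixed keys; for any x ≠ y it equals 2^{-k} = 1/s exactly.

    import random

    m, k, trials = 8, 3, 20000
    x, y = 0b10110101, 0b01110010        # two distinct keys in Z_2^8

    def apply_T(rows, v):
        # rows: the k rows of the matrix, each an m-bit int; output: k parity bits
        return tuple((bin(r & v).count("1") & 1) for r in rows)

    collisions = 0
    for _ in range(trials):
        rows = [random.getrandbits(m) for _ in range(k)]
        collisions += apply_T(rows, x) == apply_T(rows, y)

    print(collisions / trials, "vs 1/s =", 2 ** -k)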
For universal families, the following properties are well known; variations of them have been
used extensively in various settings:
1. If S of size n is hashed to n^2 buckets, with probability more than 1/2, no collision occurs.
2. If S of size 2n^2 is hashed to n buckets, with probability more than 1/2, every bucket receives
an element.
3. If S of size n is hashed to n buckets, the expected size of the largest bucket is less than √n + 1/2.
The intuition behind universal hashing is that we often lose relatively little compared to using a
completely random map. Note that for property 1, this is true in a very strong sense; even with
complete randomness, we do not expect o(n^2) buckets to suffice (the birthday paradox), so nothing
is lost by using a universal family instead. The bounds in the second and third properties, however,
are rather coarse compared to what one would get with complete randomness. For property 2, with
complete randomness, Θ(n log n) balls would suffice to cover the buckets with good probability
(the coupon collector's theorem), i.e. a polynomial improvement over n^2, and for property 3,
with complete randomness, we expect the largest bucket to have size Θ(log n / log log n), i.e. an
exponential improvement over √n. In these last cases we do seem to lose quite a lot compared to
using a completely random map and better bounds would seem desirable. However, it is rather
easy to construct (unnatural) examples of universal families and sets to be hashed showing that
size Ω(n^2) is necessary to cover n buckets with non-zero probability, and that buckets of size Ω(√n)
are in general unavoidable, when a set of size n is hashed to n buckets. This shows that the abstract
property of universality does not allow for stronger statements. Now fix a concrete universal family
of hash functions. We ask the following question: To which extent are the finer occupancy properties
of completely random maps preserved?
We provide answers to these questions for the case of linear maps between two vector spaces
over a finite field, a natural and well known class of universal (in the sense of Carter and Wegman)
hash functions. The general flavor of our results is that "large fields are bad", in the sense that the
bounds become the worst possible for universal families, while "small fields are good" in the sense
that the bounds become as good or almost as good as the ones for independently distributed balls.
More precisely, for the covering problem, we show the following (easy) theorem.
Theorem 1 Let F be a field of size n and let H be the class of linear maps between F^2 and F.
There is a subset S of F^2 of size Ω(|F|^2), so that for no h ∈ H does h(S) cover all of F.
On the other hand, we prove the following harder theorem.
Theorem 2 Let S be a subset of a vector space over Z_2 and choose a random linear map to a
smaller vector space R. If |S| ≥ c_ε |R| log |R| then with probability at least 1 - ε the image of S
covers the entire range R.
For the "largest bucket problem", let us first introduce some notation: Let U be the universe
from which the keys are chosen. We fix a class H of functions mapping U to {1, ..., s}. Then, a
set S ⊆ U of size n is chosen by an adversary, and we uniformly at random pick a hash function
h ∈ H, hash S using h and look at the size of the largest resulting hash bucket. We denote the
expectation of this size by L^s_n(H). Formally,

L^s_n(H) = max_{S ⊆ U, |S| = n} E_{h ∈ H} max_{y ∈ {1,...,s}} |{x ∈ S : h(x) = y}|.

Usually we think of s being of size close to n. Note that if s ≥ n = |S|, hashing with any
universal class yields L^s_n(H) ≤ √n + 1/2.
The class H we will consider is the set of linear maps F^m → F^k for m > k. Here F is
a finite field and s = |F|^k. This class is universal for all values of the parameters.
We first show that when the field is large, the expected largest bucket can be large.
Theorem 3 Let F be a finite field with |F| = s. For the class H of all linear transformations
F^2 → F,

L^s_s(H) = Ω(s^{1/3}).

Furthermore if |F| is a perfect square we have

L^s_s(H) > √s.

Note how close our lower bound for quadratic fields is to the upper bound of √s + 1/2 that holds
for every universal class. We also mention that for the bad set we construct in Theorem 8 for a
quadratic field there is no good linear hash function, since there always exists a bucket of size at
least √s.
When the field is the field of two elements, the situation is completely different. Markowsky,
Carter and Wegman [MCW78] gave an upper bound on L^s_s(H) for this case; Mehlhorn and Vishkin
[MV84] improved on this result (although this is implicit in their paper) and showed that L^s_s(H) =
O(2^{√(log s)}). We further improve the bound and show that:
Theorem 4 For the class H of all linear transformations between two vector spaces over Z_2,
L^s_s(H) = O(log s log log s).
Furthermore, we also show that even if the range is smaller than |S| by a logarithmic factor, the
same still holds:
Theorem 5 For the class H of all linear transformations between two vector spaces over Z_2,
L^s_{s log s}(H) = O(log s log log s).
Note that even if one uses the class R of all functions one obtains only a slightly better result:
the expected size of the largest bucket in this case is L^s_s(R) = Θ(log s / log log s) and L^s_{s log s}(R) =
Θ(log s), which is the best possible bound for any class. Interestingly, our upper bound is based on
our upper bound for the covering property.
We do not have any non-trivial lower bounds on L^s_s for the class of linear maps over Z_2, i.e., it
might be as good as O(log s / log log s). We leave this as an open question.
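The flavor of Theorem 4 is easy to explore experimentally. The sketch below (our own illustration) hashes a fixed set of s keys from Z_2^m to Z_2^k with a fresh random linear map each round and reports the average largest bucket; note that it only estimates L^s_s(H) from below, since the definition takes a maximum over adversarially chosen sets S.

    import random
    from collections import Counter

    def max_bucket(S, m, k):
        rows = [random.getrandbits(m) for _ in range(k)]
        T = lambda v: tuple((bin(r & v).count("1") & 1) for r in rows)
        return max(Counter(map(T, S)).values())

    m, k = 16, 8                                  # s = 2^8 = 256 buckets
    S = random.sample(range(2 ** m), 2 ** k)      # an arbitrary set of s keys
    print(sum(max_bucket(S, m, k) for _ in range(200)) / 200)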
1.1 Motivation
There is no doubt that the method of implementing a dictionary by hashing with chaining, recommended
in textbooks [CLR90, GBY90] especially for situations with many update operations, is a
practically important scheme.
In situations in which a good bound on the cost of single operations is important, e. g., in
real-time applications, the expected maximal bucket size as formed by all keys ever present in the
dictionary during a time interval plays a crucial role. Our results show that, at least as long as the
size of the hash table can be determined right at the start, using a hash family of linear functions
over Z 2 will perform very well in this respect. For other simple hash classes such bounds on the
worst case bucket size are not available, or even fail to hold (see example in Section 4); other, more
sophisticated hash families [S89, DM90, DGMP92] that do guarantee small maximal bucket sizes
consist of functions with higher evaluation time. Of course, if worst case constant time for certain
operations is absolutely necessary, the known two-level hashing schemes can be used, e. g., the FKS
scheme [FKS84] for static dictionaries; dynamic perfect hashing [DKMHRT94] for the dynamic case
with constant time lookups and expected time O(n) for n update operations; and the "real-time
dictionaries" from [DM90] that perform each operation in constant time, with high probability. It
should be noted, however, that a price is to be paid for the guaranteed constant lookup time in
the dynamic schemes: the (average) cost of insertions is significantly higher than in simple schemes
like chained hashing; the overall storage requirements are higher as well.
1.2 Related work
Another direction in trying to show that a specific class has a good bound on the expected size of
the largest bucket is to build a class specifically designed to have such a good property.
One immediate such result is obtained by looking at the class H of d-degree polynomials over
finite fields, where d = Θ(log n / log log n) (see, e.g., [ABI86]). It is easy to see that this class maps
each d elements of the domain independently to the range, and thus, the bound that applies to the
class of all functions also applies to this class. We can combine this with the following well known
construction, found in, e.g., [FKS84], and sometimes called "collapsing the universe": There is a
class C of size 2^{O(log n + log log |U|)} containing functions mapping U to {1, ..., n^{k+2}}, so that, for any
set S of size n, a randomly chosen map from C will be one-to-one with probability 1 - O(1/n^k).
The class consisting of functions obtained by first applying a member of C, then a member of H,
is then a class with L^n_n = O(log n / log log n) and size 2^{O(log log |U| + log^2 n / log log n)}, and with evaluation
time O(log n / log log n) in a reasonable model of computation, say, a RAM with unit cost operations
on members of the universe to be hashed.
More efficient (but much larger) families were given by Siegel [S89] and by Dietzfelbinger and
Meyer auf der Heide [DM90]. Both provide families of size |U|^{n^ε} such that the functions can be
evaluated in O(1) time on a RAM and with L^n_n = O(log n / log log n). The families from [S89] and
[DM90] are somewhat complex to implement while the class of linear maps requires only very basic
bit operations (as discussed already in [CW79]). It is therefore desirable to study this class, and
this is the main purpose of the present paper.
1.3 Notation
If S is a subset of the domain D of a function h we use h(S) to denote {h(s) | s ∈ S}. If x is
an element of the range we use h^{-1}(x) to denote {s ∈ D | h(s) = x}. If A and B are subsets
of a vector space V and x ∈ V we use the notations A + B = {a + b | a ∈ A, b ∈ B} and
x + A = {x + a | a ∈ A}. We use Z_2 to denote the field with 2 elements. All logarithms in this
paper are base two.
2 The covering property
2.1 Lower bounds for covering with a large field
We prove Theorem 1. Take any set A ⊆ F of size ⌊|F|/2⌋ and let S = {(x, y) : y ≠ 0, x/y ∈ A,
(x - 1)/y ∉ A}. S has density around one quarter. To see this, note that if x and y ≠ 0 are picked
randomly and independently in F, (x/y, (x - 1)/y) has the same distribution as a uniform random
pair (u, v) with u ≠ v. No linear map maps S onto F. To see this take a nonzero linear map,
given by h(x, y) = ax - by, and note that if 0 ∈ h(S) then a ≠ 0 and b/a ∈ A; but in this case
a ∉ h(S), since h(x, y) = a with (x, y) ∈ S would force (x - 1)/y = b/a ∈ A, contradicting the
definition of S.
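For a small prime field this construction can be checked by brute force. The following sketch is our own (written with the sign convention h(x, y) = ax + by, for which the same argument applies with -b/a in place of b/a):

    p = 11
    inv = {y: pow(y, p - 2, p) for y in range(1, p)}      # inverses mod p
    A = set(range(1, p // 2 + 1))                         # any set of size ~|F|/2

    S = [(x, y) for x in range(p) for y in range(1, p)
         if (x * inv[y]) % p in A and ((x - 1) * inv[y]) % p not in A]

    def onto(a, b):
        return len({(a * x + b * y) % p for (x, y) in S}) == p

    print("density of S:", len(S) / p ** 2)               # about 1/4
    print("some h is onto:", any(onto(a, b) for a in range(p) for b in range(p)))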
2.2 Upper bounds for covering with a small field - the existential case
We start by showing that if we have a subset A of a vector space over Z_2 and |A| is sufficiently
larger than another space W then there exists a linear transformation T mapping A onto the entire
range. The constant e below is the base of the natural logarithm.
Theorem 6 Let A be a finite set of vectors in a vector space V of an arbitrary dimension over Z_2
and let t > 0 be an integer. If |A| > t2^t / log e then there exists a linear transformation
T : V → Z_2^t, so that T maps A onto Z_2^t.
For the proof of this theorem we need the following simple lemma. Note that although we state
the lemma for vector spaces, it holds for any finite group.
Lemma 2.1 Let V be a finite vector space and A, B ⊆ V. Then there exists a vector v ∈ V with
1 - |(v + A) ∪ B|/|V| ≤ (1 - |A|/|V|)(1 - |B|/|V|).
Proof. If v and u are both chosen uniformly and independently at random from V then the events
u ∉ B and u ∉ v + A have probabilities 1 - |B|/|V| and 1 - |A|/|V|, and they are independent;
hence the expected density of the points covered by neither set is the product, and some v achieves
at most that. □
Proof of Theorem 6. Let m be the dimension of V, and for the sets A_i constructed below write
α_i = |A_i|/2^m for their density. Starting with A_0 = A, choose a vector v_1 so that for A_1 =
A_0 ∪ (v_1 + A_0) we have 1 - α_1 ≤ (1 - α_0)^2. Such a choice for v_1 exists by Lemma 2.1. Then, by the
same procedure, we choose a v_2 so that for A_2 = A_1 ∪ (v_2 + A_1) we have 1 - α_2 ≤ (1 - α_1)^2, and
so on up to A_{m-t}.
Note that one can assume that the vectors v_1, ..., v_{m-t} are linearly independent, since choosing a
vector v_i which linearly depends on the vectors formerly chosen makes A_i = A_{i-1}. Let W be the
span of v_1, ..., v_{m-t}; then A_{m-t} = A + W, and

1 - α_{m-t} ≤ (1 - α_0)^{2^{m-t}} ≤ e^{-α_0 2^{m-t}} = e^{-|A|/2^t} < 2^{-t}.

We choose an onto linear transformation T : V → Z_2^t such that its kernel T^{-1}(0) equals W. As
T(W + x) = T(x) for every x, we have T(A) = T(A_{m-t}). Moreover A_{m-t} meets every coset x + W:
otherwise x + W and A_{m-t} were disjoint, a contradiction as |x + W| = 2^{m-t} while the complement
of A_{m-t} has fewer than 2^{-t} · 2^m = 2^{m-t} elements. Thus T(A) = T(A_{m-t}) = Z_2^t as claimed. □
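The doubling argument is easy to watch in action. Here is a toy run (our own sketch; vector addition over Z_2^m is XOR), greedily picking each v_i to maximize the coverage, as the existence claim of Lemma 2.1 permits:

    import random

    m, t = 9, 3
    V = range(2 ** m)
    A = set(random.sample(list(V), 24))        # |A| = 24 > t*2^t/log e ~ 16.6

    for _ in range(m - t):
        v = max(V, key=lambda v: len(A | {v ^ a for a in A}))
        A |= {v ^ a for a in A}

    print("uncovered fraction:", 1 - len(A) / 2 ** m, "< 2^-t =", 2 ** -t)

After m - t rounds the uncovered density is below 2^{-t}, so every coset of the accumulated span is hit and the quotient map is onto, exactly as in the proof.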
The bound in Theorem 6 is asymptotically tight as shown by the following proposition.
Proposition 2.2 For every large enough integer t there is a set A of at least (t - 3 log t)2^t / log e
vectors in a vector space V over Z_2 so that no linear map T : V → Z_2^t maps A onto Z_2^t.
Proof. Let V = Z_2^{t+s} for a suitable s, say s = ⌊t/3⌋, let N = 2^{t+s}, and let A be chosen at
random by picking each element of V independently and with probability p = 1 - 2^{-x} into the set,
where x = (t - 2 log t)/2^s.
From Chebyshev's inequality we know that with probability at least 3/4, A has cardinality at
least pN - 2√(pN). Using p > x/log e - x^2/(2 log^2 e) one can show that this is as
many as claimed in the proposition. Let us compute the probability that there exists a linear map
T : V → Z_2^t such that T maps A onto Z_2^t. There are 2^{t(t+s)} possible maps T and each of them
maps A onto Z_2^t with probability at most (1 - (1 - p)^{2^s})^{2^t} ≤ exp(-2^t (1 - p)^{2^s}) = exp(-t^2),
since (1 - p)^{2^s} = 2^{-(t - 2 log t)} = t^2 2^{-t}; summing over all 2^{t(t+s)} maps still leaves a probability of o(1).
So with probability almost 3/4, A is not small and still no T maps A onto Z_2^t. □
2.3 Choosing the linear map at random
In this subsection we strengthen Theorem 6 and prove that if A is bigger than what is required
there by only a constant factor, then almost all choices of the linear transformation T work. This
may seem immediate at first glance since Lemma 2.1 tells us that a random choice for the next
vector is good on average. In particular, it might seem that for a random choice of v_1 and v_2 in the
proof of Theorem 6, E[1 - α_2] ≤ (1 - α_0)^4. Unfortunately this is not the case:
For example, think of A being a linear subspace containing half of V. In this case, the ratio of
points that are not covered is 1/2. As random vectors v_i are chosen to be added to A, vectors in
A are chosen with probability 1/2. Thus, after i steps, the uncovered ratio remains 1/2 with
probability 2^{-i} and becomes 0 otherwise. Thus, the expected value of the uncovered ratio after i
steps is 2^{-(i+1)}, which is much bigger than (1 - α_0)^{2^i} = 2^{-2^i}.
Our first lemma is technical in nature.
Lemma 2.3 Let δ_i for 1 ≤ i ≤ k be random variables and let 0 < δ_0 < 1 be a constant. Suppose
that for 0 ≤ i < k we have 0 ≤ δ_{i+1} ≤ δ_i and, conditioned on any set of values for δ_1, ..., δ_i, we
have E[δ_{i+1}] ≤ δ_i^2. Then for any threshold 0 < t < 1 we have

Prob[δ_k ≥ t] ≤ t^{k - c},  where c = log log(1/t) - log log(1/δ_0).

Proof: The proof is by induction on k. The base case is trivial.
We assume the statement of the lemma for k and prove it for k + 1. Let c = log log(1/t) -
log log(1/δ_0). We may suppose c ≤ k, since otherwise the bound in the lemma is greater than 1.
After the choice of δ_1, the rest of the random variables form a random process of length k
satisfying the conditions of the lemma (unless δ_1 = 0), thus we can apply the inductive hypothesis
to get Prob[δ_{k+1} ≥ t | δ_1] ≤ f(δ_1), where we define f(x) on (0, δ_0] as the bound of the lemma for
a process of length k started with the constant x; f is monotone in the same
interval and clearly an upper bound on Prob[δ_{k+1} ≥ t | δ_1].
We claim that in the interval 0 ≤ x ≤ δ_0 we have f(x) ≤ f'(δ_0) x / δ_0. To prove this simply
observe that f'(x)/x is first increasing then decreasing on (0, 1). To see this compute the derivative.
If δ_0 is still in the increasing phase then the claim follows directly. Suppose now that δ_0 is already
in the decreasing phase and define x' as the point where the two phases meet. Notice that we
assumed 0 ≤ x ≤ δ_0 in the beginning of the proof. Let us define x'' as the largest point where
f'(x)/x still does not exceed f'(δ_0)/δ_0 and notice that the claimed inequality can fail only for
x ≥ x''. It is easy to check that x'' must still be in the increasing phase of f'(x)/x, thus the claim
holds there, and for x'' ≤ x < 1 we simply have f(x)/x ≤ 1/x, which suffices. We have thus proved
the claim in all cases for 0 < x ≤ δ_0. The claim is trivial for x = 0.
Using the claim we can finish the proof writing:

Prob[δ_{k+1} ≥ t] = E[Prob[δ_{k+1} ≥ t | δ_1]] ≤ E[f(δ_1)] ≤ f'(δ_0) E[δ_1] / δ_0 ≤ f'(δ_0) δ_0 ≤ t^{(k+1) - c}. □

We remark that the bound in the lemma is achievable for t = δ_0^{2^j} with an integer 0 ≤ j ≤ k.
The optimal process squares its value deterministically for j steps and then, in each of the remaining
k - j steps, either keeps its value (with probability equal to that value) or drops to 0.
Theorem 7 a) For every ε > 0 there is a constant c_ε > 0 such that the following holds. Let A be a
finite set of vectors in a vector space V of an arbitrary dimension over Z_2, let t > 0 be an integer.
If |A| ≥ c_ε t2^t then for a uniform random linear transformation T : V → Z_2^t we have
Prob[T(A) ≠ Z_2^t] ≤ ε.
b) If A is a subset of the vector space Z_2^u of density |A|/2^u = 1 - δ and t ≤ u is an integer,
then for a uniform random onto linear transformation T : Z_2^u → Z_2^t we have
Prob[T(A) ≠ Z_2^t] ≤ (2^{-t})^{(u-t) - c}, where c = log t - log log(1/δ).
Proof: We start with proving part b) of the theorem. In order to pick the onto map T we use
the following process (similar to the one in the proof of Theorem 6). Pick vectors v_1, ..., v_s,
s = u - t, uniformly at random from the vectors in Z_2^u and choose T to be a random onto linear
transformation T : Z_2^u → Z_2^t with the constraints T(v_i) = 0 for 1 ≤ i ≤ s, i.e. the vectors v_i
are in the kernel of T. Note that the v_i's are not necessarily linearly independent and that they
do not necessarily span the kernel. Still, the transformation T is indeed distributed uniformly at
random amongst all onto linear maps of Z_2^u onto Z_2^t.
Using notations similar to the ones used in the proof of Theorem 6, let A_0 = A and A_{i+1} =
A_i ∪ (v_{i+1} + A_i), and let δ_i = 1 - |A_i|/2^u. These random variables are nonnegative and monotone
decreasing in i with δ_0 = δ. The condition E[δ_{i+1} | δ_1, ..., δ_i] ≤ δ_i^2 is guaranteed by Lemma 2.1,
since A_i is independent of v_j for j > i. Thus all the conditions of
Lemma 2.3 are satisfied and we have

Prob[δ_s ≥ 2^{-t}] ≤ (2^{-t})^{s - (log t - log log(1/δ))}.

By the definition of s the right hand side here is equal to the estimate in the theorem. Finally note
that (as in the proof of Theorem 6) T(A) = T(A_s) = Z_2^t unless δ_s ≥ 2^{-t}: otherwise for some x ∈ Z_2^t
the sets T^{-1}(x) and A_s were disjoint with sizes 2^{u-t} and (1 - δ_s)2^u > 2^u - 2^{u-t}, a contradiction. Thus
we have the claimed upper bound for the probability that T(A) ≠ Z_2^t.
Now we turn to part a) of the theorem and prove it using part b). Part a) is about a random
linear transformation, not necessarily onto, but this difference from the claim just proved poses
less of a problem; the difficulty is that we do not have an a priori bound on δ = 1 - |A|/|V|. In fact,
the density |A|/|V| can be arbitrarily small. To solve this, we choose the transformation T in two
steps, the first step ensuring that the density of the covered set is substantial, then applying part b)
for the second step.
Let W = Z_2^w with 2|A|/ε ≤ |W| < 4|A|/ε. First, we pick uniformly at
random a linear transformation T_0 : V → W; then we pick a random onto linear map
T_1 : W → Z_2^t, and set T = T_1 ∘ T_0. This results in a uniformly chosen linear transformation
T : V → Z_2^t. This is
true even for a fixed onto T_1 and a random T_0, since the values T_0(e_i) on a basis
are independent and uniformly distributed in W, thus the values T(e_i) are also independent and
uniformly distributed in Z_2^t.
Any pair of vectors v ≠ w ∈ A collide (due to T_0) with probability Prob[T_0(v) = T_0(w)] = 1/|W|.
Thus the expected number of collisions is (|A| choose 2)/|W|. Since |T_0(A)| ≤ |A|/2 implies at least |A|/2
such collisions, Markov's inequality gives Prob[|T_0(A)| ≤ |A|/2] ≤ |A|/|W| ≤ ε/2.
For any fixed T_0, part b) of the theorem gives

Prob[T(A) ≠ Z_2^t | T_0] ≤ (2^{-t})^{(w-t) - log t + log log(1/δ)}.    (1)

In case |T_0(A)| > |A|/2 we have δ < 1 - |A|/(2|W|) < 1 - ε/8, and
using the monotonicity of the bound above we get that, choosing c_ε large enough, |A| ≥ c_ε t2^t implies
2^{w-t-log t} ≥ (4/ε) log(2/ε). This implies that the bound in Equation (1) is less than ε/2, thus we get
Prob[T(A) ≠ Z_2^t] ≤ ε.
We remark that a more careful analysis gives a c_ε that is a small polynomial of 1/ε.
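Theorem 7a) is also easy to check by simulation. The sketch below (our own) draws a random linear map as random 0/1 row vectors and tests whether the image of A covers the whole range; with |A| = 10 · t2^t the empirical covering probability is already close to 1.

    import random

    def covers(A, m, t):
        rows = [random.getrandbits(m) for _ in range(t)]
        image = {tuple((bin(r & a).count("1") & 1) for r in rows) for a in A}
        return len(image) == 2 ** t

    m, t = 20, 4
    A = random.sample(range(2 ** m), 10 * t * 2 ** t)   # c_eps = 10
    print(sum(covers(A, m, t) for _ in range(100)) / 100)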
3 The largest bucket
3.1 Lower bound for the largest bucket with a large field
We start by showing why linear hashing over a large finite field is bad with respect to the expected
largest bucket size measure. This natural example shows that universality of the class is not enough
to assure small buckets. For a finite field F we prove the existence of a bad set S ⊆ F^2 of size
|S| ≤ |F| such that the expected largest bucket in S with respect to a random linear map
h : F^2 → F is big. We prove the results in Theorem 3 separately for quadratic and non-quadratic fields.
We start with an intuitive description of the constructions. Linear hashing of the plane collapses
all straight lines of a random direction. Thus, a bad set in the plane must contain many points on
at least one line in many different directions. It is not hard to come up with bad sets that contain
many points of many different lines, however the obvious constructions (subplane or grid) yield sets
where many of the "popular lines" tend to be parallel and thus they only cover a few directions.
This problem can be solved by a projective transformation: the transformed set has many popular
lines, but they are no longer parallel.
For the non-quadratic case, it is convenient to explicitly use the concept of the projective
plane over a field F. Recall that the projective plane P over F is defined as (F^3 ∖ {0})/∼,
where ∼ is the equivalence relation (x, y, z) ∼ (λx, λy, λz) for λ ≠ 0. The affine plane F^2 is
embedded in P by the one-to-one map (x, y) ↦ (x, y, 1). A line in P is given by an equation
ax + by + cz = 0; a projective line corresponds to a plane in F^3 containing the origin.
All projective lines are extensions (by one new point) of lines in the affine plane F^2, except for
the ideal line, given by {(x, y, 0)}. A projective transformation mapping the ideal line to
another projective line L is a map f~ : P → P obtained as the ∼-quotient of a nonsingular linear
transformation f : F^3 → F^3 mapping the plane corresponding to the ideal line into the plane corresponding
to L.
Projective geometry is useful for understanding the behavior of linear hash functions due to the
following fact which is easily verified: Picking a random non-trivial linear map F^2 → F as a hash
function and partitioning a subset S ⊆ F^2 into hash buckets accordingly, corresponds exactly to
picking a random point p on the ideal line and partitioning the points of S according to which line
through p they are on. This observation will be used explicitly in the proof of Theorem 9.
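In concrete terms: the buckets of h(x, y) = ax + by over F_p are the p parallel lines in the direction of the kernel of h, i.e., the lines through one fixed ideal point. A quick check of this correspondence (our own illustration):

    p, a, b = 7, 3, 5
    buckets = {}
    for x in range(p):
        for y in range(p):
            buckets.setdefault((a * x + b * y) % p, []).append((x, y))

    # any two points of one bucket differ by a kernel vector
    (x0, y0), (x1, y1) = buckets[0][:2]
    dx, dy = (x1 - x0) % p, (y1 - y0) % p
    print((a * dx + b * dy) % p == 0, len(buckets) == p)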
Theorem 8 Let F be a finite field with |F| being a perfect square. There exists a set S ⊆ F^2 of
size |S| = |F| such that for every linear map h : F^2 → F, h has a large bucket, i.e. there exists a
value y ∈ F with |h^{-1}(y) ∩ S| ≥ √|F|.
Proof. We have a finite field F_0 of which F is a quadratic extension. Let |F_0| = m, so |F| = m^2.
Let a be an arbitrary element in F ∖ F_0 and define

S = {(1/(a + x), y/(a + x)) : x, y ∈ F_0}.

Note that |S| = m^2; also, that S is the image of the subplane F_0^2 under the projective
transformation (x, y) ↦ (1/(a + x), y/(a + x)).
Fix A, B ∈ F and consider the function h defined by h(x, y) = Ax + By. We must
show that there is some C ∈ F such that |h^{-1}(C) ∩ S| ≥ m. If B = 0, then h maps all the m
elements of S with a common first coordinate 1/(a + x_0) to A/(a + x_0), as needed. Otherwise, we claim that there
is a C ∈ F such that both C/B and (aC - A)/B are in F_0. To see this observe that if g_1 and g_2 are two
distinct members of F_0, then ag_1 and ag_2 lie in distinct additive cosets of F_0 in F, since otherwise
their difference, a(g_1 - g_2), would have to be in F_0, contradicting the fact that a ∉ F_0. Thus, as g
ranges over all members of F_0, ag intersects m distinct additive cosets of F_0 in F, and hence aF_0
intersects all those cosets. In particular, there is some g ∈ F_0 so that ag ∈ A/B + F_0, implying
the assertion of the claim with C = Bg. For the above C, define y(x) = (C(a + x) - A)/B;
it follows that y(x) ∈ F_0 for every x ∈ F_0. We have now A · 1/(a + x) + B · y(x)/(a + x) = C,
showing that h maps
all the m elements of S of the form (1/(a + x), y(x)/(a + x)), x ∈ F_0, to C. □
Theorem 9 Let F be a finite field of size n. There exists a set S ⊆ F^2 of size |S| ≤ n such that for more
than half of the linear maps h : F^2 → F, h has a large bucket, i.e. there exists a value y ∈ F with
|h^{-1}(y) ∩ S| ≥ n^{1/3}/3 - 1.
Proof. First we construct a set S_0 ⊆ F^2 such that |S_0| ≤ n and there are n distinct lines
in the plane F^2 each containing at least m ≥ n^{1/3}/3 points of S_0.
Let us first consider the case when n is a prime, so F consists of the integers modulo n. We
set A = {1, ..., ⌊√n⌋} and consider the square grid S_0 = A × A. Clearly |S_0| ≤ n. It is well
known that each of the n most popular lines contains at least m ≥ n^{1/3}/3 points of S_0. This is
usually proved for the same grid in the Euclidean plane (see e.g. [PA95], pp. 178-179) but that
result implies the same for our grid in F^2.
Now let n = p^k and let F_0 be the subfield in F of p elements. Let x ∈ F be a primitive element;
then every element of F can be uniquely expressed as a polynomial of x of degree below k with
coefficients from F_0. Let k_1 = ⌊(k + 1)/3⌋ and k_2 = k - k_1,
and let A_1 = {f(x) : deg(f) < k_1} and A_2 = {f(x) : deg(f) < k_2},
where the polynomials f have coefficients from F_0. Finally we take
S_0 = A_1 × A_2, so |S_0| = n. For a ∈ A_1 and b ∈ A_2 we consider the line L_{a,b} = {(y, ay + b) :
y ∈ F}. Notice that there are n such lines and we have ay + b ∈ A_2 for every y ∈ A_1, since
deg(ay + b) ≤ max(2k_1 - 2, k_2 - 1) < k_2. Thus
we have n distinct lines each containing m = p^{k_1} points of S_0. We have m ≥ n^{1/3} as claimed
unless k ≡ 1 (mod 3). Notice that for k ≡ 2 (mod 3) our m is much higher than n^{1/3}. For the
bad case k ≡ 1 (mod 3) we apply the construction below instead.
Finally suppose n = p^k where p is a prime and k ≡ 1 (mod 3). To get our set S_0 in this case we
have to merge the two constructions above. Let F_0 be the p element subfield of F; then F_0 consists
of the integers modulo p. We set A = {1, ..., ⌊√p⌋} ⊆ F_0. Let k_1 = (k + 2)/3 and k_2 = (2k + 1)/3,
and let x ∈ F be a primitive element, so we can express any element of F uniquely as a polynomial
of x of degree less than k with coefficients from F_0. We set A_1 = {f(x) : deg(f) < k_1, f(0) ∈ A}
and A_2 = {f(x) : deg(f) < k_2, f(0) ∈ A}, where the polynomials f have coefficients from F_0. Finally
we set S_0 = A_1 × A_2. Let
a and b be polynomials with coefficients from F_0 with deg(a) < k_1 and deg(b) < k_2. Consider
the line L_{a,b} = {(y, a(x)y + b(x)) : y ∈ F}. We now compute the value of |L_{a,b} ∩ S_0|. Note that
a point (y, a(x)y + b(x)) of L_{a,b} is in S_0 if and only if y = f(x) for a polynomial f so that
f(x) ∈ A_1 and a(0)f(0) + b(0) ∈ A. The number of such polynomials f is
exactly p^{k_1 - 1} |L_{a(0),b(0)} ∩ (A × A)|. Thus, from knowing
that the p most popular lines in F_0^2 contain at least m_0 ≥ p^{1/3}/3 points from A × A we conclude
that there exist n distinct lines each containing at least p^{k_1 - 1} m_0 ≥ n^{1/3}/3 points of S_0, namely
the lines L_{a,b} for those choices of a and b for which L_{a(0),b(0)} is a popular line in F_0^2.
In all cases we have constructed our set S_0 ⊆ F^2 of size |S_0| ≤ n with n distinct popular lines
each containing at least m ≥ n^{1/3}/3 points of S_0. Let P be the projective plane containing F^2.
Out of the n^2 + n + 1 points in P every popular line covers n + 1. The ith popular line (1 ≤ i ≤ n)
can only have i - 1 intersections with earlier lines, thus it covers at least n + 2 - i points previously
uncovered. Therefore a total of at least Σ_{i=1}^{n} (n + 2 - i) = (n + 1)(n + 2)/2 - 1 points are
covered by popular lines. Simple
counting gives the existence of a line L in P not among the popular lines, such that more than
half of the points on L are covered by popular lines. Let f be a projective transformation taking
the ideal line L_0 to L. We define S = f^{-1}(S_0) ∩ F^2.
One linear hash function h : F^2 → F is constant zero (and thus all of S is a single bucket); for
the rest there is a point x_h ∈ L_0 such that h collapses the points of F^2 of each single line going
through x_h, as we observed at the beginning of the section. Furthermore, if the linear non-zero
map is picked at random, all such points x_h are equally likely. Thus, the statement of the theorem
follows, if we show that for at least half the points x_h on the ideal line, it holds that some line
through x_h intersects S in at least n^{1/3}/3 - 1 points. But some line through x_h intersects S in at
least n^{1/3}/3 - 1 points if and only if some line through f(x_h) intersects f(S) in at least n^{1/3}/3 - 1
(projective) points. For this, it is sufficient that some line through f(x_h) intersects S_0 in at least
(n^{1/3}/3 - 1) + 1 points (the +1 comes from the possibility of f(x_h) ∈ S_0), i.e., that some
line through f(x_h) is popular, in the sense we used above. But by definition of f, this is true for
at least half of the points x_h on the ideal line, and we are done. □
3.2 Upper bound for the largest bucket with a small field
Let us now recall and prove our main result.
For convenience here we speak about hashing n log n keys to n values. Also, we assume that n
is a power of 2.
Theorem 5: Let H be the class of linear transformations between two vector spaces over Z_2; then
L^n_{n log n}(H) = O(log n log log n).
This theorem implies Theorem 4.
We have to bound the probability of the event that many elements in the set S are mapped to
a single element in the range. Denote this bad event by E 1 . The overall idea is to present another
(less natural) event E_2 and show that the probability of E_2 is small, yet the probability of E_2 given
E_1 is big. Thus, the probability of E_1 must be small. We remark here that a somewhat similar line
of reasoning was used in the seminal paper of Vapnik and Chervonenkis [VC71].
For the proof we fix the domain to be D = Z_2^u, the range (the buckets) to be B = Z_2^{log n}, and a set
S ⊆ D of size n log n.
Let us choose an arbitrary ℓ ≥ log n and consider the space A = Z_2^ℓ. We construct the linear
transformation h : D → B through the intermediate range A in the following way. We choose
uniformly at random a linear transformation h_1 : D → A and uniformly at random an onto linear
transformation h_2 : A → B, and let h = h_2 ∘ h_1. Note that as mentioned in the proof of part
a) of Theorem 7 this yields an h which is uniformly chosen from among all linear transformations
from D to B.
Let us fix a threshold t. We define two events. E_1 is the existence of a bucket of size at least t:

E_1: There exists an element β ∈ B such that |{x ∈ S : h(x) = β}| > t.

We are going to limit the probability of E_1 through the seemingly unrelated event

E_2: There exists an element β ∈ B such that h_2^{-1}(β) ⊆ h_1(S).
Consider the distribution space in which h_1 and h_2 are uniformly chosen as above. We shall
show that:
Proposition 3.1 If d = 2^ℓ/(n log n) > 1, then Prob[E_2] ≤ d^{-log d - log log d}.
Proposition 3.2 If t > c_{1/2}(2^ℓ/n) log(2^ℓ/n) (with c_{1/2} from Theorem 7a)), then Prob[E_2 | E_1] ≥ 1/2.
From Propositions 3.1 and 3.2 we deduce that the probability of E_1 must be small:
Corollary 3.3 There is a constant C, so that for all r > 4 and every power of two n, the following
holds: If a subset S of size n log n of a vector space over Z_2 is hashed by a random linear
transformation to Z_2^{log n}, we have

Prob[maximum bucket size > rC log n log log n] ≤ 2(r/log r)^{-log(r/log r) - log log(r/log r)}.
Proof: Given r > 4, let ℓ = ⌊log n + log log n + log r - log log r + 1⌋ and let t = rC log n log log n.
Letting d = 2^ℓ/(n log n), we have d ≥ 2^{log n + log log n + log r - log log r}/(n log n) = r/log r >
1 and 2^ℓ/n ≤ 2^{log n + log log n + log r - log log r + 1}/n = 2 log n (r/log r), so

c_{1/2}(2^ℓ/n) log(2^ℓ/n) ≤ c_{1/2}(2 log n (r/log r))(1 + log log n + log r - log log r)
≤ c_{1/2} · 2 log n (r/log r) · 2 log log n · log r ≤ 4c_{1/2} · r log n log log n,

so the conditions of Propositions 3.1 and 3.2 are satisfied, and, combining their conclusions, we get

Prob[E_1] ≤ 2 Prob[E_2] ≤ 2d^{-log d - log log d}.

But the event E_1 is the event that the biggest bucket is bigger than rC log n log log n, and
since d ≥ r/log r, the statement of the corollary follows, by putting C = 4c_{1/2}. □
Proof of Proposition 3.1: Note first that an alternative way to describe E_2 is: h_2(A ∖ h_1(S)) ≠ B.
We will prove that Proposition 3.1 holds for any specific h_1, and thus it also holds for a randomly
chosen h_1. So fix h_1 and consider the distribution in which h_2 is chosen uniformly amongst all full
rank linear transformations from A to B.
We use part b) of Theorem 7 for the set A ∖ h_1(S) ⊆ A. Its density is clearly at least 1 - 1/d, i.e.,
δ ≤ 1/d. Thus the theorem gives Prob[E_2] ≤ (2^{-log n})^{ℓ - log n - log log n + log log(1/δ)} ≤
d^{-log d - log log d} as claimed. □
Proof of Proposition 3.2: Fix h for which E_1 holds, and fix any full rank h_2. We will show that
the probability of event E_2 is at least 1/2 even when these two are fixed and thus the conditional
probability is also at least 1/2.
Now since E_1 holds there is a subset S' ⊆ S of cardinality at least t mapped by h to a single
element β ∈ Z_2^{log n}. Fix this β and define D_0 = h^{-1}(β) and A_0 = h_2^{-1}(β). Consider the distribution
of h_1 satisfying h = h_2 ∘ h_1. When we restrict h_1 to D_0, we get that the distribution implied by
such h_1 is a uniform choice of an affine or linear map from D_0 into A_0 (we show this in Proposition
3.4 below). For event E_2 to hold it is enough to have A_0 ⊆ h_1(S). We will show that h_1(S') covers
all the points in A_0 with probability at least 1/2 and thus we get that event E_2 happens with
probability at least 1/2. Since h_2 is onto we have |A_0| = 2^ℓ/n. On the other hand, D_0 ∩ S has
cardinality at least t. By part a) of Theorem 7, the probability that a set of cardinality
t > c_{1/2}(2^ℓ/n) log(2^ℓ/n) mapped by a random linear transformation will cover a range of
cardinality 2^ℓ/n is at least 1/2.
(Note that Theorem 7, part a) clearly applies to a random affine transformation too.) □
At this point, we have proven Corollary 3.3. This limits the probability of large buckets with
linear hashing. It is straightforward to deduce Theorem 5 from that corollary:
Proof of Theorem 5: L^n_{n log n}(H) is the expectation of the distribution of the largest bucket size.
Corollary 3.3 limits the probability of the tail of this distribution, thus yielding the desired bound
on the expectation. The constant C is from Corollary 3.3; we set K = C log n log log n and compute

L^n_{n log n}(H) ≤ 4K + K ∫_4^∞ 2(r/log r)^{-log(r/log r) - log log(r/log r)} dr = O(K) = O(log n log log n). □

In order for the paper to be self-contained we include a proof of the simple statement about
random linear transformations used above.
Proposition 3.4 Let D, A and B be vector spaces over Z_2. Let h : D → B be an arbitrary linear
map, and let h_2 : A → B be an arbitrary onto linear map. Let β be any point in B and denote
D_0 = h^{-1}(β), A_0 = h_2^{-1}(β). When choosing a uniform linear map h_1 : D → A such that h = h_2 ∘ h_1 and
restricting the domain to D_0 we get a uniformly chosen linear map from D_0 to A_0 if β = 0, and a
uniformly chosen affine map from D_0 to A_0 otherwise.
Proof: Consider the kernel h^{-1}(0). Let us choose a complement space D_1 to h^{-1}(0) in D, and if
β ≠ 0 let us call x the unique vector in D_0 ∩ D_1.
We have D = h^{-1}(0) ⊕ D_1. A linear transformation h_1 : D → A is determined by its two restrictions
h' to h^{-1}(0) and h'' to D_1. Clearly the uniform random choice of h_1 corresponds to uniform
and independent choices for h' and h''. The restriction h = h_2 ∘ h_1 means that h'(h^{-1}(0)) ⊆ h_2^{-1}(0) and
that h_2 ∘ h'' is the restriction of h to D_1. Thus, after the restriction the random choices of h' and h''
are still independent. Note now that if β = 0 then the restriction of h_1 in question is exactly
h', a uniform linear map from D_0 = h^{-1}(0) to A_0 = h_2^{-1}(0). For β ≠ 0 note that the restriction
in question is again h', this time translated by the random value h''(x) ∈ A_0: every z ∈ D_0 can be
written as z = (z - x) + x with z - x ∈ h^{-1}(0), so h_1(z) = h'(z - x) + h''(x). □
4 Remarks and open questions
We have discussed the case of a very small field (size 2) and a very large field (size n). What
happens with intermediate sized fields? Some immediate rough generalizations of our bounds are
the following: If we hash an adversely chosen subset of F^m of size n = |F|^k by a randomly
chosen linear map, the expected size of the largest bucket is at most O((log n log log n)^{log |F|}), and
there is a lower bound that grows polynomially with |F|. Better
bounds should be possible.
Another question is which properties other well known hash families have. Examples of
the families we have in mind include: arithmetic over Z_p [CW79, FKS84] (with h_{a,b}(x) = ((ax +
b) mod p) mod n), integer multiplication [DHKP97, AHNR95] (with h_a(x) = (ax mod 2^w) div 2^{w-k}), and
Boolean convolution [MNT93] (with h_a(x) the convolution of a and x, projected to some subspace).
An example of a natural non-linear scheme for which the assertion of Theorem 6 fails is the
scheme that maps integers between 1 and p, for some large prime p, to integers between 0 and n - 1
by mapping x ∈ Z_p to ((ax + b) mod p) mod n, where a and b are two randomly chosen
elements of Z_p. For this scheme, there are primes p and choices of n and a subset S of cardinality
Ω(n log n log log log n) of Z_p, which is not mapped by the above mapping onto [0, n - 1] under any
choice of a and b.
To see this, let p be a prime satisfying p ≡ 3 (mod 4) and consider the set S
of all quadratic residues modulo p. Note that for every nonzero element a ∈ Z_p, the set aS (mod p)
is either the set of all quadratic residues or the set of all quadratic non-residues modulo p. The
main result of Graham and Ringrose [GR90] asserts that for infinitely many primes p, the smallest
quadratic nonresidue modulo p is at least Ω(log p log log log p) (this result holds for primes p ≡ 3
(mod 4) as well, as follows from the remark at the end of [GR90]). Since for such primes p,
-1 is a quadratic nonresidue, it follows that for the above S and for any choice of a, b ∈ Z_p,
the set aS + b (computed in Z_p) avoids intervals of length at least Ω(log p log log log p). Choosing
n = cp/(log p log log log p) for an appropriate (small) constant c, and defining S as above, we conclude
that S, whose cardinality is Ω(n log n log log log n), is not mapped onto [0, n - 1] under any choice of a
and b.
A final question is whether there exists a class H of size only 2^{O(log log |U| + log n)} and with L^n_n(H) =
O(log n / log log n). Note that linear maps over Z_2, even combined with collapsing the universe, use
O(log n (log log |U| + log n)) bits, while the simple scheme using higher degree polynomials uses
O((log log |U| + log n) log n / log log n) bits.
Acknowledgment
We thank Sanjeev Arora for helpful remarks.
--R
A fast and simple randomized parallel algorithm for the maximal independent set problem.
Sorting in linear time?
Universal classes of hash functions
Introduction to Algorithms
A reliable randomized algorithm for the closest-pair problem
Polynomial hash functions are reliable
Handbook of Algorithms and Data Structures
Lower bounds for least quadratic nonresidues
Analysis of a universal class of hash functions
Randomized and deterministic simulations of PRAMs by parallel machines with restricted granularity of parallel memories.
The computational complexity of universal hashing.
Combinatorial Geometry
On universal classes of fast high performance hash functions
On the uniform convergence of relative frequencies of events to their probabilities
--TR
Storing a Sparse Table with O(1) Worst Case Access Time
Randomized and deterministic simulations of PRAMs by parallel machines with restricted granularity of parallel memories
A fast and simple randomized parallel algorithm for the maximal independent set problem
Introduction to algorithms
The computational complexity of universal hashing
Dynamic Perfect Hashing
Sorting in linear time?
A reliable randomized algorithm for the closest-pair problem
Polynomial Hash Functions Are Reliable (Extended Abstract)
--CTR
Dahlia Malkhi , Moni Naor , David Ratajczak, Viceroy: a scalable and dynamic emulation of the butterfly, Proceedings of the twenty-first annual symposium on Principles of distributed computing, July 21-24, 2002, Monterey, California
Beate Bollig , Stephan Waack , Philipp Woelfel, Parity graph-driven read-once branching programs and an exponential lower bound for integer multiplication, Theoretical Computer Science, v.362 n.1, p.86-99, 11 October 2006
Beate Bollig , Philipp Woelfel, A read-once branching program lower bound of (2n/4) for integer multiplication using universal hashing, Proceedings of the thirty-third annual ACM symposium on Theory of computing, p.419-424, July 2001, Hersonissos, Greece | universal hashing;hashing via linear maps |
324266 | Secrecy by typing in security protocols. | We develop principles and rules for achieving secrecy properties in security protocols. Our approach is based on traditional classification techniques, and extends those techniques to handle concurrent processes that use shared-key cryptography. The rules have the form of typing rules for a basic concurrent language with cryptographic primitives, the spi calculus. They guarantee that, if a protocol typechecks, then it does not leak its secret inputs. | Introduction
Security is an elusive cocktail with many rare ingredients. For any given security
protocol, one may want properties of integrity, confidentiality, availability, various
forms of anonymity and non-repudiation, and more. Seldom does a protocol
achieve all the properties that its designers intended. Moreover, even when a
protocol is sound, it is often delicate to define all its important properties, and
difficult to prove them.
The tasks of protocol design and analysis can be simplified by having principles
that make it easier to achieve particular security objectives, in isolation,
and rules that help recognize when these objectives have been achieved. For ex-
ample, if we wish to obtain some availability properties and are concerned about
denial of service attacks, we may design our protocols in such a way that there
is an obvious, small bound on the amount of work that any principal will do in
response to any message. One could relax this bound for messages from trusted,
authenticated principals; but trust and authentication are not always correct, so
letting availability depend on them should not be done lightly.
In this paper, we develop informal principles and formal rules for achieving
secrecy properties in security protocols. These principles and rules are based
on traditional concepts of classification and information flow [Den82,Gas88], extended
to deal with concurrent processes that use shared-key cryptography. In
particular, in analyzing a protocol, we label each piece of data and each communication
channel as either secret or public. Secret data should not be sent on
public channels, and secret channels should not be made available indiscriminately
In our approach, encryption keys are pieces of data, and as such they are
labelled. Secret data can be made public-declassified-by encryption under a
secret key. However, this declassification cannot be done without some additional
precautions. For example, given a secret bit b and a secret key K, we cannot
simply publish b under K if we may also publish 0 or 1 under K: an attacker
could deduce the value of b by comparing ciphertexts. The rules of this paper
capture a sufficient set of simple precautions that permit this declassification.
The rules have the form of typing rules for a basic concurrent language, the
spi calculus [AG97a]; this calculus is an extension of the pi calculus [MPW92]
with shared-key cryptographic primitives. The purpose of these rules is rather
different from those of standard typing rules for related languages (such as those
of Pierce and Sangiorgi [PS96]). The rules guarantee that, if a protocol type-
checks, then it does not leak its secret inputs. This secrecy is obtained independently
of any other feature (or any flaw) of the protocol. For example, the rules
may guarantee that a protocol does not leak the contents of certain messages,
without concern for whether message replay is possible. The notion of leaking is
formalized in terms of testing equivalence [DH84,BN95]: roughly, a process P (x)
does not leak the input x if a second process Q cannot distinguish running in
parallel with P (M) from running in parallel with P (M 0 ), for every M and M 0 .
These typing rules are helpful, in that following them may lead to designs
that are clearer, more robust, and more likely to be sound. Furthermore, they
enable us to give simple proofs of properties that would be hard to establish from
first principles. However, the secrecy theorems that we obtain should be taken
with a grain of salt: they are dependent on the choice of a particular model, that
of the spi calculus. This model is fairly accurate and expressive, but does not
take into account issues of key length, for example.
The next section explains, informally, our approach for achieving secrecy. Section
3 is a review of the spi calculus. (The spi calculus presented is a generalization
of that defined originally [AG97a]; it includes polyadic constructs [Mil91].)
Section 4 provides the typing rules for the spi calculus, and Section 5 shows that
these rules can be applied to prevent undesirable flows of information. Section 6
illustrates the use of the rules in examples. Finally, Section 7 discusses some
conclusions.
Throughout this paper, we often opt for simplicity over generality, considering
this work only a first exploration of a promising approach. In particular, we take
a binary, Manichaean view of secrecy. According to this view, the world is divided
into system and attacker, and a secret is something that the attacker does not
have. A more sophisticated view of secrecy would distinguish various principals
within the system, and enable us to discuss which of these principals have a given
piece of data. In our binary view, however, we can vary the boundary between
system and attacker, with some of the same benefits.
Some Principles for Secrecy
In the security literature there are well-known methods for controlling flows of
information. Typically, these methods rely on putting objects and subjects into
security classes, and guaranteeing that no data flows from higher classes to lower
classes. In some of these methods, security classes are formalized as types, and
the control of flows of information relies on typing rules (e.g., [VSI96]).
We adapt some of that work to security protocols. This section describes
our approach informally, along with the main difficulties that it addresses. It is
probable that some of the basic observations of this section have already been
made, but they do not seem to be documented in the open literature.
2.1 On Rules in Distributed Systems
In a centralized system, an administrator that controls all the hardware in the
system may hope to control all information flows. If all communication is mediated
by the system hardware, the control of all information flows is plausible
at least in principle. In particular, the administrator may check user programs
before running them, statically, or may apply some dynamic checks.
In a distributed system, on the other hand, no single administration may
control all the hardware. No part of the system may be able to check that the
software of any other part is constructed according to a given set of rules. At best,
each principal can analyze the programs that it receives from other principals,
as well as all other messages.
Therefore, whatever rules we propose, they should be such that an attacker
satisfies them vacuously. We cannot expect to restrict the code that the attacker
runs. Our rules should constrain only the principals that want to protect their
secrets from the attacker.
2.2 Preliminaries on Keys and Channels
As mentioned in the introduction, this paper concerns shared-key protocols.
We write {M}K for the result of encrypting M with K, using a shared-key
cryptosystem such as DES [DES77]. With shared-key cryptography, secrecy can
be achieved by communication on public channels under secret keys.
In addition to public channels, on which anyone may communicate, we consider
channels with some built-in protection. We restrict attention to channels on
which the same principals can send and receive. On a single machine, channel
protection can be provided by an operating system that mediates communication
between user processes; in a distributed system, the protection can be
implemented cryptographically.
The ability to communicate on a channel is often determined by possession of
a capability, such as a password or a key. (In the pi calculus and the spi calculus,
the name for a channel is the capability for the channel.) A public channel is
one for which the capability is public; similarly, a secret channel is one for which
the capability is secret.
2.3 Classifying Data
We consider only three classes of data:
- Public data, which can be communicated to anyone,
- Secret data, which should not be leaked,
- Any data, that is, arbitrary data.
We use the symbols T and R to range over the classes Secret , Public, and Any .
We refer to Secret , Public, and Any as classes, levels, or types; the difference is
one of emphasis at most. It should be possible to generalize our rules to richer
classification structures, with more than three classes; but such a generalization
is not essential for our immediate purposes.
Encryption keys are data, so our classification scheme applies to them. Several
different levels can be associated with a key:
- its level as a piece of data,
- the levels of the data that it is used to encrypt,
- the levels of the resulting ciphertexts.
However, not all combinations are possible, as for example a public key should
not be used to turn secret data into public ciphertexts. It is simplest to retain
only one level for each key. Similarly, we associate a single classification with
each channel. We adopt the following principles:
The result of encrypting data with a public key has the same classification
as the data, while the result of encrypting data with a secret key
may be made public.
Only public data can be sent on public channels, while all kinds of data
may be sent on secret channels.
The relation T <: R holds if T equals R or if R is Any. If a piece of data has
level T and T <: R, then the piece of data has level R as well.
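These classification and flow rules are simple enough to execute. Here is a small sketch (our own illustration; the function names are not from the paper) of the three levels, the relation <:, and the rule that only public data may travel on public channels while secret channels may carry anything:

    from enum import Enum

    class Level(Enum):
        SECRET = "Secret"
        PUBLIC = "Public"
        ANY = "Any"

    def subtype(t: Level, r: Level) -> bool:
        # T <: R iff T equals R or R is Any
        return t == r or r == Level.ANY

    def may_send(data: Level, channel: Level) -> bool:
        if channel == Level.PUBLIC:
            return data == Level.PUBLIC   # only public data on public channels
        if channel == Level.SECRET:
            return True                   # secret channels carry all kinds
        return False                      # Any-level data is never used as a channel

    assert may_send(Level.SECRET, Level.SECRET)
    assert not may_send(Level.SECRET, Level.PUBLIC)
    assert subtype(Level.SECRET, Level.ANY)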
Because a piece of data of level Any could be of level Secret , it should not be
leaked. On the other hand, a piece of data of level Any could be of level Public,
so it cannot be used as a secret. For example, it cannot be used as a secret key
for encrypting secret data. Thus, if all we know about a piece of data is that
it has level Any , then we should protect it as if it had level Secret , but we can
exploit it only as if it had level Public. This piece of data is therefore not very
useful for constructing channel capabilities or encryption keys; we find it cleaner
to forbid these uses altogether.
Furthermore, we focus on the case where the classification Secret is given only
to data created fresh within the system that we are trying to protect. However,
other data can be of class Any , and then it must be protected as though it were
of class Secret .
2.4 Classifying Inputs
Whereas each principal may know the classification of the data that it creates,
the principal may not always know the classification of the data that it acquires
through communication with other principals. Data that arrives in clear on public
channels can be presumed to be public. On the other hand, data that arrives
with some protection may be either secret or public.
The participants of protocols typically know how to handle each of the fields
of the encrypted messages that they receive, as in the following example (inspired
by the Needham-Schroeder protocol [NS78]):
Message 1   A → S : A, B
Message 2   S → A : {I_A, {I_B}K_SB}K_SA
Message 3   A → B : {I_B}K_SB
In this protocol, the principals A and B share keys KSA and KSB with the
server S, respectively. The server provides some confidential information I A to
A and I B to B, in response to a request from A. When A receives a message
from the server, it decrypts it, retrieves I_A, and forwards {I_B}K_SB to B. It is
crucial that A be able to recognize when this action is appropriate. For example,
if a principal X plays the role of A and B in two concurrent runs of the protocol,
it should be able to recognize whether a message is an instance of Message 2 or
an instance of Message 3. If X mistakes an instance {I_X}K_SX of Message 3 for
an instance of Message 2, then X would forward part of I_X in clear.
As in this example, it is common for principals to deduce the sensitivity of
inputs from the expected kind of a message or other implicit information. Such
implicit information is often incorrect and often hard to analyze [AN96].
It is clearer to label explicitly each component of a message with a classifi-
cation, avoiding the dependence on implicit context. This labelling is important
only for messages on secret channels or under secret keys, as all other messages
can contain only public information.
Alternatively, we may adopt a standard format for all messages on secret
channels or under secret keys. The format should guarantee that there is a
standard way to attach classifications to parts of each message, avoiding again
the dependence on implicit context. In our rules, we adopt this scheme. Each
message on a secret channel has three components, the first of which has level
Secret , the second Any , and the third Public. Each message under a secret key
has those three components plus a confounder component, as discussed next.
Both of these schemes are implementations of the following principle:
Upon receipt of a message, it should be easy to decide which parts of
the contents are sensitive information, if any. This decision is least error-prone
when it does not depend on implicit context.
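For concreteness, here is a small sketch (ours; the type and field names are hypothetical) of the standard format for messages on secret channels, where the classification of each part is determined by its position rather than by implicit context.

    from typing import NamedTuple, Any as PyAny

    class SecretChannelMessage(NamedTuple):
        secret_part: PyAny   # first component, level Secret
        any_part: PyAny      # second component, level Any
        public_part: PyAny   # third component, level Public

    def classify(msg):
        # the receiver deduces each level from the position alone
        return {"Secret": msg.secret_part, "Any": msg.any_part,
                "Public": msg.public_part}

    print(classify(SecretChannelMessage(b"session key", b"opaque blob", b"hello")))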
2.5 Confounders
As noted in the introduction, given a secret bit b and a secret key K, we cannot
simply publish b under K if we may also publish 0 or 1 under K: an attacker
could deduce the value of b by comparing ciphertexts. On the other hand, we can
create a fresh value n, then publish the concatenation of b and n under K. A value
such as n is sometimes called a confounder. The purpose of n is to guarantee that
the resulting ciphertext differs from all previous ones. The resulting ciphertext
will differ from all future ones too if each ciphertext includes a fresh confounder,
always in the same position. In general (unless one is concerned about encrypting
known plaintext), confounders need not be kept secret.
Confounders are not needed if encryption is history-dependent, so a different
transformation is applied to each message. In particular, confounders are not
needed when encryption is done with a stream cipher or with a block cipher in
an appropriate chaining mode. The remarks of this section are not intended to
apply to protocols built on such algorithms.
Independently of the choice of cipher, confounders are not needed in protocols
where all ciphertexts generated are guaranteed to be different. Unfortunately, it
is not always clear whether this guarantee is offered.
As an example, we consider a simple protocol where a principal A sends
a message M to a principal B under a shared key KAB . Before A sends the
message, B provides a challenge NB , fresh for each message; the challenge serves
as a proof of freshness, and protects against replays.
We may reason that all the ciphertexts from A are different, since each of them
includes a fresh challenge. However, this reasoning is incorrect. An attacker C
could provide a challenge instead of B, and A would reply without noticing that
the challenge did not come from B. The attacker may pick the same challenge NC
twice in a row. In that case, the ciphertexts with which A responds, {M,NC}KAB
and {M′,NC}KAB, are identical if and only if the cleartexts M and M′ are
identical. Thus, C can find out whether A is sending two identical cleartexts, even
without knowing the key KAB . Thus, the protocol is leaking some information.
In order to prevent this small leak, A should create a confounder NA for each
encryption. The modified protocol is:

Message 1  B → A : NB
Message 2  A → B : {M, NA, NB}KAB

This protocol is analyzed in [AG97c], where it is proved that the protocol guarantees
the secrecy of M .
It is prudent to adopt the following principle:
If each encrypted message of a protocol includes a freshly generated
confounder in a standard position, then the protocol will not generate
the same ciphertext more than once. Confounders should be used unless
it is obvious that they are not needed.
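The point of this section can be made concrete with a toy sketch (ours; the "cipher" is a deterministic stand-in built from a hash, purely for illustration): without a confounder, equal cleartexts under the same key yield equal ciphertexts, while a fresh confounder in a standard position makes every ciphertext distinct.

    import hashlib, os

    def toy_encrypt(key, payload):
        # deterministic stand-in for encryption; real protocols would use a cipher
        return hashlib.sha256(key + payload).digest()

    key, m = os.urandom(16), b"attack at dawn"

    # without confounders, repeated messages are linkable by comparison
    assert toy_encrypt(key, m) == toy_encrypt(key, m)

    def encrypt_with_confounder(key, payload):
        confounder = os.urandom(16)   # fresh per encryption; need not be secret
        return toy_encrypt(key, payload + confounder)

    # with confounders, even identical cleartexts produce different ciphertexts
    assert encrypt_with_confounder(key, m) != encrypt_with_confounder(key, m)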
2.6 Implicit Flows
A system may reveal something about one of its parameters even if this parameter
never appears explicitly in a message. For example, the system may send
different cleartext messages depending on the value of the parameter. An attacker
could deduce something about the parameter from patterns of communication.
Such a leak of information is sometimes called an implicit flow.
It is of course important to restrict implicit flows. For this purpose, we may
forbid any comparison that involves pieces of sensitive data and that could be
followed by actions that would reveal the result of the comparison.
It is also important not to push this restriction to uncomfortable extremes.
Many protocols exhibit implicit flows in some form, usually without severe undesirable
effects. As an example, we consider again this protocol:
Message 2  S → A : {IA, {IB}KSB}KSA
Message 3  A → B : {IB}KSB
An attacker can send an arbitrary message to A, instead of Message 2. On receipt
of this message, A performs a test that involves the secret KSA , and branches
on the result of this test. The reaction of A depends visibly on whether the
message is a ciphertext under KSA . One could regard this as an implicit flow,
but perhaps one of little importance because the chance that an independently
created message will be under KSA is negligible. We can allow this implicit flow;
it is harmless in our model.
Our (tentative) policy on implicit flows is summarized in the following principle:
Implicit flows of information should be prevented, except perhaps when
the likelihood of the implicit flow is no greater than the likelihood that
an attacker will guess the information.
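The following schematic fragment (ours, with invented names) shows the kind of implicit flow at issue: the secret never appears in any message, yet an observer of the communication pattern learns it.

    def vulnerable(secret_bit, send):
        if secret_bit:      # a comparison involving sensitive data ...
            send("ping")    # ... followed by an observable action

    observed = []
    vulnerable(1, observed.append)
    # the attacker deduces the bit from whether anything was sent at all
    print("deduced bit =", 1 if observed else 0)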
2.7 Further Principles
The discussion of this section is not comprehensive. In practice, several additional
warnings and techniques are important. A few of these are:
- It is often hard not to leak the size of a secret. The use of padding in encrypted
messages can help in this respect.
- It is prudent to minimize the benefit that an attacker may derive from discovering
any one secret. In particular, the same key should not be used to
protect a great volume of sensitive traffic, because then even a brute-force
attack on the key may be profitable.
- Weak secrets, such as passwords, should be protected from brute-force attacks
(see for example [Sch96]).
Undoubtedly there are more. However, the discussion of this section suffices as
background for our rules and theorems.
3 The Untyped, Polyadic Spi Calculus
This section defines the version of the spi calculus that serves as the setting for
our formal work. The main novelty with respect to the earlier versions of the spi
calculus is the introduction of polyadic forms: each input, output, encryption,
and decryption operation applies to an arbitrary number of pieces of data, rather
than to a single piece. This novelty is important for the typing rules of the next
section. However, this novelty is not substantial, as in particular it does not
affect the expressiveness of the spi calculus.
Therefore, our presentation is mostly a review. Most of this material is derived
from the earlier presentations of the spi calculus [AG97b]; it includes ideas
common in the pi-calculus literature.
3.1
We assume an infinite set of names and an infinite set of variables. We let m,
n, p, q, and r range over names, and let w, x, y, and z range over variables. We
write P[M/x] for the outcome of replacing each free occurrence of x in P with
M, and identify expressions up to renaming of bound variables and names.
The set of terms is defined by the grammar:

    L, M, N ::=
        n                              name
        (M, N)                         pair
        0                              zero
        suc(M)                         successor
        x                              variable
        {M1,...,Mk}N                   encryption (k ≥ 0)

The set of processes is defined by the grammar:

    P, Q, R ::=
        M⟨N1,...,Nk⟩.P                 output
        M(x1,...,xk).P                 input
        P | Q                          composition
        (νn)P                          restriction
        !P                             replication
        [M is N] P                     match
        0                              nil
        let (x, y) = M in P            pair splitting
        case M of 0 : P suc(x) : Q     integer case
        case L of {x1,...,xk}N in P    decryption

(The name n and the variables x, y, x1, ..., xk are bound in these processes.)
Most of these constructs should be familiar from earlier process algebras;
see [AG97a] for a review, and see below for an operational semantics. The informal
semantics of some of the constructs is as follows.
- The term {M1,...,Mk}N represents the ciphertext obtained by encrypting
M1, ..., Mk under the key N.
- The process case L of {x1,...,xk}N in P attempts to decrypt L with
the key N; if L has the form {M1,...,Mk}N, then the process behaves
as P[M1/x1]...[Mk/xk], and otherwise the process is stuck.
- The process let (x, y) = M in P behaves as P[N/x][L/y] if M is a pair
(N, L), and it is stuck if M is not a pair.
- The process case M of 0 : P suc(x) : Q behaves as P if M is 0, as Q[N/x]
if M is suc(N) for some N, and otherwise is stuck.
- The process [M is N] P behaves as P if M and N are equal, and otherwise
is stuck.
- The process M⟨N1,...,Nk⟩.P sends N1, ..., Nk on M and then behaves
as P; the output happens only if M is a name and there is another process
ready to receive k inputs on M. We use M⟨N1,...,Nk⟩ as an abbreviation for
M⟨N1,...,Nk⟩.0, a process that may output N1, ..., Nk
on M and then stop.
- The process M(x1,...,xk).P is ready to receive k inputs N1, ..., Nk on M
and then to behave as P[N1/x1]...[Nk/xk].
- The process !P behaves as infinitely many copies of P running in parallel.
- The process (νn)P makes a new name n and then behaves as P.
The polyadic notations (input, output, encryption, and decryption for k ≠ 1)
are not necessary for expressiveness. They are definable from the corresponding
unary notations; for example, we can set:

    case L of {x1,...,xk}N in P  ≜  case L of {y}N in let (x1,...,xk) = y in P

where the variable y is fresh. However, as mentioned above, the polyadic constructs
are useful for typing. They also introduce a typing difficulty, when arities do
not match, as in case {M1, M2}N of {x1, x2, x3}N in P. This typing difficulty
could be overcome with an appropriate system of sorts.
In addition to the polyadic notations, we use the following standard abbreviations
for any k ≥ 2:

    (N1, N2, ..., Nk)  ≜  (N1, (N2, ..., Nk))
    let (x1, x2, ..., xk) = M in P  ≜  let (x1, y) = M in let (x2, ..., xk) = y in P

where the variable y is fresh.
We write fn(M) and fn(P ) for the sets of names free in term M and process
respectively, and write fv (M) and fv (P ) for the sets of variables free in M
and P respectively. A term or process is closed if it has no free variables.
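As a companion to the grammar, the following Python sketch (ours, with invented class names; it is not part of the calculus) represents terms and implements the substitution M[N/x] used throughout this section.

    from dataclasses import dataclass
    from typing import Tuple

    class Term: pass

    @dataclass(frozen=True)
    class Name(Term): ident: str

    @dataclass(frozen=True)
    class Var(Term): ident: str

    @dataclass(frozen=True)
    class Pair(Term): left: Term; right: Term

    @dataclass(frozen=True)
    class Zero(Term): pass

    @dataclass(frozen=True)
    class Suc(Term): body: Term

    @dataclass(frozen=True)
    class Enc(Term):                      # {M1,...,Mk}N
        contents: Tuple[Term, ...]
        key: Term

    def subst(t, x, m):
        # t[m/x]: replace free occurrences of variable x in term t with m
        if isinstance(t, Var):
            return m if t.ident == x else t
        if isinstance(t, Pair):
            return Pair(subst(t.left, x, m), subst(t.right, x, m))
        if isinstance(t, Suc):
            return Suc(subst(t.body, x, m))
        if isinstance(t, Enc):
            return Enc(tuple(subst(c, x, m) for c in t.contents), subst(t.key, x, m))
        return t                          # names and 0 contain no variables

    # ({x, 0}K)[K/x] yields {K, 0}K:
    print(subst(Enc((Var("x"), Zero()), Name("K")), "x", Name("K")))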
3.2 Commitment
As an operational semantics, we rely on a commitment relation. The definition of
commitment depends on some new syntactic forms, abstractions and concretions.
An abstraction is an expression of the form (x1,...,xk)P, where x1, ...,
xk are bound variables, and P is a process. A concretion is an expression of the
form (ν m1,...,ml)⟨M1,...,Mk⟩P, where M1, ..., Mk are terms, P
is a process, and the names m1, ..., ml are bound in M1, ..., Mk and P. An
agent is an abstraction, a process, or a concretion. We use the metavariables A
and B to stand for arbitrary agents.
We extend the restriction and composition operators to abstractions:

    (νm)((x1,...,xk)P)  ≜  (x1,...,xk)(νm)P
    R | ((x1,...,xk)P)  ≜  (x1,...,xk)(R | P)

assuming that x1, ..., xk are not in fv(R), and to concretions:

    (νm)((ν~n)⟨~M⟩Q)  ≜  (νm,~n)⟨~M⟩Q, if m occurs free in ~M
    (νm)((ν~n)⟨~M⟩Q)  ≜  (ν~n)⟨~M⟩(νm)Q, otherwise
    R | ((ν~n)⟨~M⟩Q)   ≜  (ν~n)⟨~M⟩(R | Q)

assuming that m ∉ {~n} and {~n} ∩ fn(R) = ∅. We define the dual composition
A | R symmetrically. (The definition of (νm)((ν~n)⟨~M⟩Q) is slightly different from
the original one [AG97b]. The change is crucial for Lemma 9, below.)
If F is the abstraction (x1,...,xk)P and C is the concretion (ν n1,...,nl)⟨M1,...,Mk⟩Q,
where the names n1, ..., nl are not free in P,
we define the processes F@C and C@F as follows:

    F@C  ≜  (ν n1,...,nl)(P[M1/x1]...[Mk/xk] | Q)
    C@F  ≜  (ν n1,...,nl)(Q | P[M1/x1]...[Mk/xk])
The reduction relation > is the least relation on closed processes that satisfies
the following axioms:

    (Red Repl)     !P  >  P | !P
    (Red Match)    [M is M] P  >  P
    (Red Let)      let (x, y) = (M, N) in P  >  P[M/x][N/y]
    (Red Zero)     case 0 of 0 : P suc(x) : Q  >  P
    (Red Suc)      case suc(M) of 0 : P suc(x) : Q  >  Q[M/x]
    (Red Decrypt)  case {M1,...,Mk}N of {x1,...,xk}N in P  >  P[M1/x1]...[Mk/xk]
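To make the axioms concrete, here is a toy sketch (ours) of (Red Match) and (Red Decrypt) over an ad hoc representation: a ciphertext is a ("enc", contents, key) tuple and the residual process is modelled by a callback; none of this notation comes from the paper.

    def red_match(m, n, p, env):
        # (Red Match): proceed only when the compared terms are equal
        return p(env) if m == n else "stuck"

    def red_decrypt(ciphertext, key, variables, p, env):
        tag, contents, used_key = ciphertext
        if tag == "enc" and used_key == key and len(contents) == len(variables):
            # bind each decrypted component to the corresponding variable
            return p({**env, **dict(zip(variables, contents))})
        return "stuck"                    # wrong key or arity: the process is stuck

    c = ("enc", ("payload", "nonce"), "K")
    print(red_decrypt(c, "K", ("x", "y"), lambda e: e["x"], {}))   # -> payload
    print(red_decrypt(c, "K2", ("x", "y"), lambda e: e["x"], {}))  # -> stuck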
A barb is a name m (representing input) or a co-name m̄ (representing out-
put). An action is a barb or the distinguished silent action τ. The commitment
relation is written P −α→ A, where P is a closed process, α is an action, and A
is a closed agent. It is defined by the rules:

    (Comm Out)      m⟨M1,...,Mk⟩.P  −m̄→  ⟨M1,...,Mk⟩P

    (Comm In)       m(x1,...,xk).P  −m→  (x1,...,xk)P

    (Comm Inter 1)  if P −m→ F and Q −m̄→ C, then P | Q −τ→ F@C

    (Comm Inter 2)  if P −m̄→ C and Q −m→ F, then P | Q −τ→ C@F

    (Comm Par 1)    if P −α→ A, then P | Q −α→ A | Q

    (Comm Par 2)    if Q −α→ A, then P | Q −α→ P | A

    (Comm Res)      if P −α→ A and α ∉ {m, m̄}, then (νm)P −α→ (νm)A

    (Comm Red)      if P > Q and Q −α→ A, then P −α→ A
3.3 Testing Equivalence
A test is a pair (R, β) consisting of a closed process R and a barb β. We say
that P passes a test (R, β) if and only if

    (P | R) −τ→ Q1 −τ→ ... −τ→ Qn −β→ A

for some n ≥ 0, some processes Q1, ..., Qn, and some agent A. We obtain a
testing preorder ⊑ and a testing equivalence ≃ on closed processes:

    P ⊑ Q  ≜  for any test (R, β), if P passes (R, β) then Q passes (R, β)
    P ≃ Q  ≜  P ⊑ Q and Q ⊑ P

A strict barbed simulation is a binary relation S on closed processes such that
P S Q implies:
(1) for every barb β, if P −β→ A then Q −β→ B for some B;
(2) if P −τ→ P′ then there exists Q′ such that Q −τ→ Q′ and P′ S Q′.
(These requirements are somewhat stronger than those for barbed simulations
[AG97b].) A strict barbed bisimulation is a relation S such that both S and S⁻¹
are strict barbed simulations.
The following lemma provides a method for proving testing equivalence:
Lemma 1. If for every closed process R there exists a strict barbed bisimulation
S such that (P | R) S (Q | R), then P ≃ Q.
Proof. The lemma is a consequence of earlier results [AG97b], but we give a
simple direct proof, since there is one. We argue that if P passes a test (R, β)
then so does Q, assuming that there exists a strict barbed bisimulation S such
that (P | R) S (Q | R). (Symmetrically, if Q passes a test (R, β) then so does P.)
If P passes (R, β), then

    (P | R) −τ→ R1 −τ→ ... −τ→ Rn −β→ A

for some agent A and processes R1, ..., Rn, for some n ≥ 0. By the definition
of strict barbed simulation, there exist R′1, ..., R′n and A′ such that

    (Q | R) −τ→ R′1 −τ→ ... −τ→ R′n −β→ A′

with Ri S R′i for each i. Therefore, Q passes (R, β). □
4 The Typing System
This section describes our rules for controlling information flow in the spi cal-
culus. They are based on the ideas of Section 2. While there are several ways of
formalizing those ideas, we embody them in a typing system for the spi calculus.
This typing system is simple enough that it could be enforced statically. Our
main results for this typing system are in Section 5.
The typing system gives rules for three kinds of assertions (or judgments):

    ⊢ E well-formed    means that environment E is well-formed.
    E ⊢ M : T          means that term M is of class T in E.
    E ⊢ P              means that process P typechecks in E.
4.1 Environments
An environment is a list of distinct names and variables with associated levels.
In addition, each name n has an associated term, of the form {M1,...,Mk}N
for some k ≥ 0. Intuitively, this association means that the name n may be used
as a confounder only in the term {M1,...,Mk}N. We
write E, x: T for the result of extending E with the variable x at level T, and
E, n: T ↦ {M1,...,Mk}N for the result of extending E with the name n at level T
and term {M1,...,Mk}N. We write E, n: T as a shorthand in which the associated
term is chosen arbitrarily, when n is not needed as a confounder.
The set of names and variables declared in an environment E is its domain;
we write it dom(E).
The rules for environments are:

    (Environment Empty)

        -------------------
        ⊢ ∅ well-formed

    (Environment Variable)

        ⊢ E well-formed    x ∉ dom(E)
        -------------------------------
        ⊢ E, x: T well-formed

    (Environment Name)

        ⊢ E well-formed    n ∉ dom(E)
        fv(M1) ⊆ dom(E)  ...  fv(Mk) ⊆ dom(E)    fv(N) ⊆ dom(E)
        ----------------------------------------------------------
        ⊢ E, n: T ↦ {M1,...,Mk}N well-formed

Collectively, these rules enable us to form environments as lists of variables
and names with associated levels, and in addition with terms attached to the
names. The empty list is written ∅ in the rule (Environment Empty).
In the rule (Environment Name), the hypotheses fv(M1) ⊆ dom(E), ..., fv(Mk) ⊆ dom(E)
are included so that, if any variable occurs in M1, ..., Mk, then it is declared
in E. This restriction is important because, without proper care, a variable
could be instantiated in two different ways, so that the same confounder could
be used for two different messages; this double use would defeat the purpose of
confounders.
4.2 Terms
The rules for terms are:

    (Level Subsumption)

        E ⊢ M : T    T <: R
        --------------------
        E ⊢ M : R

    (Level Variable)

        ⊢ E′, x: T, E″ well-formed
        ---------------------------
        E′, x: T, E″ ⊢ x : T

    (Level Name)

        ⊢ E′, n: T ↦ L, E″ well-formed
        -------------------------------
        E′, n: T ↦ L, E″ ⊢ n : T

    (Level Zero)

        ⊢ E well-formed
        ----------------
        E ⊢ 0 : Public

    (Level Successor)

        E ⊢ M : T
        ------------------
        E ⊢ suc(M) : T

    (Level Pair)

        E ⊢ M : T    E ⊢ N : T
        ------------------------
        E ⊢ (M, N) : T

    (Level Encryption Public)

        E ⊢ M1 : T  ...  E ⊢ Mk : T    E ⊢ N : Public
        -----------------------------------------------
        E ⊢ {M1,...,Mk}N : T

    (Level Encryption Secret)

        E ⊢ M1 : Secret    E ⊢ M2 : Any    E ⊢ M3 : Public
        E ⊢ N : Secret    n: T ↦ {M1, M2, M3}N appears in E
        ----------------------------------------------------
        E ⊢ {M1, M2, M3, n}N : Public
The rule (Level Subsumption) says that a term of level Public or Secret has
level Any as well.
The rules (Level Variable) and (Level Name) enable us to extract the levels
of names and variables from an environment.
The rule (Level Zero) says that 0 is of level Public. The rule (Level Successor)
says that adding one preserves the level of a piece of data. Therefore, the terms
0, suc(0), suc(suc(0)), . are all of level Public. However, a term of the form
suc(x) may be of level Secret .
The rule (Level Pair) says that the level of a pair is the level of its components.
Both components must have the same level; when we pair a term of level Public
and one of level Secret , we need to regard them both as having level Any . Thus,
the rule (Level Pair) loses a little bit of typing information; it would be interesting
to explore a richer, more "structural" typing system that would avoid this loss.
The rule (Level Encryption Public) says that k pieces of data of the same level
T can be encrypted under a key of level Public, with a resulting ciphertext of level
T . The rule (Level Encryption Secret) imposes more restrictions for encryption
under keys of level Secret , because the resulting ciphertext is declassified to level
Public. These restrictions enforce a particular format for the levels of the contents
and the use of a confounder, as explained in Section 2. One could relax this rule
somewhat, considering also the case where the resulting ciphertext is given a
level other than Public; the present rule strikes a reasonable balance between
simplicity and flexibility. Finally, note that there is no rule for encryption for
the case where N is a term of level Any . If N is a term of level Any , and it is
not known whether it is of level Public or Secret , then N cannot be used as a
key.
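A schematic checker (ours, and deliberately simplified) can summarize these term rules: 0 is Public, suc preserves levels, a pair's components are coerced to a common level via subsumption, and encryption under a Public key preserves the level of the contents. The stricter (Level Encryption Secret) rule, with its fixed format and confounder, is omitted from this sketch, and the tuple-based term encoding is our own invention.

    from functools import reduce

    def join(t, r):
        # least common level: equal levels stay, mixed levels fall back to Any
        return t if t == r else "Any"

    def level(env, term):
        if term == 0:
            return "Public"                       # (Level Zero)
        kind = term[0]
        if kind in ("name", "var"):
            return env[term[1]]                   # (Level Name), (Level Variable)
        if kind == "suc":
            return level(env, term[1])            # (Level Successor)
        if kind == "pair":
            return join(level(env, term[1]), level(env, term[2]))   # (Level Pair)
        if kind == "enc":
            contents, key = term[1], term[2]
            assert level(env, key) == "Public", "Secret-key rule not modelled here"
            return reduce(join, (level(env, c) for c in contents))  # (Level Encryption Public)
        raise ValueError(term)

    env = {"K": "Public", "x": "Secret"}
    print(level(env, ("enc", (("var", "x"), ("suc", 0)), ("name", "K"))))  # -> Any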
4.3 Processes
The rules for processes are:

    (Level Output Public)

        E ⊢ M : Public    E ⊢ N1 : Public  ...  E ⊢ Nk : Public    E ⊢ P
        ------------------------------------------------------------------
        E ⊢ M⟨N1,...,Nk⟩.P

    (Level Output Secret)

        E ⊢ M : Secret    E ⊢ N1 : Secret    E ⊢ N2 : Any    E ⊢ N3 : Public    E ⊢ P
        --------------------------------------------------------------------------------
        E ⊢ M⟨N1, N2, N3⟩.P

    (Level Input Public)

        E ⊢ M : Public    E, x1: Public, ..., xk: Public ⊢ P
        ------------------------------------------------------
        E ⊢ M(x1,...,xk).P

    (Level Input Secret)

        E ⊢ M : Secret    E, x1: Secret, x2: Any, x3: Public ⊢ P
        ----------------------------------------------------------
        E ⊢ M(x1, x2, x3).P

    (Level Nil)

        ⊢ E well-formed
        ----------------
        E ⊢ 0

    (Level Parallel)

        E ⊢ P    E ⊢ Q
        ----------------
        E ⊢ P | Q

    (Level Replication)

        E ⊢ P
        --------
        E ⊢ !P

    (Level Restriction)

        E, n: T ↦ L ⊢ P
        -----------------
        E ⊢ (νn)P

    (Level Match) for T, T′ ∈ {Public, Secret}

        E ⊢ M : T    E ⊢ N : T′    E ⊢ P
        ----------------------------------
        E ⊢ [M is N] P

    (Level Pair Splitting) for T ∈ {Public, Secret}

        E ⊢ M : T    E, x: T, y: T ⊢ P
        --------------------------------
        E ⊢ let (x, y) = M in P

    (Level Integer Case) for T ∈ {Public, Secret}

        E ⊢ M : T    E ⊢ P    E, x: T ⊢ Q
        ------------------------------------
        E ⊢ case M of 0 : P suc(x) : Q

    (Level Decryption Public) for T ∈ {Public, Secret}

        E ⊢ L : T    E ⊢ N : Public    E, x1: T, ..., xk: T ⊢ P
        ----------------------------------------------------------
        E ⊢ case L of {x1,...,xk}N in P

    (Level Decryption Secret) for T ∈ {Public, Secret}

        E ⊢ L : T    E ⊢ N : Secret    E, x1: Secret, x2: Any, x3: Public, x4: Any ⊢ P
        --------------------------------------------------------------------------------
        E ⊢ case L of {x1, x2, x3, x4}N in P
There are two rules for output and two rules for input. The rule (Level
Output Public) says that terms of level Public may be sent on a channel of
level Public. The rule (Level Output Secret) says that terms of all levels may be
sent on a channel of level Secret, provided this is done according to the format
described in Section 2. The two rules for input, (Level Input Public) and (Level
Input Secret), match these rules for output. In (Level Input Public) all inputs
are assumed to be of level Public, while in (Level Input Secret) the levels of the
inputs are deduced from their position, as allowed by the format of messages on
channels of level Secret . If M is a term of level Any , and it is not known whether
it is of level Public or Secret , then M cannot be used as a channel.
The rules for nil, for parallel composition, for replication, and for restriction
are routine. In the rule (Level Restriction), the name n being bound is associated
with an arbitrary term L for which it can be used as a confounder.
The rule (Level Match) enables us to compare any two terms of levels Public
or Secret . Terms of level Any are excluded in order to prevent implicit flows, as
discussed in Section 2. It may be a little surprising that terms of level Secret are
allowed in this rule, because this may seem to permit an implicit flow. However,
the generality of the rule (Level Match) does not present an obstacle to our
results.
The rule (Level Pair Splitting) enables us to try to break a term of level
Public or Secret into two components, each assumed to be of the same level as
the original term. The case where the original term is known only to be of level
Any is disallowed; if it were allowed, this rule would permit leaking whether the
term is in fact a pair.
Similarly, the rule (Level Integer Case) enables us to examine whether a term
is 0 or a successor term, and to branch on the result. In the successor case,
the variable x represents the predecessor of the term being examined, which
is assumed to be of the same level as the term. As in (Level Pair Splitting),
the term should not be of level Any , but it may be of level Secret . If names
were numbers, then repeated applications of the rule (Level Integer Case) would
enable us to publish a secret key in unary. Formally, this leak cannot happen
because names are not numbers. The practical meaning of this small formal
miracle is debatable; it may suggest that a model more concrete than the spi
calculus would be worth investigating.
Finally, there are two rules for decryption. The rule (Level Decryption Public)
handles the case where the decryption key is of level Public, while the rule (Level
Decryption Secret) handles the case where the decryption key is of level Secret .
These rules are analogous to the corresponding rules for input. There is no rule
for the case of a key of level Any .
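As a small illustration (ours, with invented names) of how these side conditions act, the guard below mirrors the restriction in (Level Match): operands of level Any are rejected before any comparison, which is what blocks the implicit flows of Section 2.6.

    def check_match(level_m, level_n):
        # (Level Match) requires both operands at level Public or Secret
        for t in (level_m, level_n):
            if t not in ("Public", "Secret"):
                raise TypeError("[M is N] P: operands of level Any are rejected")

    check_match("Secret", "Public")      # allowed
    try:
        check_match("Any", "Public")     # rejected
    except TypeError as e:
        print(e)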
5 What Typing Guarantees
The goal of this section is to prove that if a process typechecks then it does not
leak the values of parameters of level Any . More precisely, our main theorem
says that if only variables of level Any and only names of level Public are in the
domain of the environment E, if σ and σ′ are two substitutions of values for the
variables in E, and if P typechecks (that is, E ⊢ P can be proved), then Pσ
and Pσ′ are testing equivalent. This conclusion means that an observer cannot
distinguish Pσ and Pσ′, so it cannot detect the difference in the values for the
variables.
In order to prove this result, we develop a number of propositions and lemmas
that analyze the typing system and characterize the possible behaviors of
processes that typecheck.
5.1 Typing for Concretions and Abstractions
The first step in our proofs is to extend the typing rules to concretions and
abstractions:
(Level Concretion Public)
(Level Concretion Secret)
(Level Abstraction Public)
(Level Abstraction Secret)
These rules should be reminiscent of corresponding rules for output and input.
5.2 Auxiliary Propositions
Next we obtain several auxiliary results. The first of them, Proposition 2, is a
formal counterpart of the discussion of Section 2.1: this proposition shows that,
given a suitable environment, a closed process P that we may construe as an
attacker always typechecks.
Proposition 2. Assume that ⊢ E well-formed, that dom(E) does not contain
any variables, and that all the names in dom(E) are of level Public.
If M is a closed term and fn(M) ⊆ dom(E), then E ⊢ M : Public.
If P is a closed process and fn(P) ⊆ dom(E), then E ⊢ P.
Proof. We prove a more general property, allowing variables to occur in E, M,
and P. We consider an environment E such that ⊢ E well-formed, where the
levels of names and variables are all Public. We prove that:
If M is a term with fn(M) ∪ fv(M) ⊆ dom(E), then E ⊢ M : Public.
If P is a process with fn(P) ∪ fv(P) ⊆ dom(E), then E ⊢ P.
The former of these facts is obtained by a direct induction on the structure of
M (using (Level Encryption Public) for terms of the form {M1,...,Mk}N). The
latter of these facts is then obtained by a direct induction on the structure of
P (using (Level Output Public), (Level Input Public), and (Level Decryption
Public)). □
Proposition 3 is fairly standard; it says that anything that can be proved
in a given environment can also be proved after adding assumptions to the
environment.
Proposition 3. Assume that ⊢ E, E′ well-formed. If E ⊢ M : T then
E, E′ ⊢ M : T, and if E ⊢ P then E, E′ ⊢ P.
Proof. This property is proved by induction on the derivations of E ⊢ M : T
and E ⊢ P. □
Proposition 4 enables us to reorder an environment, moving the declaration of
a name past the declarations of some variables.
Proposition 4. Let E1 be E, n: T ↦ L, x1: T1, ..., xk: Tk, and let E2 be
E, x1: T1, ..., xk: Tk, n: T ↦ L. If ⊢ E1 well-formed then ⊢ E2 well-formed;
if E1 ⊢ M : T′ then E2 ⊢ M : T′; and if E1 ⊢ P then E2 ⊢ P.
Proof. The proof is by induction on the derivations of the judgments about E1. □
Note that the converse of this proposition cannot be true because, under the
hypotheses of the converse, L could contain free occurrences of x1, ..., xk.
The next proposition says that the levels Secret and Public are mutually
exclusive.
Proposition 5. If E ⊢ M : Secret then it is not the case that E ⊢ M : Public.
Proof. We assume that both E ⊢ M : Secret and E ⊢ M : Public hold, and derive
a contradiction, by induction on the size of the two derivations of E ⊢ M : Secret
and E ⊢ M : Public. The only interesting case is that where M has the form
{M1,...,Mk}N. The term {M1,...,Mk}N could have levels Secret and Public
at once only if N did as well (and by application of the two rules for encryption);
the induction hypothesis yields the expected contradiction. □
The remaining auxiliary results all concern substitutions. The first of them
is a standard substitution lemma.
Proposition 6. Assume that E ⊢ M : T. If ⊢ E, x: T, E′ well-formed then
⊢ E, E′[M/x] well-formed; if E, x: T, E′ ⊢ N : T′ then E, E′[M/x] ⊢ N[M/x] : T′;
and if E, x: T, E′ ⊢ P then E, E′[M/x] ⊢ P[M/x].
Proof. The proof is by a joint induction on the derivations of the judgments
⊢ E, x: T, E′ well-formed, E, x: T, E′ ⊢ N : T′, and E, x: T, E′ ⊢ P. In
the case of the rule (Level Encryption Secret), it is important to note that if the
confounder n appears in E then x cannot occur in the term being formed. □
In general, a substitution is a partial function from the set of variables to the
set of terms. We write dom(oe) for the domain of the substitution oe.
Proposition 7. Given an environment E, suppose that only variables of level
Any in E are in dom(σ), and that either E ⊢ L : Public or E ⊢ L : Secret. Then:
- If Lσ is a variable then L is the same variable.
- If Lσ is a name then L is the same name.
- If Lσ is of the form (M, N) then L is of the same form.
- If Lσ is 0 then L is 0.
- If Lσ is of the form suc(M) then L is of the same form.
- If Lσ is of the form {M1,...,Mk}N then L is of the same form.
Proof. This follows from the fact that L cannot be a variable in dom(σ), since
this domain consists of variables of level Any. □
Proposition 8. Given an environment E, suppose that only variables of level
Any in E are in dom(σ). Suppose further that E ⊢ M : T and E ⊢ N : T, with
T ∈ {Public, Secret}. If Mσ = Nσ then M = N.
Proof. The proof is by induction on the derivation of E ⊢ M : T. The rule (Level
Subsumption) can be applied only trivially as the last rule of this derivation,
since T ∈ {Public, Secret}. There is one case for each of the remaining rules for
typechecking terms.
- If the last rule of the derivation of E ⊢ M : T is (Level Variable), then
M is a variable, but not one in dom(σ) since dom(σ) consists of variables
of level Any, so Mσ = M. Therefore, Mσ = Nσ implies that M = N, by
Proposition 7.
- If the last rule of the derivation of E ⊢ M : T is (Level Name) or (Level
Zero), then M is a name or 0, so Mσ = M. Therefore, Mσ = Nσ implies
that M = N, by Proposition 7.
- The cases of (Level Successor) and (Level Pair) are by easy applications of
Proposition 7 and the induction hypothesis.
- The two remaining cases are for M of the form {M1,...,Mk}M0. We have
E ⊢ M : Public or E ⊢ M : Secret, depending on whether the derivation
of E ⊢ M : T finishes with an application of (Level Encryption Public) or
an application of (Level Encryption Secret). In both cases, Proposition 7
implies that N has the form {N1,...,Nk}N0, with E ⊢ N : Public or
E ⊢ N : Secret, depending on whether the derivation of E ⊢ N : T
finishes with an application of (Level Encryption Public) or an
application of (Level Encryption Secret). By induction hypothesis, we obtain
M0 = N0. By Proposition 5, the derivations of E ⊢ M : T and E ⊢ N : T
finish with applications of the same rule.
  • If this rule is (Level Encryption Public), then we have E ⊢ Mi : T and
  E ⊢ Ni : T for each i, corresponding to the applications of the rule.
  Since Mσ = Nσ, we have Mi σ = Ni σ for each i. By induction
  hypothesis, we obtain Mi = Ni, and hence M = N.
  • If this rule is (Level Encryption Secret), then M4 is a name n with
  n: T′ ↦ {M1, M2, M3}M0 in E for some level T′, and N4 is a name n′ with
  n′: T″ ↦ {N1, N2, N3}N0 in E for some level T″. Since Mσ = Nσ and
  a name may be declared in an environment at most once, we conclude
  that M = N. □

5.3 Lemmas on Commitment and Simulation
The main lemma of this section relates the typing system with the commitment
relation. We write E ⊢ σ when, for every x ∈ dom(E), σ(x) is a closed term
such that fn(σ(x)) ⊆ dom(E).
Lemma 9. Assume that:
(1) E ⊢ P and E ⊢ σ;
(2) all names in dom(E) are of level Public or Secret;
(3) all variables in dom(E) are of level Any.
Then:
- if Pσ > Q′, then there is a process Q such that Q′ = Qσ and E ⊢ Q;
- if Pσ −τ→ Q′, then there is a process Q such that Q′ = Qσ and E ⊢ Q;
- if Pσ −m̄→ A′, then there is a
concretion A such that A′ = Aσ and E ⊢ A : OkCPublic or E ⊢ A :
OkCSecret (depending on whether the level of m in E is Public or Secret);
- if Pσ −m→ A′, then there is an
abstraction A such that A′ = Aσ and E ⊢ A : OkAPublic or E ⊢ A :
OkASecret (depending on whether the level of m in E is Public or Secret).
Proof. The argument for this lemma is quite long, but not surprising, given
the auxiliary propositions. It consists of one case for each of the axioms for the
reduction relation and one case for each of the rules for the commitment relation.
We therefore omit the details of this argument. □
Lemma 10. Given an environment E, suppose that all the variables in dom(E)
are of level Any. Suppose further that E ⊢ σ and E ⊢ σ′. Then the relation
{ (Pσ, Pσ′) | E ⊢ P } is a strict barbed bisimulation.
Proof. First we consider any commitment Pσ −α→ A′, where α is a barb. By
Lemma 9, there is an agent A such that A′ = Aσ, and hence Pσ′ −α→ Aσ′.
Next, we consider any commitment Pσ −τ→ Q′. By Lemma 9, there is a
process Q such that Q′ = Qσ, E ⊢ Q, and Pσ′ −τ→ Qσ′. Thus, any τ step
of Pσ may be matched by Pσ′.
Therefore, the relation { (Pσ, Pσ′) | E ⊢ P } is a strict barbed simula-
tion. By symmetry, it is a strict barbed bisimulation. □
5.4 Main Theorem
Finally, we obtain the theorem described at the start of Section 5:
Theorem 11. Given an environment E, suppose that only variables of level
Any and only names of level Public are in dom(E). Suppose further that E ⊢ σ
and E ⊢ σ′, and that E ⊢ P. Then Pσ ≃ Pσ′.
Proof. According to Lemma 1, it suffices to show that for every closed process Q
there exists a strict barbed bisimulation that relates Pσ | Q and Pσ′ | Q. Since
Q is closed, Pσ | Q = (P | Q)σ and Pσ′ | Q = (P | Q)σ′.
We construct an extension of E with the names that appear free in Q but
are not in dom(E). For each such name n, we extend the environment with
n: Public; let us call E′
the resulting environment. By Propositions 2 and 3, we obtain E′ ⊢ Q.
Also by Proposition 3, we obtain E′ ⊢ P. Combining these two results, the
rule (Level Parallel) yields E′ ⊢ (P | Q).
Finally, Lemma 10 yields the desired result. □
Note that this theorem would not hold if P could have free occurrences of names
of level Secret. Such occurrences are ruled out by the hypotheses that only names
of level Public are in E and that E ⊢ P.
6 Examples
In order to illustrate the use of our typing rules, we consider as examples two
protocols for key exchange and secure communication.
In both cases, we can typecheck the protocols. As corollaries, we obtain that
the secrecy of certain messages is protected. These corollaries should not be
surprising. However, without the rules developed in this paper, they would be
much harder to prove in the spi calculus from first principles.
Analogues of our corollaries might be provable in other formal systems. Sur-
prisingly, there do not seem to be any formal proofs of this sort in the literature.
In some methods, one may be able to show that the messages in question are
not among the terms that the attacker obtains when the protocol runs. However,
this result is only an approximation to the corollaries, as it does not rule out
that the attacker could at least deduce whether the messages are even numbers
or odd numbers, for example. The corollaries exclude this possibility.
Analogues of our corollaries can perhaps be established in informal but rigorous
models (see for example [BR95]). These models are rather accurate, as in
particular they can take into account issues of probability and complexity. Unfor-
tunately, proofs in these models remain much more difficult than typechecking.
6.1 A First Example
The first protocol is similar in structure to the Wide Mouthed Frog protocol
[BAN89]. Informally, this protocol is:
on c S
Message
Message 3 A
Message
Message
Message 7 A
A gKAB on c B
In this protocol, A and B are two clients and S is a server. The channels c S , c A ,
and dB are public. The keys KAS and KSB are secret keys for communication
with the server, while KAB is a new key for communication from A to B. The
message M is intended to be secret. Both NS and NB are nonces (used to prove
timeliness); a wildcard stands for an arbitrary message of appropriate level (not necessarily
the same for all of its occurrences); and CA, C′A, and CS are confounders. In Messages
1 and 2, A requests and receives a nonce challenge from S. In Messages 4 and
5, S requests and receives a nonce challenge from B. In Message 3, A provides
the key KAB to S, which passes it on to B in Message 6. In Message 7, A uses
KAB for sending M . On receipt of this message from A, the recipient B outputs
the names of A and B on a public channel dB , in Message 8. It is not important
to specify who receives this last message, which we include only in order to
illustrate that B is allowed to react.
We can express this protocol in the spi calculus, much as in the earlier work
on the spi calculus [AG97a] but with attention to the requirements of typing.
The definition is for a given set of messages M1, ..., Mm with corresponding source
and destination addresses. These addresses
are natural numbers in the range 1..n; they indicate who plays the role of A
and who plays the role of B in each run of the protocol. In addition, S is an
address. For each address i, there are channels c i and d i . We write i for the term
representing i, and simply write S for the term representing S. In the definition
of Send i,j , the variable z corresponds to the message M; we write Send i,j (M)
for Send i,j [M/z].
nonce
nonce
A gK i)
i21::n [x A is i] (-NS )(c i hNS
case x cipher of fx
let (y A ; z A ; xB ; x nonce
j21::n [y A is i] [z A is i] [x B is j] [x nonce is NS
nonce
nonce
case y cipher of fx
let nonce
i21::n [x S is S] [x A is i] [x B is j] [y nonce is NB
case z cipher of fz s ; z a ; z
x key
in
Sys \Delta
The next proposition says that this protocol typechecks.
Proposition 12. Let E be the environment declaring the channels cS, c1, d1,
..., cn, dn at level Public and the variables z1, ..., zm at level Any. Let ik and jk
be any fixed numbers in 1..n, for k ∈ 1..m. Let Mk be zk, for k ∈ 1..m.
Then E ⊢ Sys.
Proof. In order to indicate how the process Sys typechecks, we annotate its
bound names and variables with their levels, as they are introduced; we also
annotate confounders with the terms in which they are used. For
nonce
A gK i)
i21::n [x A is
case x cipher of fx key
nonce
j21::n [y A is i] [z A is i] [x B is j] [x nonce is NS
nonce
case y cipher of fx
let
i21::n [x S is S] [x A is i] [x B is j] [y nonce is NB
case z cipher of fz s
in
Finally, in the given environment E, we set:
Sys \Delta
writing
As a consequence of the typechecking, we obtain that the protocol does not
reveal the message M from A. This conclusion is stated in the following corollary,
where for simplicity we restrict attention to the case where M is a numeral. (A
numeral is one of the terms 0, suc(0), suc(suc(0)), ....)
Corollary 13. Let ik and jk be any fixed numbers in 1..n, for k ∈ 1..m. Let Sys1
and Sys2 be two versions of Sys where the terms Mk are arbitrary numerals, for
k ∈ 1..m. Then Sys1 ≃ Sys2.
Proof. This is an immediate consequence of Proposition 12 and Theorem 11. □
6.2 A Second Example
In the second example, the principal A transmits the secret message M under a
key KAB generated by the server S. This example is interesting because it brings up
an issue of trust: A trusts S to provide a key appropriate for the transmission
of M .
Message
Message
S gKSB on c B
Message 5 A
Again, A and B are two clients and S is a server, and c S , c A , c B , and dB are
public channels. The keys KSA and KSB are secret keys for communication from
the server to A and B, while KAB is the new key for communication between A
and B. Both NA and NB are nonces, and CS, C′S, and CA are confounders.
We write this example in the spi calculus in much the same style as the first
example. The definition is again for a given set of messages M1, ..., Mm with
corresponding source and destination addresses.
case x cipher of fx
let
[x A is i] [y B is j] [x nonce is NA
nonce
i21::n [x A is i]
nonce
nonce
i21::n [x A is i]
case y cipher of fx
let nonce
[x A is i] [y B is j] [y nonce is NB
case z cipher of fz s ; z a ; z
in
Sys \Delta
The next proposition and corollary are the analogues of Proposition 12 and
Corollary 13, respectively. We omit their proofs.
Proposition 14. Let E be the environment of Proposition 12. Let ik and jk
be any fixed numbers in 1..n, for k ∈ 1..m. Let Mk be zk, for k ∈ 1..m. Then E ⊢ Sys.
Corollary 15. Let ik and jk be any fixed numbers in 1..n, for k ∈ 1..m. Let Sys1
and Sys2 be two versions of Sys where the terms Mk are arbitrary numerals, for
k ∈ 1..m. Then Sys1 ≃ Sys2.
Conclusions
Perhaps in part because of advances in programming languages, the idea of static
checking of security properties seems to be reviving. The Java bytecode verifier
is a recent static checker with security objectives [LY96]. In the last couple of
years, more sophisticated security checks have been based on self-certification
and on information-flow techniques (see for example [Nec97,VSI96]).
This work can be seen as part of that revival. It develops a method for static
checking of secrecy properties of programs written in a minimal but expressive
programming language, the spi calculus. These programs can be concurrent, and
can use cryptography. The method is embodied in a set of typing rules.
The principles and rules developed in this paper are neither necessary nor
sufficient for security. They are not necessary because, like most practical static
typechecking disciplines, ours is incomplete. They are not sufficient because they
ignore all security issues other than secrecy, and because they do not account for
how to implement the spi calculus while preserving secrecy properties. However,
these principles and rules provide some useful guidelines. Furthermore, the rules
are tractable and precise; so we have been able to study them in detail and to
prove secrecy properties, establishing the correctness of the informal principles
within a formal model.
Acknowledgments
Butler Lampson suggested studying authentication protocols through classification
techniques, several years ago. That suggestion was the starting point for
this work.
This work took place in the context of collaboration with Andrew Gordon on
the spi calculus, so the themes and techniques of this paper owe much to him;
Andrew Gordon also commented on a draft of this paper.
Conversations with Mike Burrows, Steve Kent, and Ted Wobber were helpful
during the writing of this paper.
--R
A calculus for cryptographic protocols: The spi calculus.
A calculus for cryptographic protocols: The spi calculus.
Reasoning about cryptographic protocols in the spi calculus.
Prudent engineering practice for cryptographic protocols.
A logic of authentication.
Testing equivalence for mobile processes.
Provably secure session key distribution: The three party case.
Cryptography and Data Security.
Data encryption standard.
Testing equivalences for processes.
Building a Secure Computer System.
The Java Virtual Machine Specification.
The polyadic pi-calculus: A tutorial.
A calculus of mobile processes.
Using encryption for authentication in large networks of computers.
Typing and subtyping for mobile processes.
Applied Cryptography: Protocols, Algorithms, and Source Code in C.
A sound type system for secure flow analysis.
Gilles Barthe , Tamara Rezk , Amitabh Basu, Security types preserving compilation, Computer Languages, Systems and Structures, v.33 n.2, p.35-59, July, 2007 | cryptographic protocols;process calculi;secrecy properties |
324345 | ARMADA Middleware and Communication Services. | Real-time embedded systems have evolved during the past several decades from small custom-designed digital hardware to large distributed processing systems. As these systems become more complex, their interoperability, evolvability and cost-effectiveness requirements motivate the use of commercial-off-the-shelf components. This raises the challenge of constructing dependable and predictable real-time services for application developers on top of the inexpensive hardware and software components, which have minimal support for timeliness and dependability guarantees. We are addressing this challenge in the ARMADA project. ARMADA is a set of communication and middleware services that provide support for fault-tolerance and end-to-end guarantees for embedded real-time distributed applications. Since real-time performance of such applications depends heavily on the communication subsystem, the first thrust of the project is to develop a predictable communication service and architecture to ensure QoS-sensitive message delivery. Fault-tolerance is of paramount importance to embedded safety-critical systems. In its second thrust, ARMADA aims to offload the complexity of developing fault-tolerant applications from the application programmer by focusing on a collection of modular, composable middleware for fault-tolerant group communication and replication under timing constraints. Finally, we develop tools for testing and validating the behavior of our services. We give an overview of the ARMADA project, describing the architecture and presenting its implementation status. | Introduction
ARMADA is a collaborative project between the Real-Time Computing Laboratory (RTCL)
at the University of Michigan and the Honeywell Technology Center. The goal of the
project is to develop and demonstrate an integrated set of communication and middleware
services and tools necessary to realize embedded fault-tolerant and real-time services on
distributed, evolving computing platforms. These techniques and tools together compose
an environment of capabilities for designing, implementing, modifying, and integrating
real-time distributed systems. Key challenges addressed by the ARMADA project include:
timely delivery of services with end-to-end soft/hard real-time constraints; dependability
of services in the presence of hardware or software failures; scalability of computation
and communication resources; and exploitation of open systems and emerging standards
in operating systems and communication services.
ARMADA communication and middleware services are motivated by the requirements
of large embedded applications such as command and control, automated flight, shipboard
* This work is supported in part by a research grant from the Defense Advanced Research Projects Agency,
monitored by the U.S. Air Force Rome Laboratory under Grant F30602-95-1-0044.
Figure 1. Overview of ARMADA Environment. (Figure labels: applications, API, services, tools, evaluation, channels, microkernel.)
computing, and radar data processing. Traditionally, such embedded applications have
been constructed from special-purpose hardware and software. This approach results in
high production cost and poor interoperability making the system less evolvable and more
prone to local failures. A recent trend, therefore, has been to build embedded systems
using Commercial-Off-The-Shelf (COTS) components such as PC boards, Ethernet links,
and PC-based real-time operating systems. This makes it possible to take advantage of
available development tools, leverage on mass production costs, and make better use of
component interoperability. From a real-time application developer's point of view, the
approach creates the need for generic high-level software services that facilitate building
embedded distributed real-time applications on top of inexpensive widely available hard-
ware. Real-time operating systems typically implement elementary subsets of real-time
services. However, monolithically embedding higher-level support in an operating system
kernel is not advisable. Different applications have different real-time and fault-tolerance
requirements. Thus, catering to all possible requirement ranges in a single operating system
would neither be practical nor efficient. Instead, we believe that a composable set
of services should be developed of which only a subset may need to exist for any given
application. This philosophy advocates the use of a real-time microkernel equipped with
basic real-time support such as priority-based scheduling and real-time communication,
in addition to a reconfigurable set of composable middleware layered on top of the ker-
nel. Appropriate testing and validation tools should be independently developed to verify
required timeliness and fault-tolerance properties of the distributed middleware.
The ARMADA project is therefore divided into three complementary thrust areas: (i)
low-level real-time communication support, (ii) middleware services for group communication
and fault-tolerance, and (iii) dependability evaluation and validation tools. Figure 1
summarizes the structuring of the ARMADA environment.
The first thrust focused on the design and development of real-time communication
services for a microkernel. A generic architecture is introduced for designing the communication
subsystem on hosts so that predictability and QoS guarantees are maintained.
The architecture is independent of the particular communication service. It is illustrated
in this paper in the context of presenting the design of the real-time channel; a low-level
communication service that implements a simplex, ordered virtual connection between
two networked hosts that provides deterministic or statistical end-to-end delay guarantees
between a sender-receiver pair.
The second thrust of the project has focused on a collection of modular and composable
middleware services (or building blocks) for constructing embedded applications. A layered
open-architecture supports modular insertion of a new service or implementation as
requirements evolve over the life-span of a system. The ARMADA middleware services
include a suite of fault-tolerant group communication services with real-time guarantees,
called RTCAST, to support embedded applications with fault-tolerance and timeliness re-
quirements. RTCAST consists of a collection of middleware including a group membership
service, a timed atomic multicast service, an admission control and schedulability module,
and a clock synchronization service. The ARMADA middleware services also include
a real-time primary-backup replication service, called RTPB, which ensures temporally
consistent replicated objects on redundant nodes.
The third thrust of the project is to build a toolset for validating and evaluating the
timeliness and fault-tolerance capabilities of the target system. Tools under development
include fault injectors at different levels (e.g. operating system, communication protocol,
and application), a synthetic real-time workload generator, and a dependability/performance
monitoring and visualization tool. The focus of the toolset research is on portability,
flexibility, and usability.
Figure
2 gives an overview of a prospective application to illustrate the utility of our
services for embedded real-time fault-tolerant systems. The application, developed at
Honeywell, is a subset of a command and control facility. Consider a radar installation
where a set of sensors are used to detect incoming threats (e.g., enemy planes or missiles in
a battle scenario); hypotheses are formed regarding the identity and positions of the threats,
and their flight trajectories are computed accordingly. These trajectories are extrapolated
into the future and deadlines are imposed to intercept them. The time intervals during
which the estimated threat trajectories are reachable from various ground defense bases
are estimated; and appropriate resources (weapons) are committed to handle the threats;
eventually, the weapons are released to intercept the threats.
The services required to support writing such applications come naturally from their operating
requirements. For example, for the anticipated system load, communication between
different system components (the different boxes in Figure 2) must occur in bounded time
to ensure a bounded end-to-end response from threat detection to weapon release. Our
real-time communication services compute and enforce predictable deterministic bounds
on message delays given application traffic specification. Critical system components such
as hypothesis testing and threat identification have high dependability requirements which
are best met using active replication. For such components, RTCAST exports multicast and
membership primitives to facilitate fault detection, fault handling, and consistency management
of actively replicated tasks. Similarly, extrapolated trajectories of identified threats
Figure 2. A command and control application. (Figure box labels: sensory input; threat hypothesis testing and identification; compute positions; trajectory extrapolation; plotting; impact time estimation; computing accessibility from bases; risk assessment; masking; surveillance intelligence; route optimization and scheduling; weapon assignment; weapon base; weapon release.)
represent critical system state. A backup of such state needs to be maintained continually
and updated to represent the current state within a tolerable consistency (or error) margin.
Our primary-backup replication service is implemented to meet such temporal consistency
requirements. Finally, our testing tools decrease development and debugging costs of the
distributed application.
The rest of this paper is organized as follows. Section 2 describes the general approach
for integrating ARMADA services into a microkernel framework. It also presents the
experimental testbed and implementation environment of this project. The subsequent
sections focus on the architecture, design, and implementation of key communication and
middleware services in ARMADA. Section 3 introduces real-time communication service.
Section 4 presents the RTCAST suite of group communication and fault-tolerance services.
Section 5 describes the RTPB (real-time primary-backup) replication service. Section 6
briefly discusses the dependability evaluation and validation tools developed in this project.
Section 7 concludes the paper.
2. Platform
The services developed in the context of the ARMADA project are to augment the essential
capabilities of a real-time microkernel by introducing a composable collection of
communication, fault-tolerance, and testing tools to provide an integrated framework for
developing and executing real-time applications. Most of these tools are implemented as
separate multithreaded servers. Below we describe the experimental testbed and implementation
environment common to the aforementioned services. A detailed description of
the implementation approach adopted for various services will be given in the context of
each particular service.
2.1. General Service Implementation Approach
One common aspect of different middleware services in a distributed real-time system
is their need to use intermachine communication. All ARMADA services either include
or are layered on top of a communication layer which provides the features required for
correct operation of the service and its clients. For example, RTCAST implements communication
protocols to perform multicast and integrate failure detection and handling into
the communication subsystem. Similarly, the Real-Time Channels service implements its
own signaling and data transfer protocols to reserve resources and transmit real-time data
along a communication path. Since communication seemed to warrant particular attention
in the context of this project, we developed a generic real-time communication subsystem
architecture. The architecture can be viewed as a way of structuring the design of
communication-oriented services for predictability, as opposed to being a service in itself.
This architecture is described in detail in Section 3 and is illustrated by an example service:
the Real-Time Channel. ARMADA communication services are generally layered on top
of IP, or UDP/IP. We do not use TCP because its main focus is reliability as opposed
to predictability and timeliness. Real-time communication protocols, on the other hand,
should be sensitive to timeliness guarantees, perhaps overriding the reliability requirement.
For example, in video conferencing and process control, occasional loss of individual data
items is preferred to receiving reliable streams of stale data. To facilitate the development
of communication-oriented services, our communication subsystem is implemented using
the x-kernel object-oriented networking framework originally developed at the University
of Arizona (Hutchinson and Peterson, 1991), with extensions for controlled allocation of
system resources (Travostino et al., 1996). The advantage of using x-kernel is the ease
of composing protocol stacks. An x-kernel communication subsystem is implemented
as a configurable graph of protocol objects. It allows easy reconfiguration of the protocol
stack by adding or removing protocols. More details on the x-kernel can be found
in (Hutchinson and Peterson, 1991).
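The appeal of a configurable protocol graph can be suggested by a short sketch (ours; the class and method names are hypothetical, not x-kernel APIs): layers compose uniformly, so a QoS-sensitive protocol can be slotted into the stack without touching its neighbors.

    class Protocol:
        def __init__(self, below=None):
            self.below = below
        def send(self, msg):
            raise NotImplementedError

    class EthernetDriver(Protocol):
        def send(self, msg):
            print("on the wire:", msg)

    class IP(Protocol):
        def send(self, msg):
            self.below.send(b"IP|" + msg)

    class UDP(Protocol):
        def send(self, msg):
            self.below.send(b"UDP|" + msg)

    class RTChannel(Protocol):           # a QoS-sensitive layer slotted on top
        def send(self, msg):
            self.below.send(b"RT|" + msg)

    # composing the stack mirrors configuring the protocol graph:
    stack = RTChannel(UDP(IP(EthernetDriver())))
    stack.send(b"payload")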
Following a microkernel philosophy, argued for in Section 1, our services are designed
as user-level multithreaded servers. Clients of the service are separate processes that
communicate with the server via the kernel using a user library. The library exports the
desired middleware API. Communication-oriented services generally implement their own
protocol stack that lies on top of the kernel-level communication driver. The x-kernel
framework permits migration of multithreaded protocol stack execution into the operating
system kernel. We use this feature to implement server co-location into the microkernel.
Such co-location improves performance by eliminating extra context switches. Note that
the advantages of server co-location do not defeat the purpose of choosing a microkernel
over a monolithic operating system for a development platform. This is because with a
microkernel co-located servers (i) can be developed in user space which greatly reduces
their development and maintenance cost, and (ii) can be selectively included, when needed, into the kernel in accordance with the application requirements; this is both more efficient and more sensitive to particular application needs.

Figure 3. Service implementation: (a) user-level server configuration; (b) co-located server.
The microkernel has to support kernel threads. The priority of threads executing in kernel
space is, by default, higher than that of threads executing in user space. As a result, threads
run in a much more predictable manner, and the service does not get starved under overload.
Furthermore, the in-kernel implementation of x-kernel on our platform replaces some of
the threads in the device driver by code running in interrupt context. This feature reduces
communication latencies and makes the server less preemptable when migrated into the
microkernel. However, since code executing in interrupt context is kept to a minimum, the
reduction in preemptability has not been a concern in our experience with co-located code.
Figures 3-a and 3-b illustrate the configurations of user-level servers and co-located
servers respectively. An example of server migration into the kernel is given in the context
of the RTCAST service in Section 4. The RTCAST server was developed in user space
(as in Figure 3-a), then reconfigured to be integrated into the kernel (as in Figure 3-b).
Whether the server runs in user space or is co-located in the microkernel, client processes
use the same service API to communicate with it. If the service is co-located in the kernel, an
extra context switch to/from a user-level server process is saved. Automatically-generated
stubs interface the user library (implementing the service API) to the microkernel or the
server process. These stubs hide the details of the kernel's local communication mechanism
from the programmer of the real-time service, thus making service code independent from
specifics of the underlying microkernel.
2.2. Testbed and Implementation Environment
In the following sections we describe the implementation of each individual service. To provide
a common context for that description, we outline here the specifics of the underlying
implementation platform. Our testbed comprises several Pentium-based PCs (133 MHz)
connected by a Cisco 2900 Ethernet switch (10/100 Mb/s), with each PC connected to the
switch via 10 Mb/s Ethernet. We have chosen the MK 7.2 microkernel operating system
from the Open Group (OG) 1 Research Institute to provide the essential underlying real-time
support for our services. The MK microkernel is originally based on release 2.5 of the
Mach operating system from CMU. While not a full-fledged real-time OS, MK 7.2 supports
kernel threads, priority-based scheduling, and includes several important features that facilitate
provision of QoS guarantees. For example, MK 7.2 supports x-kernel and provides
a unified framework for allocation and management of communication resources. This
framework, known as CORDS (Communication Objects for Real-time Dependable Sys-
tems) (Travostino et al., 1996), was found particularly useful for implementing real-time
communication services. Our implementation approach has been to utilize the functionality
and facilities provided in OG's environment and augment it with our own support when
necessary.
From the standpoint of portability, although MK 7.2 is a research operating system,
CORDS support is also available on more mainstream operating systems such as Windows
NT. Thus, our software developed for the CORDS environment can easily be ported to NT;
in fact, such a port is currently underway. Porting to other operating systems, such as Linux,
is more difficult. At the time the presented services were developed Linux did not support
kernel threads. Thus, it was impossible to implement multithreaded protocol stacks inside
the Linux kernel. Linux 2.2, however, is expected to have full thread support. CORDS
support may be replaced by appropriate packet filters to classify incoming traffic. Thus,
with some modifications, our services may be ported to future versions of Linux, as well
as other multithreaded operating systems such as Solaris.
3. ARMADA Real-Time Communication Architecture
ARMADA provides applications with a communication architecture and service with which
they can request and utilize guaranteed-QoS connections between two hosts. In this section,
we highlight the architectural components of the communication service that, together with a
set of user-specified policies, can implement several real-time communication models.
Common to QoS-sensitive communication service models are the following three architectural
requirements: (i) performance isolation between connections or sets of connections
such that malicious behavior or overload of one does not starve resources of the other(s),
(ii) service differentiation, such as assigning different priorities to connections or classes
of connections, and (iii) graceful degradation in the presence of overload. We developed
a Communication Library for Implementing Priority Semantics (CLIPS), which provides
resource-management mechanisms to satisfy the aforementioned requirements. It exports
the abstraction of guaranteed-rate communication endpoints. The endpoint, called a clip,
guarantees a certain throughput in terms of the number of packets sent via it per period,
and implements a configurable buffer to accommodate bursty sources. One or more connections
(or sockets) may be "bound" to the same clip, in which case the clip sets aside
enough processor bandwidth and memory resources on the end-system to guarantee an
aggregate specified throughput for the entire connection set. Different clips may have
different priorities to allow higher priority traffic to proceed first under overload conditions.
For example, traffic of a particular application or middleware service can be bound to a
high priority clip, thereby allowing that application or service to receive precedence over
other services. Each clip has an associated deadline parameter. The deadline specifies the
maximum communication subsystem response time for handling packets via the particular
clip. The CLIPS library implements a traffic policing mechanism, as well as its own default
admission control policy that can be disabled to revert to pure priority-driven scheduling or
overridden by a user-specified alternate admission control policy. More details on CLIPS
will be given below as we present the ARMADA real-time communication service we
developed for unicast communication.
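As a concrete illustration, the fragment below sketches how a service might create a clip and bind a connection to it. The paper does not prescribe the CLIPS function signatures, so every identifier here (clip_create, clip_bind, and the parameter list) is a hypothetical rendering of the abstractions described above:

    #include <stdint.h>

    /* Hypothetical CLIPS handle and prototypes (names illustrative). */
    typedef struct clip clip_t;

    clip_t *clip_create(uint32_t pkts_per_period,  /* guaranteed rate      */
                        uint32_t period_usec,      /* replenishment period */
                        uint32_t deadline_usec,    /* max response time    */
                        int      priority,         /* clip priority        */
                        uint32_t buf_pkts);        /* burst buffer size    */
    int clip_bind(clip_t *c, int sockfd);          /* attach a connection  */

    void example(int sockfd)
    {
        /* Reserve capacity for 20 packets every 10 ms, with a 5 ms
         * per-packet response-time bound and a 64-packet burst buffer. */
        clip_t *c = clip_create(20, 10000, 5000, /*priority=*/1, 64);
        if (c != NULL)
            clip_bind(c, sockfd);  /* traffic on sockfd now draws on c */
    }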
3.1. Real-time Communication Service
We have used CLIPS to implement a guaranteed-QoS communication service called the
real-time channel (Ferrari and Verma, 1990, Kandlur et al., 1994). A real-time channel is
a unicast virtual connection between a source and destination host with associated performance
guarantees on message delay and available bandwidth. It satisfies three primary
architectural requirements for guaranteed-QoS communication (Mehra et al., 1996): (i)
maintenance of per-connection QoS guarantees, (ii) overload protection via per-connection
traffic enforcement, and (iii) fairness to best-effort traffic. Real-time communication via
real-time channels is performed in three phases. In the first phase, the source host S (sender)
creates a channel to the destination host D (receiver) by specifying the channel's traffic
parameters and QoS requirements. Signaling requests are sent from S to D via one or
more intermediate (I) nodes; replies are delivered in the reverse direction from D to S. If
successfully established, S can send messages on this channel to D; this constitutes the
second phase. When the sender is done using the channel, it must close the channel (the
third phase) so that resources allocated to this channel can be released.
Figure 4 illustrates the high-level software architecture of our guaranteed-QoS service at
end-hosts. The core functionality of the communication service is realized via three distinct
components that interact to provide guaranteed-QoS communication. Applications use the
service via the real-time communication application programming interface (RTC API);
RTCOP coordinates end-to-end signaling for resource reservation and reclamation during
connection set-up or tear-down; and CLIPS performs run-time management of resources for
QoS-sensitive data transfer. Since platform-specific overheads must be characterized before
QoS guarantees can be ensured, an execution profiling component is added to measure and
parameterize the overheads incurred by the communication service on a particular platform,
and make these parameters available for admission control decisions. The control path taken
through the architecture during connection setup is shown in Figure 4 as dashed lines. Data
is then transferred via RTC API and CLIPS as indicated by the solid lines. Below, we
discuss the salient features of each architectural component of the service along with its
interaction with other components to provide QoS guarantees. We also describe how the
components are used to realize a particular service model.
Figure 4. Real-time communication service architecture: Our implementation consists of four primary architectural components: an application programming interface (RTC API), a signaling and resource reservation protocol (RTCOP), support for resource management and run-time data transfer (CLIPS), and execution profiling support. Dashed lines indicate interactions on the control path while the data path is denoted by the solid lines.
3.2. RTC Application Interface
The programming interface exported to applications comprises routines for connection
establishment and teardown, message transmission and reception during data transfer on
established connections, and initialization and support routines. Table 1 lists some of the
main routines currently available in RTC API. The API has two parts: a top half that
interfaces to applications and is responsible for validating application requests and creating
internal state, and a bottom half which interfaces to RTCOP for signaling (i.e., connection
setup and teardown), and to CLIPS for QoS-sensitive data transfer.
The design of RTC API is based in large part on the well-known socket API in BSD
Unix. Each connection endpoint is a pair (IPaddr, port) formed by the IP address
of the host (IPaddr) and an unsigned 16-bit port (port) unique on the host, similar
to an INET domain socket endpoint. In addition to unique endpoints for data transfer, an
application may use several endpoints to receive signaling requests from other applications.
Applications willing to be receivers of real-time traffic register their signaling ports with
a name service or use well-known ports. Applications wishing to create connections must
first locate the corresponding receiver endpoints before signaling can be initiated.
Each of the signaling and data transfer routines in Table 1 has its counterpart in the socket
API. For example, the routine rtcRegisterPort corresponds to the invocation of bind
and listen in succession, and rtcAcceptConnection corresponds to accept.
Similarly, the routines rtcCreateConnection and rtcDestroyConnection correspond
to connect and close, respectively.
The key aspect which distinguishes RTC API from the socket API is that the receiving
application explicitly approves connection establishment and teardown. When registering
Table 1. Routines comprising RTC API: This table shows the utility, signaling, and data transfer functions
that constitute the application interface. For each routine it lists the parameters, the endpoint that invokes
it, and a brief description of the operation performed.

    Routine               Parameters                   Invoked by   Function performed
    rtcInit               none                         both         service initialization
    rtcGetParameter       chan id, param type          both         query parameter on specified
                                                                    real-time connection
    rtcRegisterPort       local port, agent function   receiver     register local port and
                                                                    agent for signaling
    rtcUnRegisterPort     local port                   receiver     unregister local signaling port
    rtcCreateConnection   remote host/port, QoS:       sender       create connection with given
                          burst size, delay                         parameters to remote endpoint;
                                                                    return connection id
    rtcAcceptConnection   local port, chan id,         receiver     obtain the next connection
                          remote host/port                          already established at the
                                                                    specified local port
    rtcDestroyConnection  chan id                      sender       destroy specified real-time
                                                                    connection
    rtcSendMessage        chan id, buf ptr             sender       send message on specified
                                                                    real-time connection
    rtcRecvMessage        chan id, buf ptr             receiver     receive message on specified
                                                                    real-time connection
its intent to receive signaling requests, the application specifies an agent function that is
invoked in response to connection requests. This function, implemented by the receiving
application, determines whether sufficient application-level resources are available for the
connection and, if so, reserves necessary resources (e.g., CPU capacity, buffers, etc.) for
the new connection. It may also perform authentication checks based on the requesting
endpoint specified in the signaling request. This is unlike the establishment of a TCP
connection, for example, which is completely transparent to the peer applications.
The QoS-parameters passed to rtcCreateConnection for connection establishment
describe a linear bounded arrival traffic generation process (Cruz, 1987, Anderson et al., 1990).
They specify a maximum message size (Mmax bytes), maximum message rate (Rmax mes-
sages/second), and maximum burst size (Bmax messages). Parameters Mmax and Rmax
are used to create a clip with a corresponding guaranteed throughput. The burst size, Bmax ,
determines the buffer size required for the clip. In the following we describe the end-to-end
signaling phase that coordinates end-to-end resource reservation.
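To put the routines of Table 1 together, the sketch below shows a minimal sender and receiver. The routine names come from the table, but the C prototypes, the parameter ordering, and the agent callback shown here are assumptions made only for illustration:

    static char buf[1024];

    /* Hypothetical agent callback: approve an incoming connection
     * request if application-level resources suffice; authentication
     * checks on the requesting endpoint could also go here. */
    static int my_agent(const char *remote_host, int remote_port)
    {
        return 1;    /* approve the request */
    }

    static void receiver(void)
    {
        int chan;
        rtcInit();
        rtcRegisterPort(5000, my_agent);         /* cf. bind + listen */
        rtcAcceptConnection(5000, &chan, NULL);  /* cf. accept        */
        rtcRecvMessage(chan, buf);
    }

    static void sender(void)
    {
        int chan;
        rtcInit();
        /* Traffic spec (linear bounded arrival process): Mmax = 1024
         * bytes, Rmax = 100 messages/s, Bmax = 8 messages; Mmax and
         * Rmax size the clip's guaranteed throughput, Bmax its burst
         * buffer. */
        chan = rtcCreateConnection("receiver-host", 5000,
                                   1024, 100, 8, 50 /* delay, ms */);
        rtcSendMessage(chan, buf);               /* cf. send  */
        rtcDestroyConnection(chan);              /* cf. close */
    }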
3.3. Signaling and Resource Reservation with RTCOP
Requests to create and destroy connections initiate the Real-Time Connection Ordination
Protocol (RTCOP), a distributed end-to-end signaling protocol. As illustrated in Figure 5(a),
RTCOP is composed primarily of two relatively independent modules. The request and
reply handlers manage signaling state and interface to the admission control policy, and the
communication module handles the tasks of reliably forwarding signaling messages. This
separation allows simpler replacement of admission control policies or connection state
management algorithms without affecting communication functions. Note that signaling
and connection establishment are non-real-time (but reliable) functions. QoS guarantees
apply to the data sent on an established connection but signaling requests are sent as
best-effort traffic.
The request and reply handlers generate and process signaling messages, interface to
RTC API and CLIPS, and reserve and reclaim resources as needed. When processing a
new signaling request, the request handler invokes a multi-step admission control procedure
to decide whether or not sufficient resources are available for the new request. As a new
connection request traverses each node of the route from source to destination, the request
handler invokes admission control which decides if the new connection can be locally
admitted. Upon successful admission, the handler passes the request on to the next hop.
When a connection is admitted at all nodes on the route, the reply handler at the destination
node reserves the required end-system resources by creating a clip for the new real-time
channel, and generates a positive acknowledgment on the reverse path to the source. As
the notification is received at each hop, the underlying network-level protocol commits
network resources, such as link bandwidth, using assumed local router support. When
the acknowledgement is received at the source the reply handler notifies the application of
connection establishment and creates the source clip.
The communication module handles the basic tasks of sending and receiving signaling
messages, as well as forwarding data packets to and from the applications. Most of the
protocol processing performed by the communication module is in the control path during
processing of signaling messages. In the data path it functions as a simple transport protocol, forwarding data packets on behalf of applications, much like UDP.

Figure 5. Internal structures and interfaces: In this figure we show the internal functional structure of RTCOP and CLIPS along with their respective interfaces to other components: (a) RTCOP structure; (b) CLIPS structure. In (a), data and control paths are represented with solid and dashed lines, respectively.

As noted earlier,
signaling messages are transported as best-effort traffic, but are delivered reliably using
source-based retransmissions. Reliable signaling ensures that a connection is considered
established only if connection state is successfully installed and sufficient resources reserved
at all the nodes along the route. The communication module implements duplicate
suppression to ensure that multiple reservations are not installed for the same connection establishment
request. Similar considerations apply to connection teardown where all nodes
along the route must release resources and free connection state. Consistent connection
state management at all nodes is an essential function of RTCOP.
RTCOP exports an interface to RTC API for specification of connection establishment
and teardown requests and replies, and selection of logical ports for connection endpoints.
The RTC API uses the latter to reserve a signaling port in response to a request from the
application, for example. RTCOP also interfaces to an underlying routing engine to query
an appropriate route before initiating signaling for a new connection. In general, the routing
engine should find a route that can support the desired QoS requirements. However, for
simplicity we use static (fixed) routes for connections since it suffices to demonstrate the
capabilities of our architecture and implementation.
3.4. CLIPS-based Resource Scheduling for Data Transfer
CLIPS implements the necessary end-system resource-management mechanisms to realize
QoS-sensitive real-time data transfer on an established connection. A separate clip is
created for each of the two endpoints of a real-time channel. Internal to each clip is a
message queue to buffer messages generated or received on the corresponding channel,
a communication handler thread to process these messages, and a packet queue to stage
packets waiting to be transmitted or received. The CLIPS library implements on the
end-system the key functional components illustrated in Figure 5(b).
QoS-sensitive CPU scheduling: The communication handler thread of a clip executes in
a continuous loop either dequeuing outgoing messages from the clip's message queue and
fragmenting them (at the source host), or dequeuing incoming packets from the clip's packet
queue and reassembling messages (at the destination host). Each message must be sent
within a given local delay bound (deadline). To achieve the best schedulable utilization,
communication handlers are scheduled based on an earliest-deadline-first (EDF) policy.
Since most operating systems do not provide EDF scheduling, CLIPS implements it with
a user-level scheduler layered on top of the operating system scheduler. The user-level
scheduler runs at a static priority and maintains a list of all threads registered with it,
sorted by increasing deadline. At any given time, the CLIPS scheduler blocks all of the
registered threads using kernel semaphores except the one with the earliest deadline, which
it considers in the running state. The running thread will be allowed to execute until it
explicitly terminates or yields using a primitive exported by CLIPS. The scheduler then
blocks the thread on a kernel semaphore and signals the thread with the next earliest
deadline. Preemption is implemented via a CLIPS primitive invoked upon sending each
packet. The primitive yields execution to a more urgent thread if one is pending. This
arrangement implements EDF scheduling within a single protection domain.
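A minimal sketch of such a user-level EDF layer appears below; POSIX semaphores stand in for the kernel semaphores of our platform, and none of the identifiers are the actual CLIPS interfaces:

    #include <semaphore.h>
    #include <stddef.h>

    struct rt_thread {
        sem_t             run;       /* thread blocks here unless chosen */
        unsigned long     deadline;  /* absolute deadline                */
        struct rt_thread *next;
    };

    static struct rt_thread *ready;  /* registered threads, EDF order */

    /* Insert t into the list, keeping it sorted by increasing deadline. */
    static void enqueue(struct rt_thread **list, struct rt_thread *t)
    {
        while (*list != NULL && (*list)->deadline <= t->deadline)
            list = &(*list)->next;
        t->next = *list;
        *list = t;
    }

    /* Preemption point, invoked after sending each packet: if a more
     * urgent thread is pending, wake it and block ourselves. */
    void edf_yield(struct rt_thread *self)
    {
        struct rt_thread *urgent = ready;
        if (urgent != NULL && urgent->deadline < self->deadline) {
            ready = urgent->next;     /* dequeue the earliest deadline  */
            enqueue(&ready, self);    /* we rejoin the ready list       */
            sem_post(&urgent->run);   /* hand over the (virtual) CPU    */
            sem_wait(&self->run);     /* block until chosen again       */
        }
    }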
Resource reservation: Communication handlers (implemented by CLIPS) execute a user-defined
protocol stack, then return to CLIPS code after processing each message or packet.
Ideally, each clip should be assigned a CPU budget to prevent a communication client
from monopolizing the CPU. Since processor capacity reserves are not available on most
operating systems, the budget is indirectly expressed in terms of a maximum number of
packets to be processed within a given period. The handler blocks itself after processing
the maximum number of packets allowed within its stated time period.
Policing: Associating a budget with each connection handler facilitates traffic enforcement.
This is because a handler is scheduled for execution only when the budget is non-zero, and
the budget is not replenished until the next (periodic) invocation of the handler. This
mechanism ensures that misbehaving connections are policed to their traffic specification.
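A sketch of the resulting handler structure is shown below, with all names hypothetical: the handler processes at most its per-period budget of packets, yielding at each preemption point, and then blocks until the next period replenishes the budget.

    struct clip;                                   /* opaque here      */
    extern unsigned clip_budget(struct clip *c);   /* pkts per period  */
    extern int      clip_has_work(struct clip *c);
    extern void     clip_process_packet(struct clip *c);
    extern void     clip_wait_next_period(struct clip *c);
    extern void     clip_yield(struct clip *c);    /* preemption point */

    void handler_loop(struct clip *c)
    {
        for (;;) {
            unsigned spent = 0;
            /* Police the connection: never exceed the per-period
             * budget, even if more packets are queued. */
            while (spent < clip_budget(c) && clip_has_work(c)) {
                clip_process_packet(c);   /* user-defined stack   */
                spent++;
                clip_yield(c);            /* EDF preemption point */
            }
            clip_wait_next_period(c);     /* budget replenished   */
        }
    }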
QoS-sensitive link bandwidth allocation: Modern operating systems typically implement
FIFO packet transmission over the communication link. While we cannot avoid FIFO
queuing in the kernel's network device, CLIPS implements a dynamic priority-based link
scheduler at the bottom of the user-level protocol stack to schedule outgoing packets in
a prioritized fashion. The link scheduler implements the EDF scheduling policy using a
priority heap for outgoing packets. To prevent a FIFO accumulation of outgoing packets
in the kernel (e.g., while the link is busy), the CLIPS link scheduler does not release a new
packet until it is notified of the completion of previous packet transmission. Best-effort
packets are maintained in a separate packet heap within the user-level link scheduler and
serviced at a lower priority than those on real-time clips.
Figure 6 demonstrates traffic policing, traffic isolation, and performance differentiation in
real-time channels. A more detailed evaluation is found in (Mehra et al., 1998).
Figure 6. Traffic isolation: (a) isolation between real-time channels; (b) isolation between best-effort and real-time traffic. The left graph shows that real-time channel 1 is policed to its traffic specification, disallowing violation of that specification; traffic on real-time channel 1 does not affect the QoS of the other real-time channel 2. The right graph shows that increasing best-effort load does not interfere with real-time channel throughput.
4. RTCAST Group Communication Services
The previous section introduced the architecture of the ARMADA real-time communication
service. This architecture lays the groundwork for implementing real-time services with QoS-sensitive
communication. The second thrust of the project has focused on a collection of
such services that provide modular and composable middleware for constructing embedded
applications. The ARMADA middleware can be divided into two relatively independent
suites of services:
- RTCAST group communication services, and
- RTPB real-time primary-backup replication service.
This section presents the RTCAST suite of group communication and fault-tolerance ser-
vices. Section 5 describes the RTPB (real-time primary-backup) replication service.
4.1. RTCAST Protocols
The QoS-sensitive communication service described in Section 3 does not support multicast
channels. Multicast is important, e.g., for efficient data dissemination to a set of
destinations, or for maintaining replicated state in fault-tolerant systems. If consistency of
replicated state is desired, a membership algorithm is also needed. RTCAST complements
aforementioned unicast communication services by mulitcast and membership services
for real-time fault-tolerant applications. RTCAST is based around the process groups
paradigm.
Process groups are a widely-studied paradigm for designing distributed systems in both
asynchronous (Birman, 1993, Amir et al., 1992, van Renesse et al., 1994, Mishra, 1993)
and synchronous (Kopetz and Grünsteidl, 1994, Amir et al., 1995, Cristian et al., 1990) en-
vironments. In this approach, a distributed system is structured as a group of cooperating
processes which provide service to the application. A process group may be used, for
example, to provide active replication of system state or to rapidly disseminate information
from an application to a collection of processes. Two key primitives for supporting process
groups in a distributed environment are fault-tolerant multicast communication and group
membership. Coordination of a process group must address several subtle issues including
delivering messages to the group in a reliable fashion, maintaining consistent views
of group membership, and detecting and handling process or communication failures. If
multicast messages are atomic and globally ordered, consistency of replicated state will be
guaranteed.
RTCAST is especially designed for real-time applications. In a real-time application,
timing failures may be as damaging as processor failures. Thus, our membership algorithm
is more aggressive in ensuring timely progress of the process group. For example, while
ensuring atomicity of message delivery, RTCAST does not require acknowledgments for
every message, and message delivery is immediate without needing additional "rounds"
of message transmissions to ensure that a message was received consistently by all des-
tinations. RTCAST is designed to support hard real-time guarantees without requiring a
static schedule to be computed a priori for application tasks and messages. Instead, an
on-line schedulability analysis component performs admission control on multicast mes-
sages. We envision the proposed multicast and membership protocols as part of a larger
suite of middleware group communication services that form a composable architecture for
the development of embedded real-time applications.
As illustrated in Figure 7, the RTCAST suite of services includes a timed atomic multicast,
a group membership service and an admission control service. The first two are tightly
coupled and thus are considered a single service. Clock synchronization is typically
required for real-time protocols and is enforced by the clock synchronization service. To
support portability, a virtual network interface layer exports a uniform network abstraction.
Ideally, this interface would transparently handle different network topologies, each having
different connectivity and timing or bandwidth characteristics exporting a generic network
abstraction to upper layers. The network is assumed to support unicast datagram service.
Finally, the top layer provides an application programming interface for real-time process
groups.
RTCAST supports bounded-time message transport, atomicity, and order for multicasts
within a group of communicating processes in the presence of processor crashes and communication
failures. It guarantees agreement on membership among the communicating
processors, and ensures that membership changes (e.g., resulting from processor joins or
departures) are atomic and ordered with respect to multicast messages. RTCAST assumes
that processes can communicate with the environment only by sending messages. Thus, a
failed process, for example, cannot adversely affect the environment via a hidden channel.
RTCAST proceeds as senders in a logical ring take turns in multicasting messages over
the network. A processor's turn comes when the logical token arrives, or when it times
out waiting for it. After its last message, each sender multicasts a heartbeat that is used
for crash detection. The heartbeat received from an immediate predecessor also serves as
the logical token.

Figure 7. Software architecture for the RTCAST middleware services.

Destinations detect missed messages using sequence numbers, and when
a processor detects a receive omission, it crashes. Each processor, when its turn comes,
checks for missing heartbeats and eliminates the crashed members, if any, from group
membership by multicasting a membership change message.
In a token ring, sent messages have a natural order defined by token rotation. We reconstruct
message order at the receivers using a protocol layer below RTCAST which detects
out-of-order arrival of messages and swaps them, thus forwarding them to RTCAST in
correct order. RTCAST ensures that "correct" members can reach agreement on replicated
state by formulating the problem as one of group membership. Since the state of a process
is determined by the sequence of messages it receives, a processor that detects a message
receive omission takes itself out of the group, thus maintaining agreement among the remaining
ones. In a real-time system one may argue that processes waiting for a message
that does not arrive will miss their deadlines anyway, so it is acceptable to eliminate the
processes which suffered receive omissions. 2 A distinctive feature of RTCAST is that
processors which did not omit any messages can deliver messages as soon as they arrive
without compromising protocol semantics. Thus, for example, if a reliable multicast is
used to disseminate a critical message to a replicated server, and if one of the replicas
suffers a receive omission, RTCAST will eliminate that replica from the group, while delivering
the message to the remaining replicas immediately. This is in contrast to delaying
delivery of the message until all replicas have received it. The approach is motivated by
the observation that in a real-time system it may be better to sacrifice one replica in the
group than delay message delivery potentially causing all replicas to miss a hard timing
constraint. Finally, membership changes are communicated exclusively by membership
change messages using our multicast mechanism. Since message multicast is atomic and
ordered, so are the membership changes. This guarantees agreement on membership view.
From an architectural standpoint, RTCAST operation is triggered by two different event
types, namely message reception, and token reception (or timeout). It is therefore logically
    msg reception handler:
      if state = RUNNING
        if more msgs from same member
          if missed msgs then CRASH
          else deliver msg
        else if msg from different member
          if missed msgs then CRASH
          else
            check for missed msgs from processors between current and last senders
            if no missing msgs then deliver current msg
            else CRASH
        else if join msg from non-member
          handle join request
      if state = JOINING and msg is a valid join ack
        if need more join acks then wait for additional join acks
        else state = RUNNING

Figure 8. Message reception handler.
structured as two event handlers, one for each event type. The message reception handler
(Figure 8) detects receive omissions, if any, delivers messages in order to the application,
and services protocol control messages. The token handler (Figure 9) is invoked when
the token is received or when the token timeout expires. It detects processor crashes and
sends membership change notifications, if any, and lets client processes send out their
messages during the processor's finite token hold time.
4.2. RTCAST Design and Implementation
This section describes some of the major issues in the design and implementation of RT-
our representative group communication service. A thorough performance evaluation
of the service is reported on in (Abdelzaher et al., 1996) and (Abdelzaher et al., 1997).
The RTCAST application was implemented and tested over a local Ethernet. Ethernet is
normally unsuitable for real-time applications due to packet collisions and the subsequent
retransmissions that make it impossible to impose deterministic bounds on communication
delay. However, since we use a private Ethernet (i.e. the RTCAST protocol has exclusive
access to the medium), only one machine can send messages at any given time (namely,
the token holder). This prevents collisions and guarantees that the Ethernet driver always
succeeds in transmitting each packet on the first attempt, making message communication
delays deterministic. The admission control service described previously can take
    token handler:
      if state = RUNNING
        for each processor p in current membership view
          if no heartbeat seen from all predecessors incl. p
            remove p from group view
        multicast new group view
        send out all queued messages
        mark the last msg
        send out heartbeat msg
      if state = JOINING
        send out join msg

Figure 9. Token handler.
advantage of this predictability, e.g., by creating appropriate clips to manage end-system
resources on each host and make real-time guarantees on messages sent with RTCAST.
4.2.1. Protocol Stack Design The RTCAST protocol was designed to be modular, so
that individual services could be added, changed, or removed without affecting the rest
of the protocol. Each service is designed as a separate protocol layer within the x-kernel
(Hutchinson and Peterson, 1991) protocol framework. The x-kernel is an ideal
choice for implementing the RTCAST middleware services because application requirements
can be easily met by simply reconfiguring the protocol stack to add or remove
services as necessary. The RTCAST implementation uses the following protocol layers:
Admission Control: The Admission Control and Schedulability Analysis (ACSA) layer
is a distributed protocol that keeps track of communication resources of the entire process
group. The protocol transparently creates a clip on each host that runs the process group
to ensure communication throughput guarantees and time-bounded message processing.
It can support multiple prioritized or performance-isolated process groups on the
same machine by creating clips of corresponding priority and corresponding minimum
throughput specification. If real-time guarantees are not needed, this layer can be omitted
from the protocol stack to reduce overhead. Communication will then proceed on a best-effort
basis.
RTCAST: The RTCAST protocol layer encompasses the membership, logical token ring,
and atomic ordering services described in section 4.
Multicast Transport: This protocol implements an unreliable multicast abstraction that
is independent of the underlying network. RTCAST uses the multicast transport layer to
send messages to the group without having to worry about whether the physical medium
provides unicast, broadcast, or true multicast support. The details of how the messages
are actually sent over the network are hidden from higher layers by the multicast transport
protocol, so it is the only layer that must be modified when RTCAST is run on different
types of networks.

Figure 10. RTCAST protocol stack as implemented: (a) CORDS user-level server; (b) split in-kernel CORDS server.
Figure 10 shows the full protocol stack as it is implemented on our platform.
4.2.2. Integration Into the Mach Kernel As Figure 10 shows, the protocol stack representing
the core of the service was migrated into the Mach kernel. While actual RTCAST
development took place in user space to facilitate debugging, its final co-location within the
Mach kernel has several performance advantages. First, as with any group communication
protocol, there can be a high amount of CPU overhead to maintain the group state and
enforce message semantics. By running in the kernel, the RTCAST protocol can run at the
highest priority and minimize communication latency due to processing time. Second, in
the current implementation of MK 7.2 there is no operating system support for real-time
scheduling or capacity reserve. Experience shows that processes running at the user level
can be starved for CPU time for periods of up to a few seconds, which would be disastrous
for RTCAST's predictable communication. By running in the kernel, protocol threads
do not get starved significantly and are scheduled in a much more predictable manner
by the operating system. Finally, there is a problem with the MK 7.2 implementation of
the x-kernel, such that threads which are shepherding messages up the protocol stack can
be queued to run in a different order than the messages arrive from the network. This
results in out-of-order messages that must be buffered and re-ordered to maintain the total
ordering guarantees provided by the protocol. Having to buffer and reorder messages also
delays crash detection, since there is no way of knowing if a missing message is queued
somewhere in the protocol stack or if the sender suffered a failure. By running the protocol
in the kernel, message threads are interrupt driven and run immediately after arriving from
the network, so the message reordering problem does not occur. Protocol performance improved
almost by an order of magnitude when executed in the kernel. For example, when
executed at the user-level, the minimum token rotation time was on average 2.6 ms, 5.7
ms, and 9.6 ms for groups with one, two, and three members respectively. When running
in the kernel, the same measurement yielded token rotation times of 0.43 ms, 1.02 ms, and
1.55 ms. We found that this improvement extended to all aspects of protocol performance.
Note that the above figures suggest a potential scalability problem for larger group sizes
(such as hundreds of nodes). The problem is attributed to the need for software token
passing. Integration with hardware token passing schemes, such as FDDI, will yield much
better performance. Alternatively, to improve scalability, we are currently investigating an
approach based on group composition. Larger process groups are formed by a composition
of smaller ones. This research is presently underway. Initial results show that composite
process groups scale much better than monolithic ones.
Another important focus in developing our group communication middleware was designing
a robust API that would allow application developers to take advantage of our
services quickly and easily. RTCAST API includes (i) bandwidth reservation calls, (ii)
process group membership manipulation functions, (iii) best-effort multicast communication
primitives and (iv) reliable real-time multicast. Bandwidth reservation is used on hosts
to ensure that a multicast connection has dedicated CPU capacity and network bandwidth
(i.e. a minimum token hold time). The token hold time and token rotation period specify
the communication bandwidth allotted to the node. The node can set aside enough end-system
resources to utilize its allotted communication bandwidth by creating a clip (by the
ACSA layer) of a corresponding throughput thereby providing schedulability guarantees.
The membership manipulation functions allow processes to join and leave the multicast
group, query current group membership, create groups, etc. There are two types of group
communication: real-time multicast communication that guarantees end-to-end response
time, and best-effort which does not. The advantage of using a best-effort connection is
that it is optimized for throughput as opposed to meeting individual message deadlines.
Thus, the service protocol stack is faster on the average (e.g., no per-message admission
control), but the variance in queuing delays is higher.
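As a rough illustration of this bandwidth calculus (the numbers are ours, not from the measurements above): a node granted a 1 ms token hold time within a 10 ms token rotation period on a 10 Mb/s Ethernet is allotted about one tenth of the link, i.e., roughly 1 Mb/s, and the ACSA layer would create a clip whose packets-per-period budget corresponds to that rate.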
We collaborated with a group of researchers at the Honeywell Technology Center to implement
a subset of the fault-tolerant real-time distributed application described in Section 1
using the RTCAST protocol. Using the insights gained from this motivating application,
we were able to refine the API to provide the required functionality while maintaining
a simple interface that is easy to program. Based on our experience of the application's
use of the protocol, we also designed a higher-level service library that can be linked with
the application, and which uses the RTCAST API 3 . It is concerned with resource management
in a fault-tolerant system and with providing higher-level abstractions of the protocol
communication primitives. The service library provides for logical processing nodes and
resource pools that transparently utilize RTCAST group communication services. These
abstractions provide a convenient way for application developers to reason about and structure
their redundancy management and failure handling policies while RTCAST does the
actual work of maintaining replica consistency.
5. Real-Time Primary-backup (RTPB) Replication Service
While the previous section introduced a middleware service for active replication, in this
section we present the overall architecture of the ARMADA real-time primary-backup
replication service. We first give an introduction to the RTPB system, then describe the
service framework. Finally, we discuss the implementation of the service, which we believe meets
the objectives.
5.1. Introduction to RTPB
Keeping large amounts of application state consistent in a distributed system, as in the
state machine approach, may involve a significant overhead. Many real-time applications,
however, can tolerate minor inconsistencies in replicated state. Thus, to reduce redundancy
management overhead, our primary-backup replication exploits application data semantics
by allowing the backup to maintain a less current copy of the data that resides on the
primary. The application may have distinct tolerances for the staleness of different data
objects. With sufficiently recent data, the backup can safely supplant a failed primary; the
backup can then reconstruct a consistent system state by extrapolating from previous values
and new sensor readings. However, the system must ensure that the distance between the
primary and the backup data is bounded within a predefined time window. Data objects
may have distinct tolerances in how far the backup can lag behind before the object state
becomes stale. The challenge is to bound the distance between the primary and the backup
such that consistency is not compromised, while minimizing the overhead in exchanging
messages between the primary and its backup.
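A hypothetical sketch of the primary's update path follows; none of these names are from the actual RTPB implementation, and the forwarding test is only one plausible way to keep the backup within the staleness window while suppressing redundant messages:

    typedef long long usec_t;

    struct object {
        usec_t window;      /* allowed primary-backup staleness   */
        usec_t max_delay;   /* bound on primary-to-backup delay   */
        usec_t last_sent;   /* when we last updated the backup    */
    };

    extern void apply_locally(struct object *o, const void *v, usec_t t);
    extern void send_to_backup(struct object *o, const void *v, usec_t t);

    void on_client_update(struct object *o, const void *val, usec_t now)
    {
        apply_locally(o, val, now);
        /* Forward only when skipping this update could let the
         * backup's copy drift past the object's staleness window. */
        if (now - o->last_sent + o->max_delay >= o->window) {
            send_to_backup(o, val, now);
            o->last_sent = now;
        }
    }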
5.2. Service Framework
A very important issue in designing a replication service is its consistency semantics.
One category of consistency semantics that is particularly relevant to the primary-backup
replication in a real-time environment is temporal consistency, which is the consistency
view seen from the perspective of the time continuum. Two types of temporal consistency
are often needed to ensure proper operation of a primary-backup replicated real-time data
services system. One is the external temporal consistency between an object of the external
world and its image on the servers, the other is the inter-object temporal consistency
between different objects or events.
A primary-backup system is said to satisfy the external temporal consistency for an object i
if the timestamp of i at the server is no later than a predetermined time from its timestamp
at the client (the real data). In other words, in order to provide meaningful and correct
service, the state of the primary server must closely reflect that of the actual world. This
consistency is also needed at the backup if the backup were to successfully replace the
primary when the primary fails. The consistency restriction placed on the backup may not
be as tight as that on the primary but must be within a tolerable range for the intended
applications.
The inter-object temporal consistency is maintained if, for any object pair, their temporal
distance constraint (a bound on the temporal distance between any two neighboring updates
of objects i and j, respectively) is observed at both the primary and the backup.
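One possible formalization, in our own notation rather than symbols from the original: let ts_c(i) and ts_s(i) denote object i's timestamps at the client and at a server. External temporal consistency holds if ts_c(i) - ts_s(i) <= delta_i for a predetermined bound delta_i, with a possibly looser bound at the backup than at the primary; inter-object temporal consistency for a pair (i, j) holds if the temporal distance between any two neighboring updates of i and j stays within a prescribed bound delta_ij at both replicas.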
Although the usefulness or practical application of the external temporal consistency
concept is easy to see, the same is not true for inter-object temporal consistency. To
illustrate the notion of inter-object temporal consistency, consider an airplane during
takeoff. There is a time bound between accelerating the plane and lifting the plane
into the air, because the runway is of limited length and the airplane cannot keep accelerating
on the runway indefinitely without lifting off. In our primary-backup replicated real-time
data service, the inter-object temporal consistency constraint between an object pair placed
on the backup can be different from that placed on the primary.
5.3. RTPB Implementation
A temporal consistency model for the Real-time Primary-backup (RTPB) replication service
has been developed (Zou and Jahanian, 1998) and a practical version of the system that
implements the models has been built. Following our composability model, the RTPB
service is implemented as an independent user-level x-kernel based server on our MK
7.2 based platform. Our system includes a primary server and a backup server. A client
application resides in the same machine as the primary. The client continuously senses the
environment and periodically sends updates to the primary. The client accesses the server
using a library that utilizes the Mach IPC-based interface. The primary is responsible for
backing up the data on the backup site and limiting the inconsistency of the data between
the two sites within some required window. The following assumptions are made in the
implementation:
- Link failures are handled using physical redundancy such that network partitions are
  avoided.
- An upper bound exists on the communication delay between the primary and the backup.
  Missed message deadlines are treated as communication performance failures.
- Servers are assumed to suffer crash failures only.
Figure 11 shows our system architecture and the x-kernel protocol stack for the replication
server. The bottom five layers (RTPB to ETHDRV) make up the x-kernel protocol stack.
At the top level of the stack is our real-time primary-backup (RTPB) protocol. It serves
as an anchor protocol in the x-kernel protocol stack. From above, it provides an interface
to the x-kernel based server. From below, it connects with the rest of the protocol stack
through the x-kernel uniform protocol interface. The underlying transport protocol is
UDP. Since UDP does not provide reliable delivery of messages, we need to use explicit
acknowledgments when necessary.
The top two layers are the primary-backup hosts and client applications. The primary
host interacts with the backup host through the underlying RTPB protocol. There are
two identical versions of the client application residing on the primary and backup hosts
respectively. Normally, only the client version on the primary is running.

Figure 11. RTPB architecture and server protocol stack.

But when the
backup takes over in case of primary failure, it also activates the backup client version and
brings it up to the most recent state.
The client application interacts with the RTPB system through the Mach API interface
we developed for the system. The interface enables the client to create, destroy, manipulate
and query reliable objects (i.e., those backed up by our server). Specifically, rtpb_create
and rtpb_destroy create objects on and destroy objects from the RTPB system; rtpb_register
registers objects with the system; rtpb_update and rtpb_query update and query objects; finally,
rtpb_list returns a list of objects that are already registered with the RTPB system. Further
detail on admission control, update scheduling, failure detection and recovery appears in a
recent report (Zou and Jahanian, 1998).
5.4. RTPB Performance
The following graphs show the RTPB response time to client requests and the temporal
distance between the primary and the backup. Both graphs are depicted as a function of the
number of objects admitted into the system, for four different client write rates of
100, 300, 700, and 1000 milliseconds.
Graph (a) shows a fast response time to client requests, in the range of 200 to 400
microseconds. This is mainly due to the decoupling of client request processing from updates to
the backups. Graph (b) shows that RTPB keeps the backup very close to the primary in
terms of the temporal distance between the corresponding data copies of the replicated
objects. In the graph, the distance ranges from 10 to 110 milliseconds which is well within
the range tolerable by most real-time applications.
The two graphs show that RTPB indeed provides fast response to client requests while
maintaining the backup(s) very close to the primary in system state.
Figure 12. RTPB performance graphs: (a) response time to client; (b) primary-backup distance.
6. Evaluation Tools
The third thrust of the ARMADA project is to provide tools for validating and evaluating
the timeliness and fault tolerance capabilities of the target system. Two tools have been
developed to date: Orchestra, a message-level fault injection tool for validation and
evaluation of communication and middleware protocols, and Cogent, a network traffic
workload generator. The following two subsections describe the two tools briefly.
6.1. Orchestra
The ARMADA project has been primarily concerned with developing real-time distributed
middleware protocols and communication services. Ensuring that a distributed system
or communication protocol meets its prescribed specification is a growing challenge that
confronts software developers and system engineers. Meeting this challenge is particularly
important for applications with strict dependability and timeliness constraints. Orchestra
is a fault injection environment which can be used to perform fault injection on
communication protocols and distributed applications. Orchestra is based on a simple
yet powerful framework, called script-driven probing and fault injection. The emphasis of
this approach is on experimental techniques intended to identify specific "problems" in a
protocol or its implementation rather than the evaluation of system dependability through
statistical metrics such as fault coverage (e.g. (Arlat et al., 1990)). Hence, the focus is on
developing fault injection techniques that can be employed in studying three aspects of a
target protocol: i) detecting design or implementation errors, ii) identifying violations of
protocol specifications, and iii) obtaining insights into the design decisions made by the
implementors.
In the Orchestra approach, a fault injection layer is inserted into the communication
protocol stack below the protocol to be tested. As messages are exchanged between
protocol participants, they pass through the fault injection layer on their path to/from the
network. Each time a message is sent, Orchestra runs a script called the send filter
on the message. In the same manner, the receive filter is invoked on each message that is
received from the network destined for the target protocol. The scripts perform three types
of operations on messages:
- Message filtering: for intercepting and examining a message.
- Message manipulation: for dropping, delaying, reordering, duplicating, or modifying
  a message.
- Message injection: for probing a participant by introducing a new message into the
  system.
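As a purely illustrative example of these three operations, a send filter might look as follows in C-like form; Orchestra's actual filters are scripts, and none of the names below come from its real API:

    struct message;
    enum verdict { PASS, DROP };

    extern int  is_target_protocol(const struct message *m);
    extern int  random_percent(void);             /* uniform 0..99 */
    extern void delay_message(struct message *m, int ms);
    extern int  time_to_probe(void);
    extern struct message *make_probe(void);
    extern void inject_message(struct message *m);

    enum verdict send_filter(struct message *m)
    {
        if (is_target_protocol(m)) {              /* filtering     */
            if (random_percent() < 5)
                return DROP;                      /* manipulation  */
            delay_message(m, 20);                 /* manipulation  */
        }
        if (time_to_probe())
            inject_message(make_probe());         /* injection     */
        return PASS;
    }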
The Orchestra toolset on the MK 7.2 platform is based on a portable fault injection
core, and has been developed in the CORDS-based x-kernel framework provided
by OpenGroup. The tool is implemented as an x-kernel protocol layer which can be
placed at any level in an x-kernel protocol stack. This tool has been used to perform
experiments on both the Group Interprocess Communication (GIPC) services from Open-
Group, and middleware and real-time channel services developed as part of the ARMADA
project. Further details on Orchestra can be found in several recent reports,
e.g., (Dawson et al., 1996, Dawson et al., 1997).
6.2. Cogent: COntrolled GEneration of Network Traffic
In order to demonstrate the utility of the ARMADA services, it is necessary to evaluate
them under a range of operating conditions. Because many of the protocols developed rely
on the communication subsystem, it is important to evaluate them under a range of realistic
background traffic. Generating such traffic is fairly difficult since traffic characteristics can
vary widely depending on the environment in which these services are deployed. To this
end, we have developed Cogent (COntrolled GEneration of Network Traffic). Cogent is
a networked synthetic workload generator for evaluating system and network performance
in a controlled, reproducible fashion. It is based on a simple client-server model and allows
the user to flexibly model network sources in order to evaluate various aspects of network
and distributed computing.
Implemented in C++ with a lex/yacc front end, the current version of the tool takes
a high level specification of the distributed workload and generates highly portable C++
code for all of the clients and servers specified. The user can select from a number
of distributions which have been used to model a variety of network sources such as
Poisson (Paxson and Floyd, 1994, Paxson, 1994), Log Normal (Paxson and Floyd, 1994),
Pareto (Leland et al, 1994, Crovella and Bestavros, 1996, Garret and Willinger, 1994), and
Log Extreme (Paxson, 1994). The tool then generates the necessary compilation and
distribution scripts for building and running the distributed workload.
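To give a flavor of what a generated source computes, the fragment below draws Pareto-distributed inter-arrival times by inverse-transform sampling; it is our own illustration, not code emitted by Cogent:

    #include <math.h>
    #include <stdlib.h>

    /* Draw a Pareto(alpha, xm) variate: if U is uniform on (0,1],
     * then xm / U^(1/alpha) is Pareto distributed with minimum xm. */
    double pareto_sample(double alpha, double xm)
    {
        double u = (rand() + 1.0) / (RAND_MAX + 1.0);  /* u in (0,1] */
        return xm / pow(u, 1.0 / alpha);
    }

A heavy-tailed source would then, for instance, sleep pareto_sample(1.2, 0.01) seconds between sends to mimic self-similar traffic.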
Cogent has also been implemented in Java. Both the generator and the generated code
are Java based. Because of the portability of Java, this implementation simplifies both
the compilation and distribution of the workload considerably. We also plan on addressing
CPU issues in order to model common activities at the end hosts as well. Another feature
being added is the ability for a client or a server to be run in trace-driven mode. That is,
to run from a web server or a tcpdump (McCanne and Jacobson, 1993) log file. Finally,
we will be implementing additional source models in order to keep up with the current
literature.
7. Conclusions
This paper presented the architecture and current status of the ARMADA project conducted
at the University of Michigan in collaboration with the Honeywell Technology Center. We
described a number of communication and middleware services developed in the context of
this project, and illustrated the general methodology adopted to design and integrate these
services. For modularity and composability, ARMADA middleware was realized as a set
of servers on top of a microkernel-based operating system. Special attention was given
to the communication subsystem since it is a resource common to the middleware services
developed. We proposed a general architecture for QoS-sensitive communication, and also
described a communication service that implements this architecture.
We are currently redesigning an existing command and control application to benefit from
ARMADA middleware. The application requires bounded time end-to-end communication
delays guaranteed by our communication subsystem, as well as fault-tolerant replication and
backup services provided by our RTCAST group communication and membership support,
and the primary-backup replication service. Testing tools such as ORCHESTRA will help
assess communication performance and verify the required communication semantics.
Controlled workload generation using COGENT can assist in creating load conditions of
interest that may be difficult to exercise via regular operation of the application.
Our services and tools are designed independently of the underlying microkernel or the
communication subsystem; our choice of experimentation platform was based largely on
the rich protocol development environment provided by x-kernel and CORDS. For better
portability, we are extending our communication subsystem to provide a socket-like API.
We are also investigating the scalability of the services developed. Scaling to large embedded
systems may depend on the way the system is constructed from smaller units. We are
looking into appropriate ways of defining generic structural system components and composing
large architectures from these components such that certain desirable properties are
globally preserved. Developing the "tokens" and "operators" of such system composition
will enable building predictable analytical and semantic models of larger systems from
properties of their individual constituents.
Notes
1. The Open Group was formerly known as the Open Software Foundation (OSF).
2. A lower communication layer may support a bounded number of retransmissions.
3. The APIs for both the service library and the RTCAST protocol are available at
http://www.eecs.umich.edu/RTCL/armada/rtcast/api.html.
References
Lightweight multicast for real-time process groups
The X-Kernel
The process group approach to reliable distributed computing
TTP-A Protocol for Fault-Tolerant Real-Time Systems
On the self-similar nature of Ethernet traffic (extended version)
Empirically derived analytic models of wide-area TCP connections
Analysis, modeling and generation of self-similar VBR video traffic
The Totem single-ring ordering and membership protocol
Self-similarity in World Wide Web traffic
Experiments on six commercial TCP implementations using a software fault injection tool
Fault-tolerance in the advanced automation system
Real-Time Communication in Multihop Networks
Testing of fault-tolerant and real-time distributed systems via protocol fault injection
RTCAST
Structuring communication software for quality-of-service guarantees
Realizing Services for Guaranteed-QoS Communication on a Microkernel Operating System
Real-Time Primary-Backup (RTPB) Replication with Temporal Consistency Guarantees
Design and Performance of Horus: A Lightweight Group Communications System
Keywords: fault-tolerant systems; communication protocols; distributed real-time systems
A Highly Available Local Leader Election Service

Abstract. We define the highly available local leader election problem, a generalization of the leader election problem for partitionable systems. We propose a protocol that solves the problem efficiently and give some performance measurements of our implementation. The local leader election service has been proven useful in the design and implementation of several fail-aware services for partitionable systems.

I. Introduction

The leader election problem [1] requires that a unique
leader be elected from a given set of processes. The
problem has been widely studied in the research community
[2], [3], [4], [5], [6]. One reason for this wide interest is
that many distributed protocols need an election protocol
as a sub-protocol. For example, in an atomic broadcast
protocol the processes could elect a leader that orders the
broadcasts so that all correct processes deliver broadcast
messages in the same order. The highly available leader
election problem was defined in [7] as follows: (S) at any
point in time there exists at most one leader, and (T) when
there is no leader at time s, then within at most τ time
units a new leader is elected.
The highly available leader election service was first defined
for synchronous systems in which all correct processes
are connected, that is, can communicate with each other in
a timely manner. Recently, the research in fault-tolerant
systems has been investigating asynchronous partitionable
systems [8], [9], i.e. distributed systems in which the set
of processes can split in disjoint subsets due to network
failures or excessive performance failures (i.e. processes or
messages are not timely; see Section III for details). Like
many other authors do, we call each such subset a partition.
For example, processes that run in different LANs
can become partitioned when the bridge or the network
that connects the LANs fails or is "too slow" (see Figure
4). One reason for the research in partitionable systems
is that the "primary partition" approaches [10] allow only
the processes in one partition to make progress. To increase
the availability of services, one often wants services
to make progress in all partitions.
Our recent design of a membership [11] and a clock synchronization
service for partitionable systems [12] has indicated
that we need a leader election service with different
Department of Computer Science & Engineering,
University of California, San Diego, La Jolla, CA
92093-0114. e-mail: cfetzer@cs.ucsd.edu, flaviu@cs.ucsd.edu,
http://www.cs.ucsd.edu/~cfetzer. This research was supported
by grants F49620-93 and F49620-96 from the Air Force Office of
Scientific Research. An earlier version of this paper appeared in the
Proceedings of the Sixth IFIP International Working Conference on
Dependable Computing for Critical Applications (DCCA-6). For
more information see http://www.cs.ucsd.edu/~cfetzer/HALL.
properties for partitionable systems than for synchronous
systems. The first problem that we encountered is how to
specify the requirements of such a local leader election ser-
vice. Ideally, such a service should elect exactly one local
leader in each partition. However, it is not always possible
to elect a leader in each partition. For example, when
the processes in a partition suffer excessive performance
failures, one cannot enforce that there exists exactly one
local leader in that partition. To approach this problem,
we have to define in what partitions local leaders have to
be elected: we introduce therefore the notion of a stable
partition. Informally, all processes in a stable partition are
connected to each other, i.e. any two processes in a stable
partition can communicate with each other in a timely
manner. The processes in a stable partition are required to
elect a local leader within a bounded amount of time. An
election service might be able to elect a local leader in an
unstable partition, i.e. a partition that is not stable, but it
is not guaranteed that there will be a local leader in each
unstable partition. We call a process "unstable" when it is
part of an unstable partition.
In each stable partition, a local leader election service
has to elect exactly one local leader. In an unstable partition
the service might not be able to elect exactly one
local leader. It can be advantageous to split an unstable
partition into two or more "logical partitions" with one
local leader each if that enables the processes in each of
these logical partitions to communicate with each other in
a timely manner (see Figure 1). To explain this, note that
our definition of a "stable partition" will require that all
processes in such a partition be connected to each other.
This implies that when the connected relation in a partition
is not transitive, that partition is unstable. For example,
the connected relation can become non-transitive for three
processes {p, q, r} if the network link between p and r fails
or is overloaded while the links between p and q and q and
r stay correct (see Figure 2).
In specific circumstances, our local leader service splits
an unstable partition into two or more logical partitions
with one leader in each. The service makes sure that a
timely communication between any two processes in a logical
partition is possible. However, sometimes this communication
has to go via the local leader in case two processes
and r in a logical partition are only connected through
the local leader q (see Figure 2.b). Informally, a logical
partition created by our local leader service is a set of processes
such that the local leader of this logical partition can
communicate in a timely fashion with all processes in the
logical partition.
The scenario depicted by Figure 1 can be one such situ-
ation, where two logical partitions with one leader in each
are created. However, when logical partitions are created,
Fig. 1. A local leader election service is permitted to split an unstable partition {p,q,r,u,v,w} into two logical partitions {p,q,r} and {u,v,w}.
it is not done trivially; in particular, we prohibit the case
where a local leader service simply elects all unstable processes
as local leaders. For example, in an "unstable" trio
that consists of two processes p and r that are connected
to a third process q but are not connected to each other,
only one of the three processes is permitted to become local
leader (see Figure 2). Note that we do not want to have
two local leaders p and r even when these two processes
are only indirectly connected through q (see Figure 2.g).
The intuition behind this restriction is that the election of
a local leader l has to be "supported" by all processes connected
to l. Since p and r cannot both get the support of
q, at most one of the two processes is allowed to become
leader.
Fig. 2. The trio {p, q, r} is unstable since p and r are not connected. A local leader election service must elect at most one process in the trio as local leader.
We derive in this paper a formal specification for the
highly available local leader election service. The specification
implies that a local leader service creates logical
partitions so that (1) logical partitions never overlap, (2)
stable partitions are subsets of logical partitions, (3) two
local leaders are always in two separate logical partitions,
and (4) logical partitions are such that processes in one
partition are not connected to the local leader of any other
logical partition.
In this paper we propose an efficient protocol that implements
a highly available local leader election service. We
use this protocol in our fail-aware group membership service
[11] and fail-aware clock synchronization service for
partitionable systems [12]. We give performance measurements
of our implementation on a network of workstations.
II. Related Work
There are many publications about solutions to the
leader election problem for synchronous and asynchronous
systems [1], [13], [2], [3], [4], [5], [6]. The election problem
was first defined and solved by [1]. Many election algorithms
are based on the "message extinction" principle that
was first introduced by [13]: a process r rejects a message
that requests that a process q should become leader whenever
r knows of a process p that wants to become leader
and p has a lower id than q. Many papers about leader
election do not address the masking of failures during an
election or the election of a new leader when the previous
one fails. We are not aware of any other specification for a
local leader election service for partitionable systems.
Our implementation of an election service contains some
novel aspects. First, instead of using message extinction,
we use an independent assessment protocol [14], [11] that
approximates the set of processes in a stable partition.
Typically, this ensures that only one process l in a stable
partition requests to become leader and all other processes
in l's stable partition support l's election. A local leader
has to renew its leadership in a round-based fashion. This
ensures that the crash of a local leader in a stable partition
results in its replacement within a bounded amount of time.
In a stable partition of N processes the protocol sends one
broadcast message and N unicast messages per round.
Second, we use communication by time to ensure that logical
partitions never overlap: we use a mechanism similar
to that of a lease [15] to make sure that a process is at
any point in time in at most one logical partition.
A protocol option allows us to use the same protocol to
elect either local leaders or one global leader: the protocol
can be forced to create only logical partitions that contain
a majority of the processes. Since logical partitions never
overlap, at any time there can exist at most one majority
partition and thus, at most one global leader in the system.
While a local leader can be used to maintain consistency
amongst the processes in a partition, a global leader can
be used to maintain consistency amongst all processes. For
example, a local leader can be used to ensure mutual exclusion
between the processes in one partition while a global
leader can be used to ensure mutual exclusion between all
processes.
Some group membership services for partitionable systems
[16], [17], [18] can be used to elect local leaders. For
example, the strong membership protocol of [16] or the
three round protocol of [18] can be used to elect local leaders
such that each local leader is in a disjoint partition.
However, a local leader service can be said to be more
"basic" than a membership service in the sense that (1)
a membership service for partitionable systems typically
elects local leaders that create new groups (e.g. see [16]),
and (2) an implementation of a local leader service does not
need the stronger properties provided by a group membership
service such as an agreement on the history of groups.
III. Timed Asynchronous System Model
The timed asynchronous system model [19] is an abstraction
of the properties of most distributed systems encountered
in practice, built out of a set of workstations connected
by a LAN or WAN. The timed model makes very
few assumptions about a system and hence, almost all practical
distributed systems can be described as timed asynchronous
systems. Since it makes such weak assumptions,
any solution to a problem in the timed model can be used
to solve the same problem in a practical distributed sys-
tem. The timed model is however sufficiently strong to
solve many practically relevant problems, such as clock
synchronization, highly available leadership, membership,
atomic broadcast and availability management [19].
The timed model describes a distributed system as a finite
set of processes P linked by an asynchronous datagram
service. The datagram service provides primitives to transmit
unicast and broadcast messages. A one-way time-out
delay δ is defined for the transmission delays of messages:
although there is no guarantee that a message will be delivered
within δ time units, the one-way timeout is chosen
so as to make the likelihood of a message being delivered
within δ time units suitably high [20]. We say that a process
receives a message m in a timely manner iff the transmission
delay of m is at most δ. When the transmission delay
of m is greater than δ, we say that m has suffered a performance
failure or that m is late [20].
We assume that there exists a constant δ_min that denotes
the minimum message transmission delay: any message
sent between two remote processes has a transmission
delay of at least δ_min time units. By "remote" we mean
that the message is sent via a network.
The asynchronous datagram service has an omis-
sion/performance failure semantics [20]: it can drop a message
or it can fail to deliver a message in a timely manner,
but the probability that it delivers corrupted messages is
negligible. Broadcast messages allow asymmetric perfor-
mance/omission failures: a process might receive a broadcast
message m in a timely manner, while another process
might receive m late or not at all.
The asynchronous datagram service satisfies the following
requirements:
• Validity: when a process p receives a message m from q
at some time t, then indeed there exists some earlier time
s < t such that q sent m to p at s.
• No-duplication: a process receives a message m at most
once, i.e. when message m is delivered to process q at
time s, then there exists no other time t ≠ s such that the
datagram service delivers m to q at t too.
The process management service defines a scheduling
time-out delay σ, meaning that a process is likely to react
to any trigger event within σ time units (see [19]). If p takes
more than σ time units to react to a trigger event, it suffers
a performance failure. We say that p is timely in an interval
[s, t] iff at no point in [s, t] is p crashed and p does not suffer
any performance failure in [s, t]. We assume that processes
have crash/performance failure semantics [20]: they can
only suffer crash and performance failures. Processes can
recover from crashes.
Two processes are said to be connected [18] in [s, t] iff
they are timely in [s, t] and each message sent between them
in [s, t - δ] is delivered in a timely manner (see Figure 3).
We denote that p and q are connected in [s, t] by using the
predicate connected(p,q,s,t).
Fig. 3. Timely processes p, q are connected in [s, t] iff all messages sent between them in [s, t - δ] are delivered within δ time units.
Processes have access to local hardware clocks with
a bounded drift rate. Correct hardware clocks display
strictly monotonically increasing values. We denote the
local hardware clock of a process p by H_p. For simplicity,
we assume in this paper that we can neglect the granularity
of a hardware clock: e.g. a clock has a resolution of 1 μs
or smaller. Hardware clocks proceed within a linear
envelope of real-time: the drift rate of a correct hardware
clock H_p is bounded by an a priori given constant ρ so that,
for any interval [s, t]:
(1 - ρ)(t - s) ≤ H_p(t) - H_p(s) ≤ (1 + ρ)(t - s).
An important assumption is that the hardware clock of
any non-crashed process is correct. Informally, we require
that we can neglect the probability that the drift rate of
a hardware clock of a non-crashed process is not within [-ρ, ρ].
Whether some failure probability is negligible depends on
the stochastic requirements of an application [20], [21]. For
non-critical applications, the use of a simple counter connected
to a quartz oscillator and an appropriately chosen
ρ provide a sufficiently close approximation of a crash failure
semantics, i.e. one can neglect the probability that any
clock failure except a clock crash failure occurs. For safety
critical applications, such an implementation might not be
sufficient. However, one can use multiple oscillators and
counters to make sure that the probability of any clock
failure except a clock crash failure becomes negligible [22].
For simplicity, we assume that a hardware clock does not
recover from a crash. Hardware clocks do not have to be
synchronized: the deviation H_p(t) - H_q(t) between two
hardware clocks H_p and H_q is not assumed to be bounded.
For most quartz clocks available in modern computers,
the maximum hardware clock drift rate ρ is in the order
of 10^-4 to 10^-6. Since ρ is such a small quantity, in what
follows we neglect terms in the order of ρ^2 or higher. In
particular, we will equate (1 + ρ)^-1 and (1 - ρ). When a
process measures the length of an interval [s, t] by T =
H_p(t) - H_p(s), the error of this measurement is within
[-ρ(t - s), ρ(t - s)].
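As a worked example of these bounds (a restatement, with an assumed drift rate): for ρ = 10^-4 and a measured clock-time duration T = H_p(t) - H_p(s) = 10 s, the real-time duration t - s lies in [T/(1 + ρ), T/(1 - ρ)] ≈ [9.999 s, 10.001 s], i.e. the measurement error is within about ±ρ(t - s) = ±1 ms.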
IV. Communication Partitions
The timeliness requirement of our specification of the local
leader problem will be based on the notion of a stable
partition. However, there are many possible and reasonable
definitions of a stable partition because one can put
different constraints on the kind of communication possible
between the processes in two stable partitions. The
strongest definition would require that no communication
be possible between two stable partitions (see the definition
of the predicate stable in [18]) while the weakest definition
would not place any constraints on the communication between
partitions. The timeliness requirement of the local
leader problem will demand that a local leader be elected
in any stable partition within a bounded amount of time.
Therefore, the weaker the definition of a "stable partition"
is, the stronger will be the timeliness requirement since a
protocol is required to elect a leader under harder conditions.
In this paper, we use a formalization of the notion
of a "stable partition" which is in between the above two
extremes: a Δ-partition (see below). The protocol we propose
is based on Δ-partitions.
The definition of the local leader problem is largely independent
of the actual definition of a stable partition.
However, we assume that all processes in a stable partition
are connected. We introduce a generic predicate stablePartition
that denotes the definition that is assumed by
an implementation of a local leader service: stablePartition(SP,s,t)
is true iff the set of processes SP is a stable
partition in interval [s, t].
In this section, we introduce one possible definition of a
stable partition. Let us motivate it by two LANs connected
by a network (see Figure 4): the processes that run in
one LAN can communicate with the processes in the other
LAN via a network. When the network provides a fast
communication between the two LANs, we want to have
one leader for both LANs. Since the network can become
overloaded, processes in the two LANs can become logically
disconnected in the sense that the communication between
them is too slow for having only one leader for the two
LANs. In that case, we want to have a local leader in each
of the two LANs.
Fig. 4. Two LANs are connected by a network. The processes in the two LANs become partitioned when the network fails or is "too slow".
Two processes p and q are disconnected [18] in an interval
[s, t] iff no message sent between these two processes
arrives during [s, t] at its destination. In this paper, we introduce
a weaker predicate that we call "Δ-disconnected".
The intuition behind this predicate is that we can use a fail-aware
datagram service [23] to classify messages as either
"fast" or "slow". The service calculates an upper bound
on the transmission delay of each message it delivers. If
this bound is greater than some given Δ, the message is
classified as "slow" and otherwise, the message is classified
as "fast". Constant Δ is chosen such that the calculated
upper bound for messages sent between two connected processes,
i.e. messages that have a transmission delay of at
most δ, is at most Δ. One has to choose Δ > δ since
one can only determine an upper bound and not the exact
transmission delay of a message.
To be able to calculate an upper bound on the transmission
delay of each message it delivers, the fail-aware datagram
service [23] maintains for each process q an array TS
such that TS contains for each process p the receive and
send time stamps of some message n that q has received
from p (see [23] for details). The fail-aware datagram service
piggy-backs on each unicast message m it sends from
q to p the time stamps of n and the send stamp of m.
The computation of the upper bound for m uses the time
stamps of the round-trip (n, m): the transmission delay of
m is not greater than the duration between p sending n and
p receiving m, since q had received n before it sent m. One
can use several techniques to improve this upper bound
such that it becomes close to the real transmission delay of
m. The upper bound calculation is similar to the reading
error computation in probabilistic clock reading [24].
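The round-trip bound can be made concrete with a small sketch. This is not the fail-aware datagram implementation of [23]; the timestamp fields, the drift adjustment and the δ_min term are assumptions chosen to match the description above.

#include <cstdio>

// Clock-time stamps for the round-trip (n, m): p sends n, q receives n,
// q later sends m piggy-backing n's stamps, and p receives m.
struct RoundTrip {
    double p_sent_n;  // H_p when p sent n
    double q_recv_n;  // H_q when q received n (piggy-backed on m)
    double q_sent_m;  // H_q when q sent m    (piggy-backed on m)
    double p_recv_m;  // H_p when p received m
};

// td(m) <= real duration of the round-trip minus the time q held n,
// minus the minimum transmission delay delta_min of n. The drift rate
// rho inflates p's round-trip measurement and deflates q's holding time.
double upper_bound(const RoundTrip& rt, double rho, double delta_min) {
    double round_trip = (rt.p_recv_m - rt.p_sent_n) * (1.0 + rho);
    double held = (rt.q_sent_m - rt.q_recv_n) * (1.0 - rho);
    return round_trip - held - delta_min;
}

int main() {
    RoundTrip rt{0.0, 0.004, 0.010, 0.017};
    std::printf("ub(m) = %f s\n", upper_bound(rt, 1e-4, 0.001));
    return 0;
}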
A process p is Δ-disconnected from a process q in a
given time interval [s, t] iff all messages that p receives
in [s, t] from q have a transmission delay of more than Δ
time units (see Figure 5). We use the predicate
Δ-disconnected(p,q,s,t) to denote that p is Δ-disconnected
from q in [s, t].
Fig. 5. Process p is Δ-disconnected from q in [s, t] because all messages that p receives from q in [s, t] have a transmission delay of more than Δ. Note that q might receive messages from p with a delay of less than Δ.
We say that a non-empty set of processes SP is a Δ-partition
in an interval [s, t] iff all processes in SP are
mutually connected in [s, t] and the processes in SP are
Δ-disconnected from all other processes (see Figure 6):
Δ-partition(SP,s,t) ≡ (∀ p, q ∈ SP: connected(p,q,s,t)) ∧ (∀ p ∈ SP, r ∈ P - SP: Δ-disconnected(p,r,s,t)).
V. Specification
In this section, we derive a formal specification for the
local leader problem. The main goal of a local leader service
is to elect one local leader per stable partition. However,
the specification has also to constrain the behavior of
Fig. 6. All processes in a Δ-partition can communicate with each other in a timely manner. All messages from outside the partition have a transmission delay of more than Δ.
processes that are not in stable partitions. Our approach
to this problem is that we require that a local leader service
creates logical partitions such that (1) in each logical
partition there is at most one leader, and (2) each stable
partition is included in a logical partition.
The local leader election problem is defined using three
predicates and one constant, all of which have to be instantiated
by each implementation of a local leader service: stablePartition,
Leader, supports and the constant τ. The predicate
Δ-partition(SP,s,t) defined in Section IV is one possible
definition of stablePartition. The predicate Leader_p(t)
is true iff process p is a local leader at time t. Our specification
is based on the idea that a process p has to collect some
support (e.g. votes) before p can become leader. A vote
of a process q for process p can have a restricted lifetime,
i.e. q can say that its vote for p is only valid for a certain
amount of time. Our specification of the local leader problem
is independent of the actual way a protocol implements
the voting. We achieve this by introducing the notion of a
process q supporting some process p. By "q supports p at
time t" we mean that q has voted for p's election as a local
leader and this vote is still valid at time t. Formally, this
is expressed by the predicate supports t (p,q), defined to be
true iff p supports q's election as local leader at time t.
We will require that at any point in time a process support
at most one process and that a local leader l be supported
by all processes that are connected to l. In particu-
lar, a leader p in a stable partition SP must be supported
by all processes in SP , since all processes in a stable partition
are by definition connected. We will define the predicates
Leader and supports associated with the proposed
local leader election protocol in Section IX.
The specification of the local leader problem consists of
four requirements: (T, SO, BI, LS). The timeliness requirement
(T) requires that in any stable partition a local leader
be elected after at most τ time units (see Figure 7). To allow
a local leader service to implement a rotating leader
schema (see Figure 8), after a first local leader is elected
in a stable partition, we do not require that this process
stays a local leader as long as the partition remains stable.
Instead, we require that in any interval of length τ in which
a set of processes SP is a stable partition, there exist at
least one point in time at which there exists a local leader
in SP .
Fig. 7. When a stable partition {p, q, r} forms at time t - τ, it is guaranteed that a local leader p is elected within τ time units.
The timeliness requirement (T) can formally be expressed as follows.
(T) When a set of processes SP is a stable partition in an
interval [t - τ, t], then there exists a process p ∈ SP and a
time s ∈ [t - τ, t] so that p is local leader at time s:
∃ p ∈ SP ∃ s ∈ [t - τ, t]: Leader_p(s).
Fig. 8. A rotating leader schema transfers the leadership periodically between the processes. This schema has been proven useful in the implementation of atomic broadcast services.
We require that at any point in time a process r support
at most one process. Formally, we state the "Support at
most One" requirement as follows:
(SO) For any time t, if a process r would support a process
p at t and a process q at t, then p and q are the same
process:
supports_t(r,p) ∧ supports_t(r,q) ⇒ p = q.
Fig. 9. Requirement (SO) requires that a process support at most one process at a time and (LS) that a leader support itself. Thus, in the trio {p, q, r} there can exist at most one leader whenever (SO,LS,BI) is satisfied.
We already mentioned that when there is a trio of three
processes p, q, and r such that p, q and q, r are connected
(see Figure 2), we want that at most one of the three
processes be a local leader. We therefore introduce two
more requirements: the "leader self support" requirement
(LS) and the "bounded inconsistency" requirement (BI).
Requirement (BI) requires that when a local leader p is
connected to a process q for at least τ time units, q must
support p. In other terms, a process can be connected
to two leaders for at most τ time units (bounded inconsistency).
For example, when two stable partitions merge
into a new stable partition, after τ time units there is at
most one local leader in the new stable partition.
(BI) When a process p is local leader at time t and p has
been connected for at least τ time units to a process q, then
q supports p at t:
Leader_p(t) ∧ connected(p,q,t - τ,t) ⇒ supports_t(q,p).
A timely process is always connected to itself. Hence,
requirement (BI) implies that a timely local leader has to
support itself within τ time units of becoming local leader.
We strengthen this special case by requiring that any local
leader always support itself; in particular, a local
leader l has to support itself as soon as it becomes local
leader and even when l is slow. We show in Section VI
how requirements (LS) and (SO) ensure that there is at
most one local leader in each logical partition.
(LS) A local leader always supports itself:
Leader_p(t) ⇒ supports_t(p,p).
Let us explain why the three requirements (SO,LS,BI)
imply that in a trio {p, q, r} in which p, q and q, r are connected
for at least τ time units, there must be at most one leader
(see Figure 9). If p and q were leaders at the same time, p
would have to support itself (LS) and p would have to
support the local leader q because p and q are connected
(BI). However, p is only allowed to support one process at
a time (SO). Thus, p and q cannot be leaders at the same
point in time t. If p and r were leaders at the same
time, q would be required to support both p and r (BI). This
would again violate requirement (SO). Therefore, at most
one process in {p, q, r} can be leader.
Fig. 10. The supports predicate partitions the set of processes.
VI. Logical Partitions
We now show how the supports predicate creates logical
partitions and that each of these partitions contains at
most one leader. Furthermore, each leader in a stable partition
SP is in a logical partition LP that contains SP, i.e.
SP ⊆ LP. Intuitively, a logical partition LP that contains
a process p contains each process q for which there exists
a finite, undirected path in the supports-graph between p
and q (see Figure 10). By undirected we mean that the
path ignores the "direction" of the supports predicate.
Formally, we define logical partitions with the relation
SUPPORT_t that is the reflexive, symmetric, and transitive
closure of supports_t (see Figure 11).
Fig. 11. The SUPPORT_t-relation is the reflexive, symmetric, and transitive closure of supports_t. The closure creates fully connected subgraphs that are isolated from each other.
By definition,
SUPPORT_t is an equivalence relation that partitions the
set of processes into completely connected subgraphs. We say
that two processes p and q are in the same logical partition
at time t iff SUPPORT_t(p, q) is true. Since SUPPORT_t
is reflexive, each process is in a logical partition. Two logical
partitions LP1 and LP2 are either non-overlapping or
they are equal because of the symmetry and transitivity of
SUPPORT_t.
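As an illustration of how SUPPORT_t induces logical partitions, the closure can be computed with a standard union-find pass over the supports edges. This is a minimal sketch with hypothetical process ids, not part of the protocol itself.

#include <cstdio>
#include <map>
#include <vector>

// Union-find over process ids; the reflexive, symmetric, transitive
// closure of the supports edges is exactly its connected components.
struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) {
        for (int i = 0; i < n; ++i) parent[i] = i;
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

int main() {
    // supports[q] = p means "q supports p at time t" (hypothetical data:
    // 0 and 3 are local leaders, so they support themselves by (LS)).
    std::map<int, int> supports = {{0, 0}, {1, 0}, {2, 0}, {3, 3}, {4, 3}};
    UnionFind uf(5);
    for (auto [q, p] : supports) uf.unite(q, p);
    // Processes with the same representative are in one logical partition.
    for (int x = 0; x < 5; ++x)
        std::printf("process %d -> logical partition %d\n", x, uf.find(x));
    return 0;
}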
Fig. 12. By requiring that a leader supports itself, we guarantee that there is no undirected path in the supports-graph that contains more than one local leader.
Let us show that in each logical partition LP there is
at most one local leader. The intuition is that by requiring
that a local leader support itself, we split any path or
cycle with two local leaders (which could exist in a supports-graph
that does not satisfy (LS)) into two paths with one
local leader each (see Figure 12). More precisely, we can
prove by contradiction that there exists at most one leader
per logical partition. To do so, let us assume that there
exist a time t and a logical partition LP that contains
two local leaders p and q at t (see Figure 13). Since p, q ∈ LP,
by definition SUPPORT_t(p, q) holds. Therefore,
there has to exist a finite, undirected path UP in the
supports-graph between p and q. Since at any time a process
supports at most one process (SO) and a leader has
to support itself (LS), p and q are at the two ends of UP
such that there exist two processes k and l in UP that
support p and q, respectively. Processes k and l have to
be supported themselves by two other processes because
of (SO). This argument can be applied recursively to show
that the path UP would have to have an infinite length.
Hence, there cannot exist a finite path between two leaders
and therefore a logical partition contains at most one local
leader.
Fig. 13. If two leaders p and q were in the same logical partition, there would exist a finite undirected path between p and q. Since p and q have to be at the two ends of that path by (LS) and (SO), there would have to exist a process o that supports two processes.
Since all processes in a stable partition are mutually connected,
(BI) implies that within τ time units after a stable
partition SP has formed, any local leader in SP is supported
by all processes in SP (see Figure 14). Thus, after τ
time units there is at most one leader per stable partition
because at any point in time a process supports at most
one process (SO).
Fig. 14. A leader in a stable partition that has formed at least τ time units ago is supported by all processes in its partition.
When there exists a stable partition SP that
has formed no later than time t - τ and p is a local leader in
SP at time t, all processes in SP support p (see Figure 15).
Hence, between any two processes u, v ∈ SP there exists
an undirected path (u, p, v). This implies that u and v are
in the same logical partition. Note that a logical partition
LP can be a strict superset of a stable partition SP: it is
allowed for a process n outside of SP to support a process
in SP and hence, n can be in the logical partition LP that
contains SP.
Fig. 15. The logical partition {p, q, r, n} is a strict superset of the stable partition {p, q, r}.
When a stable partition SP forms at time t - τ, then for τ
time units there could exist more than one local leader because
leaders from previous partitions have to be demoted
first (see Figure 16). Note that even though there might
exist more than one local leader in a stable partition for a
bounded amount of time, each of these local leaders is in a
logical partition with no other local leader.
Fig. 16. When two stable partitions merge into a new partition, there might exist multiple local leaders in the same stable partition. The duration of such an inconsistent behavior is however bounded by τ.
Requirement (BI) demands that a local leader p be supported
by all processes that are connected to p for at least
τ time units. This requirement ensures that two parallel
local leaders p and q are only elected when there is
some good reason for that (see Figure 17): the supporters
of p cannot be connected to q (for longer than τ) and the
supporters of q are not connected to p (for longer than τ).
Fig. 17. Requirement (BI) prohibits that a supporter of a local leader p has been connected to another local leader q for more than τ time units.
VII. Protocol Overview
The idea of the proposed protocol for a local leader election
service is the following. The processes send all messages
with a fail-aware datagram service [23] that classifies
all messages it delivers as either "fast" or "slow". Each
process p uses an independent assessment protocol [11] to
approximate the set of processes in its Δ-partition by a set
that we call aliveSet_p: this set contains the processes from
which p has "recently" received a fast message. This independent
assessment protocol does not send any messages.
It uses the messages sent by other services like the local
leader election service. To update the alive-sets, it stores
for each process pair p and q the receive time stamp of the
most recent fast message that q has received from p.
A process p that has an id smaller than the ids of all
other processes in its alive-set broadcasts periodic
"Election"-messages to request that the other processes
support its election as local leader. A process q only
supports the election requests of a process p if p is the process
with the smallest id in q's alive-set aliveSet_q. When a process
q is in a Δ-partition SP for a "sufficiently long" time, its
alive-set contains only processes from SP because all
messages it receives from processes outside of SP are slow
and hence, are not sufficient to keep these processes in q's
alive-set. In particular, the process p = min(SP) will start
broadcasting election messages since p's id will be smaller
than the id of any other process in its alive-set. Since each
process q ∈ SP is connected to p, q receives p's election
messages as "fast" messages and includes p in its alive-set.
Process p's id is smaller than any other id in q's alive-set,
i.e. p = min(aliveSet_q),
because q's alive-set contains only processes from SP
and it does contain p due to p's periodic election messages.
Thus, all processes in SP implicitly agree to support the
election of p and they all reply to p's election messages.
Thus, p includes all processes in SP in its alive-set, i.e.
aliveSet_p = SP.
When p gets fast "supportive replies" from all processes in
its aliveSet_p, it becomes leader. Since all processes in SP
support p's election, process p succeeds in becoming leader.
When a new Δ-partition SP forms at time t, all processes
in SP have to support the same process p = min(SP)
within τ time units. A process q supports the election
of any process only for a bounded amount of time: this
enables q to support another process within a bounded
amount of time. For example, when q supports a process r
at t, it will support another process p only after its support
for r expires. When a process r becomes leader with the
support of q, we have to ensure that r is demoted before
the support by q expires. The protocol achieves this timely
demotion even when r is slow: r is only leader as long as its
hardware clock shows a value smaller than some value
expirationTime that is calculated during r's election. Note that
q can support a different process p after its support for r
expired without any further exchange of messages between
q and r. This is an important property because q does
not have to learn that r has been demoted or has crashed
(which would be impossible to decide in an asynchronous
system!) before it supports a different process.
Our protocol does not require the connected relation to
be transitive for the successful election of a local leader
(see Section X). The aliveSet of a process p approximates
the set of processes connected to p. If process p gets the
support of all processes in its aliveSet, p can become leader
even when no two other processes in its aliveSet are connected
VIII. Protocol
The pseudo-code of the protocol for a local leader election
service is given in Figures 30 and 31. All messages
are sent and delivered by a fail-aware datagram service
[23]. This allows a receiver of a message m to detect when
the transmission delay of m was "fast" or "slow", i.e. at
most Δ time units or more than Δ time units, respectively.
The fail-aware datagram service provides a primitive
to send unicast messages (denoted by fa-Unicast) and
one to broadcast messages (denoted by fa-Broadcast). To
deliver a message m at clock time recTime that was sent
by process p to q, the service generates an event denoted
by fa-Deliver(m,p,fast,recTime) at the destination process
q. The boolean flag fast is set to true when m was fast,
otherwise, it is false.
Each process p maintains a set of processes (called
aliveSet_p) from which p has received a "fast" message in
the last, say, expires clock time units. We will determine
the value of the constant expires later on. The alive-set of
a process is maintained by two procedures: UpdateAliveSet
and PurgeAliveSet. The procedure UpdateAliveSet inserts
the sender of a fast message m into aliveSet and stores the
receive time stamp of m in an array called lastMsg. The
procedure PurgeAliveSet uses array lastMsg to remove a
process r from aliveSet_myid of the executing process myid,
if process myid has not received a fast message from r for
more than expires clock time units.
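A minimal sketch of this bookkeeping, with assumed types for clock values and process ids (the paper's actual pseudo-code is in Figures 30 and 31):

#include <map>
#include <set>

using Clock = double;
using Pid = int;

// State kept per process: the alive-set and, for each peer, the receive
// time stamp of the most recent fast message (array lastMsg in the text).
struct AliveState {
    std::set<Pid> aliveSet;
    std::map<Pid, Clock> lastMsg;
};

// UpdateAliveSet: called when a fast message from 'sender' is delivered
// at local clock time 'recTime'.
void UpdateAliveSet(AliveState& s, Pid sender, Clock recTime) {
    s.aliveSet.insert(sender);
    s.lastMsg[sender] = recTime;
}

// PurgeAliveSet: remove every process from which no fast message has
// arrived for more than 'expires' clock time units.
void PurgeAliveSet(AliveState& s, Clock now, Clock expires) {
    for (auto it = s.aliveSet.begin(); it != s.aliveSet.end();) {
        if (now - s.lastMsg[*it] > expires) it = s.aliveSet.erase(it);
        else ++it;
    }
}

The real PurgeAliveSet additionally returns the earliest clock time at which the caller's id could become the minimum of its alive-set, which is used to schedule Election broadcasts; that refinement is omitted here.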
Let us assume that the ids of processes are totally or-
dered. Function PurgeAliveSet also returns the first point
in clock time (given with respect to the local hardware
clock) when myid could be smaller than the id of any other
process in aliveSet. When the process does not receive
any more messages from processes with smaller ids, the returned
bound is tight. The calculated bound is used to
optimize the broadcasts of Election-messages: a process p
starts broadcasting periodic Election-messages as soon as
its id becomes smaller than the id of any other process in
its alive-set. When p is timely, it broadcasts at least every
EP clock time units an Election-message, where EP (elec-
tion period) is an a priori given constant. The scheduling of
the broadcasts is performed with the help of an alarm clock
that we call the aliveAC. A process p can set an alarm clock
with the method SetAlarm: SetAlarm(T) requests that a
"WakeUp" event be generated just after p's hardware clock
shows value T . A timely process p is awakened within
time units of its hardware clock H p showing T . The
scheduling timeout delay oe is an a priori defined constant.
A process can cancel an alarm by operation "Cancel".
An Election-message sent by process p contains
ffl a unique time stamp (denoted request) to detect replies
that correspond to this request of p to become leader, and
ffl the alive-set of p at the time when p's hardware clock
showed value request.
When a process p sends an Election-message, it stores
the request time stamp in a variable called lastRequest to
be able to identify replies for the message. Process p also
stores its current alive-set in a variable targetSet and resets
the variable replySet to the empty set: p will insert into its
replySet the ids of all processes from which p gets a fast
reply that supports p's election.
A process q replies to all fast Election-messages and ignores
all slow messages. A "Reply"-message m identifies
the request that q is replying to and also contains a flag
that indicates if q is supporting p's election. When the
flag is true, we say that m is a "supportive" reply and
by sending m, q guarantees not to send another supportive
reply in the next lockTime clock time units. Process
q achieves this guarantee by storing p's id and the request
id in a variable LockedTo and the sum of the receive time
of m plus lockTime in a variable LockedUntil. When other
Election-messages arrive, q can use the variables LockedTo
and LockedUntil to determine if it is still supporting p (i.e.
q is still "locked" to p).
We will determine the value of the constant lockTime in
Section IX. We say that a process q "is locked to process
p at time t" to denote that q cannot send a supportive
reply to any other process than p (at t). When q sends a
supportive reply to p (at t), we say that "q locks to p (at
t)".
Let us consider that process q receives an Election-message
m from p containing the time stamp request and
p's alive-set (denoted by alive). Process q only locks to p if
• m is a fast message: q has to ignore all election requests
from outside of its Δ-partition,
• q is not locked to any other process: this ensures that at
any point in time a process is locked to at most one process,
• p's id is smaller than q's id, and
• p is the process with the minimum id in q's alive-set:
when two processes r and q are in the same Δ-partition
SP, they implicitly agree on which process to support since,
as we explain below, the following condition holds:
min(aliveSet_r) = min(aliveSet_q) = min(SP).
A sketch of this lock check is given below.
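The following is a minimal sketch of the lock check, under the same assumptions as the earlier fragments (simple integer ids, clock values as doubles). Whether a process may renew a lock it already holds for the same candidate is an assumption here, not something the text settles.

#include <optional>
#include <set>

using Clock = double;
using Pid = int;

struct LockState {
    std::optional<Pid> lockedTo; // process we last sent a supportive reply to
    Clock lockedUntil = 0;       // clock time until which that lock holds
};

// Decide whether q (with state 'st' and alive-set 'alive') may lock to the
// sender p of an Election-message received at clock time 'now'.
bool MayLockTo(LockState& st, const std::set<Pid>& alive, Pid q, Pid p,
               bool fast, Clock now, Clock lockTime) {
    if (!fast) return false;                        // ignore slow requests
    if (st.lockedTo && now < st.lockedUntil &&
        *st.lockedTo != p) return false;            // locked to someone else
    if (!(p < q)) return false;                     // only smaller ids
    if (alive.empty() || *alive.begin() != p) return false; // p = min(alive)
    st.lockedTo = p;                                // lock to p for lockTime
    st.lockedUntil = now + lockTime;
    return true;                                    // send supportive reply
}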
When process p receives a fast supportive reply from a
process q to its last Election-message, p inserts q into its
replySet. After 2Δ real-time units (which could be up to
2Δ(1 + ρ) clock time units due to the drift of p's hardware
clock), p checks if it has become leader or has renewed its
leadership (by calling function CheckIfLeader). In case p
has already been leader, p tests if it has been able to renew
its leadership as soon as p has received replies from all processes
in its targetSet, i.e. as soon as replySet = targetSet. A
process p only becomes leader (see function CheckIfLeader,
sketched below) iff
• p has been in its own alive-set at the time p has sent the
Election-message (see proof of requirement (BI) in Section IX),
• p has received a fast supportive reply from all processes
in its current alive-set: this is necessary to make sure that
p is supported by all processes that p is connected to, and
• p has the minimum id in replySet_p: this makes sure that
p supports itself.
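A sketch of the CheckIfLeader test implied by these three conditions, again with assumed types; the authoritative version is the pseudo-code of Figures 30 and 31.

#include <algorithm>
#include <set>

using Pid = int;

// 'targetSet' is the alive-set captured when the Election-message was
// sent, 'replySet' the senders of fast supportive replies, 'alive' the
// current alive-set of the caller 'myid'.
bool CheckIfLeader(Pid myid, const std::set<Pid>& targetSet,
                   const std::set<Pid>& replySet, const std::set<Pid>& alive) {
    // p was in its own alive-set when the Election-message was sent.
    if (targetSet.count(myid) == 0) return false;
    // Fast supportive replies from all processes in the current alive-set.
    if (!std::includes(replySet.begin(), replySet.end(),
                       alive.begin(), alive.end())) return false;
    // p has the minimum id in replySet, i.e. p supports itself.
    return !replySet.empty() && *replySet.begin() == myid;
}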
When p becomes leader, it sets its local variable imLeader
to true and calls a procedure newLeader to notify
its client that it is leader until p's hardware clock shows
some clock value expirationTime (to be determined in Section
IX). Process p schedules the broadcast of its next
Election-message at clock time expirationTime - 2Δ(1 + ρ) so that
it can renew its election before its leadership expires. The
protocol provides a Boolean function Leader? that checks
if the calling process p is a local leader: when the function
Leader? reads its hardware clock at time t, p is leader at t
iff H_p(t) < expirationTime and the flag imLeader is true.
In case p does not succeed in becoming local leader and
p has received at least one supportive reply, p broadcasts
a "Release"-message to let all processes that are locked to
p know that they can lock to another process. When a
process q receives a Release-message from p for the last
election message that q has locked to, q releases its lock.
IX. Correctness
We show in this section that the proposed local leader
protocol satisfies the requirements (T,SO,BI,LS). To do
this, we first have to define the predicates stablePartition,
Leader_p and supports_t. Since our protocol is designed for
Δ-partitions, we define
stablePartition(SP,s,t) ≡ Δ-partition(SP,s,t).
Let us denote the value of process p's variable varname at
time t by varname_p^t. The predicate Leader_p is defined by,
Leader_p(t) ≡ imLeader_p^t ∧ H_p(t) < expirationTime_p^t.
Before a process becomes leader, it sets its variable supportSet
to the set of processes that have sent a supportive
reply. A process q supports a process p at time t iff p is
leader at time t and q is in p's support set:
supports_t(q, p) ≡ Leader_p(t) ∧ q ∈ supportSet_p^t.
Formally, we express the property that process q is locked
to process p at t by a predicate locked_q^t(p) that is defined as
follows (see Section VIII and Figure 31 for an explanation
of variables LockedTo and LockedUntil):
locked_q^t(p) ≡ LockedTo_q^t = p ∧ H_q(t) < LockedUntil_q^t.
As long as q does not crash, it is locked to at most one
process at a time because q always checks that its last lock
has expired before it locks to a new process. Recall that the
timed asynchronous system model allows crashed processes
to recover. We assume that when a process q crashes, it
stays down for at least lockTime clock time units. This
enables q to lock to a process p immediately after q recovers
from a crash without first waiting for at least lockTime
clock time units to make sure that any lock q had issued
before it crashed has expired. Note that this initial waiting
time would otherwise (i.e. without the above assumption)
be required to make sure that a process is locked to at most
one process.
To simplify our exposition, we use the phrase "process
p does action a at clock time T" to denote that "process
p does action a at some point in real-time t when p's
hardware clock H_p shows value T, i.e. H_p(t) = T".
A. Supports At Most One (SO)
To see that requirement (SO) is satisfied, i.e. that a
process q supports at most one process p at a time, let us
consider the scenario shown in Figure 18. Process p sends
an Election-message m at clock time lastRequest_p = S, q
receives m at T, and q sends a supportive reply at U. Process
q is locked to p for at least lockTime clock units, i.e.
until its hardware clock shows a value Y = U + lockTime.
When process p becomes leader at time W and q is in p's
support set, it assigns its variable expirationTime_p the
value S + lockTime(1 - 2ρ). Since the
clocks can drift apart by at most 2ρ (and p's hardware clock
shows S before q's hardware clock shows T due to the positive
transmission delay of m), p's hardware clock shows
value expirationTime_p before q's hardware clock shows
Y = U + lockTime (see Figure 18). Hence, whenever a process
q supports p, q is locked to p. Since q is locked to at
most one process at a time, it follows that q supports at
most one process at a time.
Fig. 18. A process q that sends a supportive reply n to p's Election-message m is locked to p for lockTime clock units, while p is demoted before q releases its lock to p.
B. Leader Self Support (LS)
Before becoming leader, a process p checks that it has received
a supportive reply from itself (see condition myid =
replySet.Min() in procedure CheckIfLeader). Thus, a leader
always supports itself and the protocol therefore satisfies
requirement (LS).
C. Timeliness (T)
For the timeliness condition (T) to hold, we have to make
sure that the protocol constants τ, expires and lockTime
are well chosen. Constant expires states for how long a
process p stays in the aliveSet of a process without
the arrival of a new fast message from p. Hence, we have to
derive a lower bound for expires to make sure that processes
in a Δ-partition do not remove their local leader from their
aliveSet.
To derive a lower bound for constant expires, let us consider
the situation in which a timely process p tries to
become leader by sending periodic Election-messages (see
Figure 19). The goal is that p stays in the alive-set of each
process q that is connected to p and q stays in the alive-set
of p. Therefore, the constant expires has to be chosen
such that for any two successive Election messages m1 and
m2 sent by p and that q receives at clock times S and T,
respectively, condition T - S ≤ expires holds. Similarly,
when p receives q's replies n1 and n2 at times U and V,
respectively, the distance between these two receive events
should be at most expires: V - U ≤ expires. We therefore
derive upper bounds for T - S and V - U under the
assumption that p and q are connected. The duration between
two successive broadcasts is at most EP clock time
units and the real-time duration is thus at most EP(1 + ρ).
The difference in the transmission delays of m1 and m2 is
at most Δ - δ_min; hence, measured on q's clock,
T - S ≤ (1 + ρ)[EP(1 + ρ) + Δ - δ_min] ≤ expires.
Since the clock time between transmitting m1 and m2 is at
most EP, and the maximum difference between the round-trip
times of (m1, n1) and (m2, n2) is 2(Δ - δ_min), expires has
to be bounded by,
expires ≥ (1 + ρ)[EP(1 + ρ) + 2(Δ - δ_min)].
Fig. 19. Constant expires is chosen so that p stays in q's alive-set between S and T and q stays in p's alive-set between U and V.
Typically one will set the constant expires to a small multiple
of the lower bound so that one late Election-message
or an omission failure of an Election-message does not remove
the sender from the alive-sets of the other processes.
In our implementation we defined the constant as about
four times the lower bound and achieved excellent stability,
in the sense that during our measurements the process with
the minimum id was not removed from the alive-sets unless
we crashed the process or partitioned the system.
C.1 Constraining τ
We will now show that all processes in a Δ-partition SP
will implicitly agree on the process with the minimum id
in SP. Let us consider that SP forms at time s and stays
stable at least until time y > s, and that p = min(SP) (see
Figure 20). Any message m that a process q ∈ SP receives
from a process r outside of SP, i.e. r ∈ P - SP, has a
transmission delay of more than Δ. Hence, the fail-aware
datagram service delivers m as a slow message and q will
therefore not update its array lastMsg and its aliveSet
for r. After expires clock time units, that is, after time
s + expires(1 + ρ), all processes outside of SP are removed
from the alive-sets of the processes in SP. Thus, after
that time p's id is smaller than the id of
any other process in its alive-set. Process p's aliveTimer
will generate a "timeout" event no later than time
t = s + expires(1 + ρ) + σ,
since a process sets its aliveTimer
so that it generates a timeout event within σ time units of
p's id becoming smaller than any other id in its alive-set.
Hence, p will broadcast its first Election-message m1 after
s no later than time t. The first election request can fail
because p's target set does not necessarily contain p's own
id. All processes in SP receive a fast m1 no later than
time u = t + Δ. Hence, they all will include p in their alive-sets
no later than time u. After time u no process in SP will
lock to any other process than p since p has the minimum id
in their alive-sets. All processes in SP will reply to p and p
will get these replies as fast messages since p is connected
to all processes in SP by our hypothesis. Process p will
therefore include all processes in SP in its alive-set: after
time u + Δ, p's alive-set consists of all the processes in SP,
i.e. aliveSet_p = SP.
Fig. 20. The Δ-partition SP that forms at s takes up to expires + σ clock time units to send its first Election-message m1 and it will send the second at most EP later. We show that p succeeds to become leader with m2 even when a process q would be locked to another process at u.
After time t, p will broadcast an Election-message at least
every EP clock units. Note that p is timely because
p is by our hypothesis in a Δ-partition. Hence, p
sends its next Election-message m2 no later than time
v = t + (EP + σ)(1 + ρ) (see Figure 20). Below we will constrain
lockTime so that even when q would be locked to another
process at u, q will have released that lock before it receives
m2 at w. All processes in SP will send supportive replies
to m2 because
• no process in SP will be locked to another process anymore,
• p has the minimum id in the alive-sets of all processes in
SP, and
• all processes in SP receive m2 as a timely message.
Process p will become leader no later than time v + 2Δ. For
our protocol to satisfy requirement (T), we have therefore
to constrain τ as follows:
τ ≥ (expires + EP + σ)(1 + ρ) + σ + 2Δ.
C.2 Constraining lockTime
When a timely process p broadcasts an Election-message
at time s, then p receives the replies of connected processes
within 2Δ time units. The releaseTimer is therefore set
to 2Δ(1 + ρ), where the factor of (1 + ρ) takes care of
the drift of p's hardware clock. To ensure that a process
becomes leader for a positive time quantum, the lockTime
has to be chosen such that p's leadership does not expire
before it actually starts. Since (1) p stays leader for at most
lockTime(1 - 2ρ) clock time units after sending its election message m (see
Figure 18), and (2) it takes up to 2Δ(1 + ρ) + σ
time units until p's releaseTimer times out after sending m,
we have to make sure that
lockTime(1 - 2ρ) > 2Δ(1 + ρ) + σ.
Thus, we assume the following lower bound for lockTime:
lockTime > [2Δ(1 + ρ) + σ]/(1 - 2ρ).
We now derive an upper bound for constant lockTime.
The goal is that when process q has locked to another process r just before receiving m 1 , q should release its lock before q receives the next message m 2 from p (see Figure 21). When p does not become leader with m 1 , it schedules
its next request in at most EP clock time units. Due to the scheduling imprecision, the clock time between the sending of m 1 and m 2 can be as small as (EP-oe) time units. The drift rate of p's hardware clock is within [-ae,+ae] and thus, the real-time duration between the two send events is at least (EP-oe)(1-ae). The real-time duration between the reception of m 1 and m 2 by q is at least (EP-oe)(1-ae)-\Delta+ffi min because the difference in the transmission delays of m 1 and m 2 is at most \Delta-ffi min . Therefore, constant lockTime should be
at most,
lockTime <= (1-ae) [(EP-oe)(1-ae)-\Delta+ffi min ].
In our protocol we set lockTime to this upper bound, i.e.
we choose lockTime as big as possible for a given EP .
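To make these constraints concrete, the following Python sketch evaluates the admissible range for lockTime. It is only an illustration: the function name and all numeric values are our assumptions, not the constants measured in Section XI.

    def lock_time_bounds(EP, sigma, delta, rho, delta_min):
        """Admissible range for the lockTime constant (in seconds).

        lower: the leadership must not expire before it starts, i.e.
               lockTime(1 - 2rho) must cover the up to 2*delta*(1 + rho)
               time units needed to collect all replies.
        upper: the bound derived above, chosen so that a stale lock is
               released before the next Election-message arrives.
        """
        lower = 2 * delta * (1 + rho) / (1 - 2 * rho)
        upper = (1 - rho) * ((EP - sigma) * (1 - rho) - delta + delta_min)
        return lower, upper

    # Illustrative (assumed) values; any lockTime in (lower, upper] works.
    lower, upper = lock_time_bounds(EP=0.200, sigma=0.010, delta=0.015,
                                    rho=1e-5, delta_min=0.001)
    assert lower < upper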
Fig. 21. Process q locks just before receiving m 1 to a process r > p; q should release its lock before it receives the next Election-message m 2 from p.
D. Bounded Inconsistency (BI)
To show that the protocol satisfies requirement (BI), we
consider two processes p and q that become connected at
some time s and stay connected until some time y (see Figure 22). Let p have the smaller id of the two processes, i.e. p < q. We have to show that for any time x such that s + - <= x <= y, if p is leader at x then q supports p at x. Let us consider that m 0 is the last Election-message that
p has sent before s, and m 1 is the first that p sends after the
two processes become connected at s. A process p includes
itself in its alive-set only when p receives a timely election
or reply message from itself. The broadcast of m 0 could result
in a timely reply message n 0 to itself. As a consequence
of n 0 , p could include itself in its alive-set after s with a receive time stamp of up to H p (s). When p does not send another Election-message m 1 for more than expires clock time units, p will remove itself from its alive-set. Note
that after p has been removed from its own alive-set, p is
not in the target set and the first election request of p will
fail because p needs to be in its target set to become leader
(see function CheckIfLeader). Hence, when p sends the first Election-message m 1 after having removed itself from its own alive-set, this election request will fail. However, q's reply to m 1 will
force p to get a supportive reply from q for all successive election requests. In other words, after p receives n 1 , it can only become leader with the support of q (as long as p and q stay connected). When p sends its first Election-message m 1 while it is still in its own alive-set, this election can be successful even without the support of q because p checks that it has become leader as soon as it has received a reply from all processes in its target set. However, p's leadership will expire within lockTime(1-2ae) clock time units. Due to n 1 , p cannot renew its election without q's support. Thus,
the requirement (BI) is satisfied for
- >= 2\Delta+(1+ae)(expires+lockTime(1-2ae)).
Note that this bound for - is smaller than the previously
derived bound to satisfy the timeliness condition (see Section IX-C.1).
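For illustration, the bound can be evaluated numerically; the sketch below simply transcribes the inequality above, and all parameter values are assumed:

    def bi_bound(delta, rho, expires, lock_time):
        """Smallest - (tau) for which requirement (BI) is guaranteed."""
        return 2 * delta + (1 + rho) * (expires + lock_time * (1 - 2 * rho))

    # Assumed values: delta = 15 ms, expires = 230 ms, lockTime = 100 ms.
    tau = bi_bound(delta=0.015, rho=1e-5, expires=0.230, lock_time=0.100)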
Fig. 22. Processes p and q become connected at s and p will include
q in its alive-set at time v. Thus, any Election-message p sends
after v requires a supportive reply from q to allow p to become
leader.
X. Protocol Properties
The proposed protocol elects within - time units in any \Delta-partition SP the process p with the minimum id as local leader. Process p will stay leader as
long as SP stays stable: after p becomes leader, it reduces
its time-outs for broadcasting its Election-messages from
EP to lockTime(1-2ae)-2\Delta(1 + ae). This makes sure that as
long as SP stays stable p can renew its election before its
leadership expires in lockTime(1-2ae).
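A small sanity check, in Python, that this reduced renewal period stays positive; the function name and the numeric values are ours, chosen only for illustration:

    def renewal_period(lock_time, delta, rho):
        # Clock time between a leader's Election-messages; it must be
        # positive so the leader can renew before lockTime(1 - 2rho).
        return lock_time * (1 - 2 * rho) - 2 * delta * (1 + rho)

    assert renewal_period(lock_time=0.100, delta=0.015, rho=1e-5) > 0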
A local leader p always knows its logical partition LP p : it consists of p and the processes in p's support set. Note that the definition of supports
that we give for our protocol (see Section IX) states that a
process r only supports another process q if q is leader and
r is in q's support set. Hence, for our protocol a process
r cannot support a process q that is not leader or a local
leader that does not know of r's support. In this way, in our
protocol we actually exclude situations like that depicted in Figure 10 in which a logical partition contains processes
that support a process other than the local leader.
Since logical partitions do not overlap, at no point in
time do the support sets of local leaders overlap. The support
set is the basis for our implementation of a membership
protocol [11]. Process p piggy-backs its support set
on its next election message to provide all processes in its
logical partition with the current members in their logical
partition.
The proposed protocol guarantees a stronger timeliness
requirement than the one required by the specification. In particular, the connected relation does not have to be transitive to guarantee that a local leader is elected within -.
Fig. 23. The protocol elects in each maximum connected set at least the process with the minimum id as local leader.
We say that a set of processes CS is a maximum connected
set iff
• any two processes p and q in CS are either connected or \Delta-disconnected,
• for any two processes p and q in CS there exists a path in the connected-graph between p and q, and
• any process in CS is \Delta-disconnected from all processes outside of CS.
The protocol succeeds in electing the minimum process in any maximum connected set within - time units (see Figure 23). The behavior of the protocol for any maximum connected set CS can be described by a graph algorithm (see Figure 24). Initially, the set S contains all processes in CS,
no process in CS is marked (i.e. the set MS is empty), and
the set of local leaders LS is empty. The algorithm iteratively computes the minimum process l in set S. When l
and all processes connected to l are not marked (i.e. they
are not in the set MS), l is included in the set of local leaders
LS. The intuition of a process being 'marked' is that a
marked process has already locked to another process. All
processes that are connected with l are marked by being
included in the set MS. The algorithm terminates when S becomes empty. After the algorithm has terminated, LS
contains exactly the set of local leaders that the proposed
local leader election protocol will elect in the maximum
connected set CS.
The protocol can be configured so that it guarantees that
there exists at most one leader at a time, i.e. the safety
property (S) of the conventional leader election problem is
satisfied. Let N denote the maximum number of processes
that are participating in the election protocol, i.e. |P| <= N. By setting constant minNumSupporters to d(N+1)/2e, a process
has to get the support of more than half of the processes to
become leader (see function CheckIfLeader in Figure 31).
Thus, any local leader is in a logical partition with more
than N=2 processes and because logical partitions do not
overlap, there can be at most one leader at a time. The
modified protocol satisfies all requirements (T,SO,LS,BI).
However, we have to strengthen predicate stablePartition so that it only holds for partitions that contain at least d(N+1)/2e processes.
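A minimal Python sketch of this quorum argument; the names are ours:

    import math

    def min_num_supporters(N):
        """Quorum size that forces any two support sets to overlap."""
        return math.ceil((N + 1) / 2)

    # Two disjoint support sets of this size cannot coexist among N processes:
    assert all(2 * min_num_supporters(n) > n for n in range(1, 100))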
S <- CS; MS <- {}; LS <- {};
while S != {} do
  l <- min(S);
  if l not in MS and no process connected to l is in MS then
    LS <- LS + {l};
  endif
  for all p != l:
    if connected(l,p) then
      MS <- MS + {p};
    endif
  S <- S - {l};
od
Fig. 24. This algorithm computes the set of local leaders LS that the proposed local leader election protocol elects in a maximum connected set CS.
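The same algorithm, transcribed into executable Python for convenience; the function name and the encoding of the connected relation as a predicate are our choices:

    def local_leaders(cs, connected):
        """Set of local leaders LS elected in a maximum connected set cs.

        cs        -- iterable of process ids
        connected -- predicate connected(p, q) -> bool
        """
        cs = set(cs)
        s, ms, ls = set(cs), set(), set()
        while s:
            l = min(s)
            neighbours = {p for p in cs if p != l and connected(l, p)}
            if l not in ms and not (neighbours & ms):
                ls.add(l)          # l and its neighbours are unmarked
            ms |= neighbours       # processes connected to l lock to l
            s.discard(l)
        return ls

    # A chain 1-2-3: process 3 locks to 2, which locks to leader 1.
    assert local_leaders({1, 2, 3}, lambda p, q: abs(p - q) == 1) == {1}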
XI. Performance
We measured the performance of the local leader election
protocol in our Dependable Systems Lab at UCSD, on 8
SUN IPX workstations connected by a 10 Mbit/s Ethernet.
All messages are sent by a fail-aware datagram service [23]
that classifies any message it delivers as either "fast" or
"slow". The service calculates an a posteriori upper bound
on the transmission delay of a message m and if this calculated
bound is not greater than some given threshold \Delta,
m is classified as "fast" and otherwise, m is classified as
"slow". The fail-aware datagram service guarantees that
the Validity and the Non-duplication requirements hold
(see Section III).
The threshold \Delta for fast messages was 15ms, the
timeout for the scheduling delay oe was 30ms, and the election
period EP was 50ms. The typical behavior of the protocol
is that exactly one process p is periodically broadcasting
election messages and all other processes are replying
with unicast messages to p. The measured election times,
i.e. the time between transmitting an election message by
a process p and the time at which p becomes leader, reflect
this typical behavior (see Figure 25). These measurements
were based on 100000 successful elections. The election
time increases linearly with the number of processes participating
in the election: the average election time and
the 99% election time, i.e. a process succeeds with a 99%
probability to become leader within that time, are shown
in Figure 26.
We also measured the time it takes to elect a new leader
when the system splits from one into two partitions (see Figure 27). The graph is based on about 12000 measurements
and was performed using the leader election protocol
as part of a membership protocol [11]. A process p
removes a process q from its alive-set when p has not received
a fast message from q for more than expires=230ms.
In other words, when the system splits up, it takes the processes
in one partition up to 230ms to remove all processes
Fig. 25. Distribution of the election time for 1 to 8 processes participating in the election. The smaller difference between 1 and 2 processes is due to the fact that no local replies are sent when there is more than 1 process.
Fig. 26. The average election time and 99% election time for 1 to 7 participating processes.
from the other partition from their alive-sets. However, expires could be reduced to about 50ms (see Section IX-C). Nevertheless, we use the larger value of 230ms to minimize
the number of parallel local leaders in case of temporary
instabilities. The first election attempt after the system
becomes partitioned typically fails because the alive-sets of
the processes in each of the two newly formed partitions are
not up-to-date yet. The second election attempt however
does in general succeed to elect a new local leader.
The linear increase of the election time with the number
of participating processes is mainly due to the fact that a
process receives more Reply-messages to its election message
and less to the fact that the Ethernet is overloaded.
One possible enhancement of the protocol would be to use
the alive-set provided in an election message to build an n-ary
tree and use this tree to collate replies (see Figure 28):
(1) a process on the leaves of the tree replies to its parent
node, (2) a process in an inner node waits for the replies
of its children before it replies to its parent node, and (3)
the root becomes leader when it has received replies from
all its children. While reducing the election time, such an
enhancement would complicate the protocol since it has to
handle the case that a process q in the tree crashes or is
too slow and hence, the root would not get any message
from any of the children of q.
Fig. 27. The left graph shows the measured time l takes until it
removes all processes from the other partition from its alive-set.
This is the time l waits until it attempts to become local leader.
The right graph shows the measurement of the time needed to elect a new leader l after a partition split.
Fig. 28. The alive-set of the root process 0 is used to define a 3-ary
tree which is used to collate replies of the processes in aliveSet 0 .
Processes outside aliveSet 0 reply directly to 0.
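The construction of such a tree is not prescribed by the protocol; the following Python sketch shows one plausible choice, a heap-style layout over the sorted ids (all names are our assumptions):

    def collation_tree(alive, arity=3):
        """Parent/children structure of an n-ary collation tree over an
        alive-set: replies flow from the leaves towards the root (Fig. 28)."""
        nodes = sorted(alive)                 # root = process with minimum id
        children = {p: [] for p in nodes}
        for i in range(1, len(nodes)):
            parent = nodes[(i - 1) // arity]  # heap-style n-ary layout
            children[parent].append(nodes[i])
        return children

    # Eight processes, 3-ary tree: 0 collates 1,2,3; 1 collates 4,5,6.
    assert collation_tree(range(8))[0] == [1, 2, 3]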
We implemented the tree based collation of replies to
compare its effect on the election time. For a system with
eight processes, three processes directly replied to the new
leader l, while 3 other processes had to reply to another
process q before q replied to l. The improvement for 8 processes
was not sufficient to justify the increased complexity
(see Figure 29). However, for systems with more processes this enhancement could decrease the election time significantly.
XII. Conclusion
The ideal goal of a local leader service is to elect exactly
one local leader in each communication partition. This
goal is however not always achievable because the failure
frequency in a communication partition might be too high
for the processes in this partition to be able to elect a
local leader. In this paper, we derive a specification which
approximates this ideal goal of having exactly one local
leader per communication partition. We also show that
this specification is implementable in timed asynchronous
systems, i.e. distributed systems without upper bounds on
the transmission or scheduling delay but in which processes
have access to a local hardware clock with a bounded drift
rate.
Fig. 29. Comparison of tree based collation of replies and direct replies to the leader. The two graphs are based on 10000 elections.
The local leader problem requires that an implementation
creates non-overlapping logical partitions and elects
one leader per logical partition. A logical partition created
by our local leader service is a set of processes such that
the local leader of this logical partition can communicate in
a timely fashion with all processes in the logical partition.
If the connected-relation changes, e.g. a process becomes
disconnected from the leader, the local leader service has to
adapt the logical partition to the new connected-relation.
A stable partition is a communication partition in which
all processes are connected with each other. Therefore, a
local leader service has to create for each stable partition
SP a logical partition that includes SP and has to elect a
leader l in SP within a bounded amount of time after SP
has formed.
The specification of a local leader service can efficiently
be implemented in timed asynchronous systems. We introduce
in this paper a round-based local leader election
protocol: a leader is only elected for some maximum duration
and has to update its leadership during the next
round. This periodic update is necessary to be able to
adapt the logical partitions to changes of the connected-
relation. For example, if the current local leader l crashes
or is disconnected, the remaining processes can elect a new
leader within a bounded amount of time since l is demoted
within some known amount of time. In a stable partition of N
processes, the protocol sends one broadcast datagram and
N - 1 unicast datagrams per round. A local leader service
has been proven useful in the design and implementation
of a fail-aware membership service and a fail-aware clock
synchronization service for asynchronous partitionable systems.
import P, time;
const myid : P;
const \Delta : time;       // threshold used to classify messages
const EP : time;           // election period
const oe : time;           // scheduling time-out delay
const ffi min : time;      // minimum transmission delay
const ae : time;           // maximum drift rate of a hardware clock
const expires : time;      // expiration time of alive-set entries
function newLeader(demotedAt: time);
procedure fa-broadcast(m: msg);
procedure fa-send(m: msg; dest: P);
const
  lockTime : time init (1-ae)*((EP-oe)*(1-ae)-\Delta+ffi min);
  minNumSupporters: integer init 1;  // d(N+1)/2e to guarantee at most one leader (Sec. X)
var
  aliveAC, releaseAC: alarm clock;
  imLeader: boolean init false;
  expirationTime: time init 0;
  supportSet: set init {};
  aliveSet: set init {};
  replySet: set init {};
  targetSet: set init {};
  lastMsg: array[P] of time;
  lockedUntil: time init 0;
  lastRequest: time init 0;
type msg;  // Election, Reply and Release messages carry a request time stamp

function NoMinBefore() : time
  // earliest time at which myid can be the minimum id in aliveSet
  noMinBefore <- now;
  for all p in aliveSet:
    if now >= lastMsg[p]+expires then
      aliveSet <- aliveSet - {p};   // entry has expired
    else if myid > p and lastMsg[p]+expires > noMinBefore then
      noMinBefore <- lastMsg[p]+expires;
  return noMinBefore;

procedure UpdateAliveSet(sender: P; fast: boolean; recTime: time)
  if fast then
    aliveSet <- aliveSet + {sender};
    lastMsg[sender] <- recTime;
  endif

procedure CheckIfLeader()
  // p becomes leader when all processes of its target set, p included,
  // have sent supportive replies and the quorum is reached
  if targetSet <= replySet and myid in targetSet
      and |replySet| >= minNumSupporters then
    supportSet <- replySet;
    imLeader <- true;
    expirationTime <- lastRequest + lockTime*(1-2*ae);
    newLeader(expirationTime);
  else if targetSet <= replySet then
    imLeader <- false;              // the election request failed
  endif
Fig. 30. Part 1 of the pseudo-code for a local leader service.
task LeaderElection
var request, recTime: time; sender: P;
    fast, support: boolean; alive: set;
begin
loop
  select event
  when fa-deliver(("Election", request, alive), sender, fast, recTime):
    UpdateAliveSet(sender, fast, recTime);
    support <- fast and sender <= min(aliveSet) and now >= lockedUntil;
    if support then
      lockedUntil <- recTime + lockTime;   // lock to the candidate
    endif
    if fast then
      if sender != myid or alive.Size() <= 1 then
        fa-send(("Reply", request, support), sender);
      else if support and request = lastRequest then
        replySet <- replySet + {myid};     // own timely election message
        CheckIfLeader();
      endif
    endif
  when fa-deliver(("Release", request), sender, fast, recTime):
    lockedUntil <- now;                    // the candidate gave up
  when fa-deliver(("Reply", request, support), sender, fast, recTime):
    UpdateAliveSet(sender, fast, recTime);
    if fast and request = lastRequest and support then
      replySet <- replySet + {sender};
      CheckIfLeader();
    endif
  when aliveAC.WakeUp(T):
    lastRequest <- now;
    replySet <- {};
    targetSet <- aliveSet;
    fa-broadcast(("Election", lastRequest, aliveSet));
    if imLeader then                       // renew before leadership expires
      aliveAC.Schedule(lockTime*(1-2*ae)-2*\Delta*(1+ae));
    else
      aliveAC.Schedule(EP);
    endif
  when releaseAC.WakeUp(T):
    CheckIfLeader();
  endselect
endloop
Fig. 31. Part 2 of the pseudo-code for a local leader service.
XIII. Appendix
Sym. Sec. Meaning
aliveSet p VII independent assessment view of p
BI V bounded inconsistency requirement
connected III timely message exchange possible
ffi one-way time-out delay
ffi min minimum transmission delay
\Delta IV threshold used to classify messages
\Delta-disconnected IV a weak form of being disconnected
\Delta-partition IV a stable partition
EP VIII election period
expires VIII expiration time of alive-set entries
fast IV upper bound of a fast message <= \Delta
H p III hardware clock of process p
leader p (t) V p is leader at t
lockTime VIII support time for leader
P III set of processes
p, q, r processes
ae III maximum drift rate of a hardware clock
s, t, u, v real-time values
oe III scheduling time-out delay
slow IV upper bound of a slow message > \Delta
SO V "Support at most One" requirement
stablePartition V formalization of a stable partition
supports V supports-relation
SUPPORT VI closure of supports-relation
T V timeliness requirement
References
"Distributed systems - towards a formal approach,"
"Elections in a distributed computing sys- tem,"
"Election in asynchronous complete networks with intermittent link failures.,"
"Optimal asynchronous agreement and leader election algorithm for complete networks with byzantine faulty links.,"
"Design and analysis of dynamic leader election protocols in broadcast networks,"
"Leader election in the presence of link failures,"
"Reaching agreement on processor-group membership in synchronous distributed systems,"
"The transis approach to high availability cluster communication,"
"Totem: A fault-tolerant multicast group communication system,"
"The process group approach to reliable distributed computing,"
"A fail-aware membership service,"
"Fail-aware clock synchronization,"
"An improved algorithm for decentralized extrema-finding in circular configurations of processes.,"
"The Tandem global update protocol,"
"Leases: An efficient fault-tolerant mechanism for distributed file cache consistency,"
"Processor group membership protocols: Specification, design and implementa- tion,"
"Processor membership in asynchronous distributed systems,"
"Agreeing on processor-group membership in asynchronous distributed systems,"
"The timed asynchronous distributed system model,"
"Understanding fault-tolerant distributed systems,"
"Failure mode assumptions and assumption coverage,"
"Building fault-tolerant hardware clocks,"
"A fail-aware datagram service,"
"Probabilistic clock synchronization,"
| global leader election;local leader election;timed asynchronous systems;partitionable systems |
325398 | Experimenting with Quantitative Evaluation Tools for Monitoring Operational Security. | AbstractThis paper presents the results of an experiment in security evaluation. The system is modeled as a privilege graph that exhibits its security vulnerabilities. Quantitative measures that estimate the effort an attacker might expend to exploit these vulnerabilities to defeat the system security objectives are proposed. A set of tools has been developed to compute such measures and has been used in an experiment to monitor a large real system for nearly two years. The experimental results are presented and the validity of the measures is discussed. Finally, the practical usefulness of such tools for operational security monitoring is shown and a comparison with other existing approaches is given. | Introduction
Security is an increasing worry for most computing system administrators: computing systems
are more and more vital for most companies and organizations, while these systems are
made more and more vulnerable by new user requirements and new services (groupware and
other information sharing facilities; interconnection to insecure networks; powerful applications
whose complexity may hide serious security flaws; etc. On the other side, for most users
of current computing systems, security is not a main concern and they are not prepared, for the
sake of security, to waive their system ease of use or to give up information sharing facilities.
In such conditions, it is difficult to reach an acceptable degree of security, since users play an
important role in the computing system security: even the best system, designed for the highest
security, would be insecure if badly operated by casual users and a lax use of the most efficient
protection mechanisms would introduce flaws that could be exploited by possible attackers.
Thus, one of the main tasks of most computing system administrators is to negotiate with
system users to make them change their careless behavior and improve the system security.
And this is not an easy job: why would a user renounce his bad habits, if he considers that he
does not own sensitive data or applications? It may be difficult for him to understand that the
flaws he introduces in the system are endangering other user accounts, possibly with more sensitive
information. The set of tools here presented aims at facilitating this administrator's
* To appear in Proc. of the 6th IFIP Working Conf. on Dependable Computing for Critical Applications
(DCCA-6), Garmish-Partenkirchen, Germany, March 5-7 1997, IEEE Computer Society Press.
task: by providing a quantitative assessment of the current system security level, these tools can
help him to identify those security flaws which can be eliminated for the best security improvement
with the least incidence to users. Such quantitative evaluation tools should also enable
him to monitor the evolution of the global system security with respect to modifications of the
environment, of the configurations, of the applications or of the user behavior.
The measurements delivered by the evaluation tools should represent as accurately as possible
the security of the system in operation, i.e. its ability to resist possible attacks, or
equivalently, the difficulty for an attacker to exploit the vulnerabilities present in the system and
defeat the security objectives. Several characteristics can be deduced from these definitions:
. The security measure characterizes the security of the system itself, independently of
the threats it has to face up to: the system is the same (and its security measure should
be the same) whether there are many or few potential attackers with high or low competence
and tenacity. But of course, a given system (with a given security measure) will
be more probably defeated by many competent, tenacious attackers than by few lazy
. The security measure is directly related to security objectives: a system is secure as long
as its main security objectives are respected, even if it is easy to perform illegitimate actions
which do not defeat the objectives. For instance, a system can be secure even if it
is easy for an intruder to read some public information.
. The security measure should evolve according to system modifications influencing its
security: any modification can bring new vulnerabilities and/or correct previous ones
and the security measure should be sensitive to such modifications. The main use of
such measures is to monitor security evolution of a given system rather than rate absolutely
the security of different systems: it is more important to know if the security of a
given system is improving or decaying than to compare the security of independent sys-
tems, with different objectives, applications, users, environments, etc.
A theoretical framework has been developed at LAAS to identify and compute such measures
[Dacier, Deswarte et al. 1996a, Dacier 1994]. This framework is based on: 1) a theoretical
model, the privilege graph, exhibiting the system vulnerabilities, 2) a definition of the security
objectives, a mathematical model based on Markov chains to compute the security measures.
To demonstrate the practical feasibility of the approach, this theoretical framework has been
implemented by a set of software tools which can compute the security measures of large Unix
systems.
But a major question is raised by such an approach: what is the validity of the obtained
measures to represent accurately the system security? In this domain, no direct validation is ex-
pected: real attacks on real systems are too rare for a precise correlation to be obtained between
the computed measures and a success rate of attacks; even a tiger team approach would probably
be inefficient since such attacks are not necessarily representative of real attacks, and
because for a good accuracy, tiger team attacks must be numerous on a stable system [Olovs-
son, Jonsson et al. 1995] while our measures are intended to rate the dynamic evolution of the
system security. So the only practical validation is experimental: we have chosen to observe the
security measures computed on a real full-scale system during a long period and to analyze
each significant measure change with respect to the events triggering these changes.
This paper presents this experiment. Section 2 presents a short description of the theoretical
framework. Section 3 presents the experiment itself and discusses the results. Finally, Section 4
draws a conclusion.
2 Presentation of the approach
2.1 Formal description of operational system vulnerabilities
It has been shown in [Dacier and Deswarte 1994] that the vulnerabilities exhibited by an operational
computing system can be represented in a privilege graph. In such a graph, a node X
represents a set of privileges owned by a user or a set of users (e.g., a Unix group). An arc from
node X to node Y indicates that a method exists for a user owning X privileges to obtain those
of node Y. Three classes of vulnerabilities may be identified:
. A vulnerability represented by an arc can be a direct security flaw, such as an easily
guessed password or bad directory and file protections enabling the implantation of a
Trojan horse.
. But a vulnerability is not necessarily a security flaw. Instead, it can result from the use
of a feature designed to improve security. For instance, in Unix, the .rhosts file enables
a user U1 to grant most of his privileges to another user U2 without disclosing his pas-
sword. This is not a security flaw if U1 trusts U2 and needs U2 to undertake some tasks
for him (less secure solutions would be to give his password or reduce his protection).
But if U2 grants some privilege to U3 (U2 is trusting U3), then by transitivity, U3 can
reach U1's privileges, even if U1 does not trust U3.
. A third class of arcs is the representation of privilege subsets directly issued from the
protection scheme. For instance, with Unix groups, there is an arc from each node representing
the privilege set of a group member to the node representing the privilege set
of the group.
Figure
1 gives an example of such a privilege graph with arcs being labeled by vulnerability
classes.
can guessY's password; 2) X is in Y's .rhosts; 3)Y is a subset of X; 4) X can attack Y via Email; 5)Y uses
a program owned by X; can modify a setuid program owned by Y.
Fig. 1: Example of a privilege graph
Some sets of privileges are highly sensitive (e.g., the superuser privileges). These nodes are
called "target" nodes since they are the most likely targets of attacks. On the other hand, it is
possible to identify some nodes as the privileges of possible attackers; these nodes will be
F
called "attacker" nodes. For example, we can define a node called "insider" which represents
the minimal privileges of any registered user (e.g., the privilege to execute login, to change his
own password, etc. If a path exists between an attacker node and a target node, then a security
breach can potentially occur since, by transitivity, a possible attacker can exploit system vulnerabilities
to obtain the target privileges.
In most real systems, such paths exist because a lot of possible vulnerabilities exist, even if
most of them cannot be exploited by an attacker. For instance, all passwords can be guessed
with some luck, but some passwords can be easily obtained by all "insiders" because they are
in a dictionary and automatic tools such as crack [Muffet 1992] can identify them in a short
time, while other passwords are much more complex and the only practical means to get them
is by exhaustive searching. This is true for each class of arcs: some vulnerabilities are easily exploitable
by an attacker (e.g., the arc corresponding to a group membership), while others may
request knowledge, competence, tenacity or luck. This means that even if a path exists between
an attacker node and a target node, the system security has a low probability to be defeated if
an attacker needs a lot of cleverness, competence or time to run through all the arcs composing
the path. With the definition given in Section 1, a measure of the difficulty for the attackers to
reach the targets would be a good measure of the security of the system. To assess this measure,
each arc in the privilege graph can be assigned a weight corresponding to the "effort" needed
for a potential attacker to perform the privilege transfer corresponding to this arc. This notion
of effort is encompassing several characteristics of the attack process such as pre-existing
attack tools, time needed to perform the attack, computing power available for the attacker, etc.
[Littlewood, Brocklehurst et al. 1993]. For example, the effort needed to obtain a password can
be assessed by the computing power and the time needed by crack to identify the password.
For a Trojan horse attack, the effort can be assessed as the competence needed to design the
Trojan horse, the time needed to implant it in a program which can be executed by the target
user, and the time needed for the target user to activate it (the latter does not depend on the
attack process, but only on the user behavior). The effort weight assigned to an arc is thus a
compound parameter, which can be represented as a rate of success for the corresponding elementary
attack.
The following section presents a model to compute the global privilege graph security
measure from the elementary arc weights.
2.2 Assumptions
In order to evaluate quantitative measures characterizing the operational security based on
the privilege graph, it is necessary to identify the scenarios of attacks that may be attempted by
a potential attacker to reach the target. First, we assume that the attacker is sensible and he will
not attempt an attack which would give him privileges he already possesses. Additional assumptions
are required to characterize the progress of the attacker towards the target. Different
models can be defined depending on the assumptions considered about the behavior of the at-
tacker. The first model that can be considered is to assume that the attacker chooses the shortest
path leading to the target (denoted as SP in the following), i.e. the one which has the lowest
mean value of cumulated effort. The shortest path can be evaluated directly from the privilege
graph taking into account the rates assigned to the arcs. However, this assumption implicitly
means that the attacker knows in advance the whole topology of the privilege graph. But, to
build the whole privilege graph the attacker needs all the sets of privileges described in the
graph. If the attacker already has these privileges, he does not need to build the graph! Clearly,
the shortest path assumption is not satisfactory. In the following, we will introduce two alternative
assumptions and show that the corresponding security measures are more instructive for
the security administrators than measure SP.
The attacker's privileges increase as a result of his progress towards the target can be characterized
by a state-transition graph where each state identifies the set of privileges that he has
already gained and transitions between states occur when the attacker succeeds in an attack allowing
him to acquire new privileges. In order to fully characterize the attack process state
graph, we need to specify an additional assumption which defines which attacks will be attempted
by the attacker at each step of the attack process. Two different assumptions are discussed
hereafter, each of them corresponding to a specific attack process model (i.e. attacker profile):
Total memory (TM) assumption: at each step of the attack process, all the possibilities of attacks
are considered (i.e. those from the newly visited node of the privilege graph and those
from the already visited nodes that he did not apply previously). At each step, the
attacker may choose one attack among the set of possible attacks.
Memoryless (ML) assumption: at each newly visited node of the privilege graph, the attacker
chooses one of the attacks that can be issued from that node only (without considering the
other attacks from the already visited nodes that he did not apply previously).
For both assumptions, it is assumed that the attack process stops when the target is reached.
We do not consider situations where attackers may give up or interrupt their process.
Figure
2 plots the state graph attack process associated with the example given in Figure 1
when assumptions TM and ML are considered. It is assumed that "insider" is the attacker node
and "A" is the target node. To improve the clarity of the figure, X admin and insider are respectively
referred to as X and I. It can be seen that the scenarios of attacks represented in Figure 2-b
correspond to a subset of those identified in Figure 2-a.
2.3 Mathematical model
In order to be able to compare the evolution of the security measures corresponding to
assumptions TM, ML and SP, we need to specify the mathematical model that is used to
evaluate the mean effort for an attacker to reach the target. Our investigations led us to choose
a Markovian model which satisfies some intuitive properties regarding security evolution (see
[Dacier, Deswarte et al. 1996a, Dacier, Deswarte et al. 1996b] for further details). The Markov
model is based on the assumption that the probability to succeed in a given elementary attack
before an amount of effort "e" is spent is described by an exponential distribution given by:
l is the rate assigned to the attack. Practical considerations derived
from the use of this distribution are the following:
. The potential attacker will eventually succeed in reaching the target, if a path leading to
the target exists, provided that he spends enough effort.
. The mean effort to succeed in a given attack is given by 1/l.
The last point is particularly valuable since the knowledge of the attack rates is sufficient to
characterize the whole distribution. The first point deserves some clarifications. In fact, as our
aim is to evaluate system resiliency to successful attacks with respect to a specified target, we
only consider scenarios of attacks that eventually lead to the target and not the scenarios which
may be aborted during the attack process.
Based on the Markovian assumption, each transition in the state transition attack process is
rated with the success rate of the corresponding vulnerability. Various probabilistic measures
can be derived from the model, among these, the mean effort for a potential attacker to reach
the specified target, denoted as METF (Mean Effort To security Failure, by analogy with Mean
Time To Failure). This metric allows easy physical interpretation of the results: the higher the
METF the better the security. Moreover, simple analytical expressions can be easily obtained
and analyzed in order to check the plausibility of model results.
The METF is given by the sum of the mean efforts spent in the states leading to the target
which are weighted by the probability of visiting these states. The mean effort spent in state j,
denoted as E j , is given by the inverse of the sum of state j's output transition rates:
(1)
out(j) is the set of states reachable in a single transition from state j and l ji is the transition
rate from state j to state i.
(a) TM assumption (b) ML assumption
Fig. 2: Attack process state graph associated to
the example of Figure 1I
FI
I
FI
Let us denote by METF k the mean effort when state k is the initial state and P ki the conditional
probability transition from state k to state i, then:
(2)
According to this model, the highest output conditional probabilities values correspond to
the transitions with the highest success rates.
Clearly, the complexity of the METF computation algorithm is related to the size of the
attack process state graph associated to the assumption adopted: for assumption TM, the
number of paths leading to the target to be considered is higher than the number of paths corresponding
to assumption ML.
2.4 Assumptions TM, ML and SP: expected behaviors
In the following, we analyze the expected behaviors of the METF when assumptions TM,
ML and SP are considered.
2.4.1 Single path
Let us consider first the example of a privilege graph containing a single path between the
attacker node and the target node (see Figure 3). In this case, the METF is given by
where k is the number of arcs in the path and l j is the success rate associated to the
elementary attack j. The same value of the METF is obtained when either assumption TM, ML
or SP is considered. Clearly, as the number of arcs increases, the METF increases and the security
improves. Also, when the values of l j increase, the METF decreases and the security
degrades.
Fig. 3: Markov model corresponding to a single path
2.4.2 Multiple paths
As regards the SP assumption, the shortest path is obtained by identifying in the privilege
graph all the direct paths from the attacker node to the target node and evaluating the minimum
value of the METF among the values computed for each direct path. A direct path from the attacker
to the target is such that each node that belongs to this path is visited only once. The
expression of METF SP is:
is the rate assigned to the arc i that belongs
to direct path k, n is the number of direct paths.
The METF values corresponding to assumptions TM or ML can be obtained by processing
the corresponding state transition attack process. Let us consider the example of Figure 4
where A is the attacker and D is the target. The privilege graph (Figure 4-a) indicates the pre-
Z
Y
sence of two paths leading to the target. The Markov models corresponding to assumptions ML
and TM are given in Figure 4-b and 4-c respectively. Application of relations (1) and (2) leads
to the following expressions.
It could be seen that, for any value of l 1 , l 2 and l 3 , the expression of METF TM is always
lower than (which corresponds to the case where only the first path exists), and to
(which corresponds to the case where only the second path exists). This result illustrates
the fact that the addition of new paths leading to the target in the privilege graph surely
leads to a decrease of METF TM which indicates security degradation. This result can be easily
generalized, further details can be found in [Dacier 1994].
However, assumption ML leads to a different behavior since METF ML may increase or decrease
depending on the values of the parameters. For instance, METF ML is lower than
only if , i.e., when the mean effort spent in obtaining the privileges
of node D from node C is lower than the mean effort corresponding to the initial path. This is
due to the fact that, with assumption ML and contrarily to assumption TM, when the attacker
chooses a given path he never backtracks until he reaches the target. If the modifications introduced
in the privilege graph lead to some additional paths which are shorter than those derived
from the initial privilege graph then METF ML decreases, otherwise the METF ML increases.
From the previous discussion, it can be concluded that METF TM is always lower than the
mean effort calculated based on the shortest path only (METF TM - METF SP ). For assumption
ML, METF ML may be lower or higher than METF SP depending on the values of the parameters
and the structure of the privilege graph.
The last property that is worth mentioning concerns the comparison of METF ML with
METF TM . Since the attack scenarios corresponding to assumption ML are a subset of those obtained
with assumption TM, it can be proved that, for the same privilege graph, assumption ML
leads to higher METF values than assumption TM: METF ML - METF TM .
(a) Privilege graph (b) Assumption ML (c) Assumption TM
Fig. 4: Multiple paths-example
l 1
l 1 l 3
l 2
l 3
l l
l 4
l 1
l 1 l 3
l 2 l 3
l 3
l 2 l 3
l 2 l 4
l 3
l 1 l 3
l 1 l 4
l 1
l 1 l 4
l 2 l 4
A D
ACD
AC
l 1
l3
ABD
A
ACD
AC
l3
ABD
l3
ABCD
2.5 Discussion
Based on the results of the previous section, Table 1 summarizes the expected behavior for
measures METF ML and METF TM when the number of paths between the attacker and the
target set of privileges increases.
It is noteworthy that we do not consider the simultaneous occurrence of several modifications
of the privilege graph (addition or deletion of vulnerabilities, or modification of the rates
assigned to the vulnerabilities). Different simultaneous modifications may influence diversely
the system security and thus it is difficult to predict the type of behavior to be observed for the
security measures. In operational systems, it is frequent that only one modification of the privilege
graph occurs at a time. If the privilege graph is constructed each time a modification
occurs, then it is likely that the typical behaviors reported in Table 1 will be always satisfied.
When only one modification of the privilege graph occurs, we should expect that:
. if the number of paths increases because of the addition of a new vulnerability,
METF TM decreases since this new path weakens the security of the target,
. when the shortest path between the attacker and the target decreases, METF TM decreases
and shows a degradation of security.
. as discussed in the previous section, two kinds of behavior may be observed for
. if the new path decreases the probability of taking another relatively easy path
to the benefit of a longer new one, METF TM may increase (indicated as
behavior 2 in Table 1).
. otherwise, METF ML should have the same evolution as METF TM : it should decrease
as the number of paths increases and reveal a degradation of security
(behavior 1).
Clearly, assumption TM allows easier interpretation of security evolution. By analyzing the
variation of this measure together with the modifications introduced in the privilege graph, the
security administrators can assess whether these modifications have a significant impact on se-
curity. Based on these results, they can identify the most critical paths in the privilege graph and
take appropriate decisions: either correct some system weaknesses (when security decreases)
or keep the system configuration unchanged if the risks induced by the modifications are not
significant (either security increases or only a small decrease in the METF is observed).
As regards assumption ML, the increase of METF ML when the number of paths increases
may be considered as unrealistic as it could mean that the security increases when additional
vulnerabilities are introduced. This kind of measure behavior is due to the ML attacker profile
Number of
Number of
Table
1: Typical behaviors
which assumes that when the attacker chooses a given path he never leaves it. If the top events
appearing in the selected path correspond to easy attacks, then the attacker is inclined to choose
this path. Then, if the following attacks require too much effort to succeed, the mean effort to
reach the target will increase. The main question is whether this type of attacker profile is realistic
or not. It is difficult to answer this question because of lack of real data. In the experiment
presented in the following section, we will show that valuable information about security evolution
can be provided to security administrators even when only model ML is considered.
Indeed, as we are mainly interested in METF variation rather than in the absolute values of this
measure, any significant variation of the METF has to be thoroughly examined.
Concerning the shortest path, it is clear that the information provided by this measure is incomplete
as only one path in the privilege graph is taken into account. Therefore, the security
variation due to the presence of the other paths will not be identified if only the shortest path is
computed to monitor the operational security.
3 Experiment
3.1 Tools description
The experiment presented in this section has been conducted using a set of tools. The main
steps of the evaluation process are:
1) Definition of the security policy: For each security objective chosen for the system, the relevant
security targets (sets of privileges that must be protected), and the potential attackers
(sets of privileges against which targets should be protected) are identified. Each
attacker-target pair corresponds to two sets of nodes in the privilege graph for which one
quantitative evaluation is needed. A tool has been developed to describe formally the security
objectives from which all pairs are identified and gathered into a file.
Probing the system and building the privilege We have developed a tool named
ASA, for Automatic Security Advisor, which looks for known vulnerabilities in the Unix
system under study and builds the related privilege graph. The tool runs with extended
privileges in order to be able to analyze all parts of the system. So far, ASA is using many
procedures included in the COPS package [Farmer and Spafford 1990]. More precisely,
like in COPS, some Unix scripts scan the Unix file system, gathering information about
the access permissions of several files either for each user or for specific directories.
A crack program is run to guess user passwords using a standard dictionary. Each time
a vulnerability is detected in the system, an arc is added to the privilege graph under cons-
truction. As we do not know at this step of the analysis if the potential vulnerability identified
is a relevant security flaw, no correction is attempted. The output of the ASA tool
is therefore a privilege graph describing all known vulnerabilities of the target system at
the time of the snapshot. After probing the system, the privilege graph built may be recorded
in an archive. This archive will be regularly augmented by using classical Unix
tools such as cron to allow automatic and periodic analysis of the system.
Subsequently, another tool computes the security measures presented in
Section 2 for each security objective. These computations can be applied to either a single
privilege graph or a whole archive.
of security-relevant events: Last, to ease the analysis of the security measures
computed, a tool identifies for each significant variation of a measure the security
events that have caused it. More precisely, it looks for the arcs involved in the paths
between the attacker and target sets that changed between two consecutive privilege gra-
phs. This helps to identify the event(s) that caused this measure evolution. An example
of the output of this tool is given in Annex A.
3.2 Target system description
The system under observation in this experiment is a large distributed computer system of
more than a hundred different workstations connected to a local area network. There are about
700 users sharing one global file system. During the experiment, the total number of users have
changed frequently due to the arrival and departure of temporary staff (a frequent event for the
target system). The probing of security vulnerabilities is made on the global file system. In this
experiment, the system has been observed during 13 months on a daily basis, starting in
June 1995 until the end of July 1996. The archive of privilege graphs contains 385 items (one
for each day).
In the target system of this experiment, security is not a main concern of the users. Since no
critical information is stored in the system, it is not necessary to enforce a strong global security
policy, even if sometimes some users or the system administrators might worry about it for personal
reasons, or for safety reasons. This explains the important number of known
vulnerabilities that will be shown hereafter. It is noteworthy that most vulnerabilities persist
and are accepted because they often provide useful and convenient functionalities.
Furthermore, our main objective being to validate the behavior of the security measures, we
only observed the "natural" evolution of the system. We did not try to convince the users to
remove the vulnerabilities we had identified to improve the system security.
3.3 Experiment settings
3.3.1 Security objectives
Evaluating security measures requires that relevant sets of target(s) and attacker(s) be defi-
ned. These pairs are related to the security objectives one would like the system to fulfill as
much as possible. For a Unix system, one important target to protect against attacks is the root
account, and more generally, every account allowing to obtain superuser * privileges. Another
interesting target to study is the group of all system administrators, giving access to all the data
they share. To select a precise attacker we choose the "insider" set of privileges defined in
Section 2. Table 2 summarizes these case studies.
For the analysis of the second security objective, one problem appears due to the existence
of superusers in Unix. A superuser is able to get all the privileges of any other user in the sys-
tem. Thus, when we consider a target set of users to protect, this mechanism implicitly leads us
* In Unix, the superuser privilege bypasses the access control mechanisms.
to include the superuser in it (as this set of privileges includes any other set). If we had considered
objective 2 would have included objective 1, as if one sequence of privilege transfer
methods enables to defeat objective 1 it also defeats objective 2. In order to have completely
distinct case studies, we did not consider vulnerabilities linked to superuser properties for objective
2. We then removed from the privilege graphs all the instantaneous arcs going directly
from the superuser to the admin_group set of privileges.
3.3.2 Vulnerabilities
From all the known vulnerabilities in Unix, we monitored 13 among the most common, in-
cluding: password checking (with crack software); user-defined privilege transfer methods
incorrect/exploitable permissions on setuid files, .rhosts files or initialization
files (.profile, .cshrc, etc.); incorrect path search that allows Trojan horse attacks; etc.
A more detailed review of all the classical Unix vulnerabilities can be found in [Garfinkel and
Spafford 1995]. In addition to this, specific arcs labeled "instantaneous" correspond to inclusions
of privilege sets, for example, between one user node and all the nodes of the Unix groups
he belongs to.
Security state modifications, or events, occur when vulnerabilities are either created or eliminated
(arcs in the privilege graph are added or deleted) or when the value associated to one
vulnerability (the weight assigned to an arc) changes. Such events occurred frequently during
the experiment.
3.3.3 Quantification
For the experiment, we defined a four level classification scale (see Table 3), where the different
levels differ from each other by one order of magnitude, to rate the different
vulnerabilities: level1 corresponds to the easiest elementary attacks, and level4 to the most
difficult ones.
The various levels assigned to each attack are rather arbitrary. Evaluating precisely the success
rate of the various attacks present in the system would have required additional tools (such
as for recording user profiles) that are not currently available in our prototype. However, this is
Attacker Target
Objective 1 insider root
Objective 2 insider admin_group
Table
2: Security objectives
Name Weight
Table
3: Attack success rate levels
not a serious drawback as this experiment aims primarily at validating the security measures
behavior rather than precisely rate the security of the system.
3.4 Experiment results
The results of this experiment corresponding to objectives 1 and 2 are presented in Figure 5
and
Figure
6 respectively. The measures presented are: the number of paths found between attacker
and target sets, METF SP , METF TM and METF ML . The list of corresponding security
events is given in Annex A.
METF TM can only be computed when the number of paths between the attacker and the
target is relatively small. Thus, the thick line curves in the two graphics sometimes present gaps
Fig. 5: Measures evolution for objective 1
insider - root110006/95 07/95 08/95 09/95 10/95 11/95 12/95 01/96 02/96 03/96 04/96 05/96 06/96 07/96 08/96
insider - root
due to the uncomputability of this measure (unfortunately, very large gaps appear in Figure 6).
For each significant measure variation, the cause has been analyzed and a detailed description
is given in Annex A.
Experiment feedback
In the following, we analyze a subset of the events included in AnnexA to check the assumptions
and expected behaviors discussed in Section 2.4 and Section 2.5.
Fig. Measures evolution for objective 2
insider - admin_group10006/95 07/95 08/95 09/95 10/95 11/95 12/95 01/96 02/96 03/96 04/96 05/96 06/96 07/96 08/96
insider - admin_group
3.5.1 Events #2, #7 and #11 for objective 1
For objective 1, the events #2, #7 and #11 exhibit a global behavior of type 1 (see Table 1).
Each of these events satisfies the conditions in which such behavior should be observed: they
add one new vulnerability to those already available to the attacker to reach the root target,
therefore increasing the total number of possible paths between the attacker node and the tar-
get. Furthermore, the shortest path does not evolve because these new paths are not shorter than
the previous shortest one. More precisely, the vulnerabilities corresponding to these events are
described in Table 4 (extracted from Table 5):
As can be seen in Figure 5 and Figure 6 and in the detailed table of Annex A, METF TM
always decreased as the result of occurence of each of these events, showing a degradation of
security. The amplitude of this evolution is variable (depending on the difficulty related to the
new vulnerability and on its relative position with respect to previously existing paths). In fact,
this has been verified for METF TM on every single degradation of the security, but sometimes,
the relative variation of this measure is very small and is not visible on the plots. Therefore, in
the whole experiment, the behavior of METF TM was in agreement with the expectations. Mo-
reover, for each of the events #2, #7 and #11, METF ML evolution is similar to the evolution of
METF TM .
3.5.2 Event #11 for objective 2
For objective 2, event #11 has a different impact on the security measures, illustrating
behavior 2. In this case, an increase in the number of paths between the attacker and the target
has led to a decrease of METF TM and an increase of METF ML . We expected that the
METF ML and METF TM measures would not always evolve in the same direction. This happens
when a secondary path appears that lengthens a previous path: it influences METF ML , which
shows an improvement because the probability of selecting a fast path is reduced, while METF TM ,
affected only by the fact that a new path has been created, shows a degradation.
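To make the memoryless interpretation concrete, the following sketch (in Python; not part of the tools used in the experiment) computes an ML-style mean effort on a small acyclic privilege graph. It assumes that each arc carries a success rate, that the attacker picks the next attack with probability proportional to the rates of the available arcs, and that the mean effort spent at a node is the inverse of the sum of its outgoing rates; all node names and rates are illustrative.

    # ML-style mean effort to reach the target: solve the memoryless
    # recurrence over an acyclic privilege graph. Illustrative only.
    def metf_ml(graph, target, node):
        if node == target:
            return 0.0
        arcs = graph[node]                      # list of (successor, rate)
        total = sum(rate for _, rate in arcs)
        return 1.0 / total + sum((rate / total) * metf_ml(graph, target, s)
                                 for s, rate in arcs)

    g1 = {"insider": [("user1", 1.0)], "user1": [("root", 1.0)], "root": []}
    # Adding a long, low-rate secondary path *raises* the ML measure
    # (behavior 2), even though the number of paths has increased.
    g2 = {"insider": [("user1", 1.0), ("user2", 1.0)],
          "user1": [("root", 1.0)],
          "user2": [("user1", 0.2)],
          "root": []}
    print(metf_ml(g1, "root", "insider"))   # 2.0
    print(metf_ml(g2, "root", "insider"))   # 4.0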
3.5.3 Event #24 and period P for objective 1
At the beginning of March 1996, a strong decrease of the length of the shortest path between the attacker
and the target of objective 1 occurred (Event #24). Figure 5 shows that METF ML and METF TM
decrease as a result of this evolution.
Event  Date    Problem
#2             One user grants write permissions to everyone for his home directory
               (allowing, for example, to implant a Trojan horse that careless users
               could activate).
#7             Another user grants write permissions to everyone on his .login
               initialization file, allowing a major Trojan horse attack that would
               be activated at his next login.
#11    Oct 95  A third user grants write permission to everyone on his .rhosts file,
               enabling an immediate attack via the identity transfer mechanisms
               of Unix.
Table 4: Examples of vulnerabilities leading to behavior 1
Period P that followed Event #24 also exhibits an interesting behavior. During this period, it
can be seen, by comparing the two curves of Figure 5, that METF TM was nearly equal to
METF SP *. The influence of the shortest path is so important here that its length directly controls
the value of METF TM and the behavior of METF ML . We are in the case where it is possible for
the attacker to reach the target in a few very easy steps.
In such a situation, it is clear that the target is not well protected. Furthermore, as its security
is directly affected by the vulnerabilities appearing in the shortest path, it would be mandatory
to react and disable such vulnerabilities.
3.5.4 Events #6 and #13 for objectives 1 and 2
On both Figure 5 and Figure 6, we also observed a similar phenomenon: sudden large increases
of the number of paths (in November 1995 for objective 1, and at the end of
August 1995 and in November 1995 for objective 2). The problem involved was an incorrect setting
of the write permission for the others field of Unix permissions on an important
initialization file. This created a path to the target for nearly every user in the system, and thus
provided the "insider" user with as many additional paths as there were vulnerable
users in the overall system.
This phenomenon is normal, and is a very security-relevant event, but disturbs the evaluation
of security for two reasons:
- first, the dramatic increase in the number of paths precludes the computation of METF TM
(which should have shown a decrease in security);
- second, all the new paths being longer than the previous ones, METF ML increases. In fact,
the "insider" attacker is much more likely to choose a long path and spend a lot of effort
in the system before reaching the target, and METF ML is sensitive to that. This is a
normal evolution of METF ML , but may not be a satisfying indication of the overall security
evolution.
However, in these cases, the dramatic increase of the number of paths between the attacker
and the target indicates directly that a thorough security analysis must be performed.
3.5.5 Event #16 for objective 2
In order to validate our modeling assumptions, we need to consider the evolution of the security
measures when only single events occur, since the evolution cannot be predicted when
several conflicting events occur. For instance, the consequence of event #16 on objective 2
seems to contradict the conclusions of Section 3.4.2.1: it shows an increase of the number of
paths between the attacker and the target while both security measures METF TM and METF ML
increase. But this evolution is due to the occurrence of several simultaneous security events
within the period of one observation of the system (one day). Such a situation occurred more than
once during the experiment.
* In fact, the difference between measures TM and SP is very small (~10^-3); of course, this is not directly
visible on the plot.
In fact, when looking more closely at the description of event #16 given in Table 5, it can be
seen that three vulnerabilities were disabled for two different users, and that one user
enabled a new one. The first two changes should have a positive influence on security while the
last one should have the opposite effect (it increases the number of paths). The evolution of the
measures seems to indicate that the last one has the least impact.
3.6 Comparison of the various measures
In addition to the results discussed in Section 3.5, we make in the following some further remarks
comparing the different measures shown in Figure 5 and Figure 6.
3.6.1 Shortest path
During the whole period covered by the experiment, the shortest path evolved only three times (once
for objective 1 and twice for objective 2). This measure provides interesting information
about the easiest path present in the system; however, it is not dynamic enough to be useful for
operational monitoring of the security evolution. As indicated in Section 2.5, in comparison
with METF TM , the value of the shortest path does not take into account the fact that several
equivalent paths could be available. In fact, more than its length, it is the nature of this path and
of the vulnerabilities involved that is of interest for improving security, as this path
seems to have the major impact on METF ML and METF TM .
3.6.2 Number of paths
The number of paths between the attacker and the target is a sensitive measure (it varies
a lot), but it seems difficult to use alone for operational security monitoring.
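For illustration, this quantity can be computed by a straightforward enumeration of simple paths in the privilege graph; the following sketch (ours, with invented node names, not the prototype's code) shows the idea.

    # Count simple attacker-to-target paths in a privilege graph given as
    # an adjacency list. Node names are illustrative.
    def count_paths(graph, node, target, visited=frozenset()):
        if node == target:
            return 1
        visited = visited | {node}
        return sum(count_paths(graph, s, target, visited)
                   for s in graph.get(node, []) if s not in visited)

    g = {"insider": ["user1", "user2"],
         "user1": ["admin_group", "root"],
         "user2": ["user1"],
         "admin_group": ["root"]}
    print(count_paths(g, "insider", "root"))   # 4

A single new arc can multiply this count without changing the shortest path, which explains why the measure is noisy.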
First, it can be noticed that a security event leading to a decrease or an increase of the number
of paths between the attacker and the target does not necessarily lead to a significant variation
of the other security measures (see #1, 18, 19, 20, 22 of Table 5). Theoretically, it seems possible
to ignore such security events that have a minor influence on the mean effort to be spent by
an attacker to reach the target. We are in the case where the impact of the addition or deletion
of arcs in the privilege graph is relatively small compared to the global effort values even if the
number of paths varies.
On the contrary, we have identified some events that led to a significant evolution of
METF ML or METF TM whereas the number of paths changed only slightly: see #2, 3, 4, 11, 12, 16,
17, 21, 23 in Table 5. We are therefore able to detect these particular events that have an important
influence on the security of the system without significantly affecting the number of paths
between the attacker and the target.
Globally, we can see that the number of paths existing between the attacker and the target is
a measure that would raise a large number of alarms, among which some may be relatively
uninteresting. Moreover, not all important security events would lead to the raising of an
alarm. Consequently, this measure seems more difficult to use than METF ML and METF TM ,
and it is less reliable.
3.6.3 METF TM and METF ML
These measures exhibit an interesting behavior, with stability periods separated by several
variations. As can be seen in Annex A, each of these variations can be related to a security-relevant
event. However, we can also see that METF TM cannot be computed all the time, which
is a major drawback, and that METF ML sometimes exhibits a delicate behavior in which it shows
an increase of the effort needed by the attacker to reach the target when the number of paths
between them increases (behavior 2). This weakens the confidence we can have in METF ML ,
all the more so when a single security event such as #6 or #13 can lead to a great increase of that
measure.
However, it seems possible to rely on METF ML to reveal degradations of the security of
the target, and to react adequately to the most significant security events (unlike the
number of paths).
Among all the measures, METF TM seems to exhibit the most plausible behavior. Additional
work would be needed to reduce the complexity of the algorithm used to compute it, and thus
to obtain the values that are missing here for a complete comparison with METF ML .
3.7 Comparison with other security monitoring tools
Usually, tools dedicated to operational security analysis, such as COPS or SATAN, limit their
action to a direct check of the system that ends with a list of all the known vulnerabilities present
in it, possibly sorted into different classes according to their severity. As the prototype
presented in this paper is heavily based on such tools (see the ASA description in Section 3.1), we
gathered these data, and it is interesting to compare the information on security provided at the
end of this first step of the prototype with that obtained after the complete evaluation method is performed.
Figure 7 plots the evolution of the number of known vulnerabilities in the system during the
whole experiment. Figure 8 shows the same results, but details the distribution of the vulnerabilities
audited among the various severity levels considered.

Fig. 7: Evolution of the total number of vulnerabilities
Fig. 8: Evolution of the distribution of vulnerabilities (levels level1 to level4)

If we were to use directly the information provided by Figure 7 or Figure 8 to monitor the
security of the system, the number of alarms we would face would be very large. In fact,
each time a new security event occurs in the system, we would be obliged to analyze it more
precisely, even if it is a minor event, because we do not know exactly its
influence on the security objectives. Probably, to reduce the number of alarms, one would try
to take into account the severity level of the new vulnerabilities involved. However, we can see,
by comparing Figure 8, Figure 5 and Figure 6, that an evolution of the number of severe vulnerabilities
(level1 or level2) present in the system and a decrease of the overall security are
not always correlated.
Of course, our intention is not to depreciate the value of the results obtained by such automatic
tools: they are an essential first step in handling operational security monitoring, and our
evaluation method is mostly based on the data provided by these tools. However, and this is a well-known
problem, the number of alarms raised by such tools is large, and all of them cannot
always be taken care of easily. The evaluation measures presented in the previous section
enable the security administrator to extract from all these variations the ones that really
need reaction. Therefore, the results obtained by our evaluation method are complementary to
those derived from classical security analysis tools.
4 Conclusion
In this paper, we have presented an approach aiming at the quantitative evaluation of the security
of operational systems. The evaluation is based on a theoretical model, called the
privilege graph, which describes the system vulnerabilities that may offer opportunities to potential
attackers to defeat some security objectives. We have studied several modeling
assumptions and discussed the validity of these assumptions based on an experimental study
performed on a real system. Three different models are discussed corresponding to three assumptions
about the attacker behavior: SP, TM and ML. The experiment results show that
assumption TM is satisfactory because the behavior of the corresponding measure provides
useful feedback to the security administrators in order to monitor the security of their system;
i.e. evaluate the impact of the system vulnerabilities on the security objectives and identify the
vulnerabilities which may lead to security degradation. Unfortunately, the security measure associated
with assumption TM cannot always be computed, due to the complexity of the algorithm.
On the other hand, the computation of the measure related to assumption ML is easier. However,
it is more difficult for the security administrators to identify the appropriate actions to be
taken on the system based on the observed behavior of this measure only. In fact, in this case,
any variation of the security measure should be carefully analyzed whereas, for assumption
TM, only negative variations of the measure need be considered.
shortest path, the number of vulnerabilities and the number of paths are not sufficient to characterize
the operational security evolution.
The experimental results presented in this paper and the modeling assumptions considered
constitute a preliminary investigation into the feasibility of security evaluation of operational
systems taking into account their dynamic evolution, and into the benefits of such
evaluations. Further work is needed to improve the accuracy of the measures considered, in
order to improve our confidence in them and to help the security administrators better monitor
the security of their systems.
Acknowledgments
The authors are grateful to Marc Dacier, now at IBM Zürich, for his remarks on an early version
of this paper and for his pioneering work on quantitative evaluation of security. We also
thank the anonymous referees for their helpful reviews. This work has been partially supported
by UAP Assurances, and by the European Esprit Project no. 20072 "Design for Validation"
(DeVa).
--R
Towards Quantitative Evaluation of Computer Security
"The Privilege Graph: an Extension to the Typed Access Matrix Model"
"Models and Tools for Quantitative Assessment of Operational Security"
Quantitative Assessment of Operational Security: Models and Tools
"The COPS Security Checker System"
"Towards Operational Measures of Computer Security"
"Crack Version 4.1 - A Sensible Password Checker for Unix"
"Towards Operational Measures of Computer Security: Experimentation and Modelling"
--TR
--CTR
David M. Nicol, Modeling and Simulation in Security Evaluation, IEEE Security and Privacy, v.3 n.5, p.71-74, September 2005
Michael Yanguo Liu , Issa Traore, Empirical relation between coupling and attackability in software systems:: a case study on DOS, Proceedings of the 2006 workshop on Programming languages and analysis for security, June 10-10, 2006, Ottawa, Ontario, Canada
Paul Ammann , Duminda Wijesekera , Saket Kaushik, Scalable, graph-based network vulnerability analysis, Proceedings of the 9th ACM conference on Computer and communications security, November 18-22, 2002, Washington, DC, USA
B. B. Madan , K. S. Trivedi, Security modeling and quantification of intrusion tolerant systems using attack-response graph, Journal of High Speed Networks, v.13 n.4, p.297-308, January 2004
Joseph Pamula , Sushil Jajodia , Paul Ammann , Vipin Swarup, A weakest-adversary security metric for network configuration security analysis, Proceedings of the 2nd ACM workshop on Quality of protection, October 30-30, 2006, Alexandria, Virginia, USA
Shuo Chen , Jun Xu , Zbigniew Kalbarczyk , Ravishankar K. Iyer , Keith Whisnant, Modeling and evaluating the security threats of transient errors in firewall software, Performance Evaluation, v.56 n.1-4, p.53-72, March 2004
Somesh Jha , Jeannette M. Wing, Survivability analysis of networked systems, Proceedings of the 23rd International Conference on Software Engineering, p.307-317, May 12-19, 2001, Toronto, Ontario, Canada
Bharat B. Madan , Katerina Goeva-Popstojanova , Kalyanaraman Vaidyanathan , Kishor S. Trivedi, A method for modeling and quantifying the security attributes of intrusion tolerant systems, Performance Evaluation, v.56 n.1-4, p.167-186, March 2004
Yu-Sung Wu , Bingrui Foo , Yu-Chun Mao , Saurabh Bagchi , Eugene H. Spafford, Automated adaptive intrusion containment in systems of interacting services, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.5, p.1334-1360, April, 2007
Yanguo (Michael) Liu , Issa Traore, Complexity Measures for Secure Service-Oriented Software Architectures, Proceedings of the Third International Workshop on Predictor Models in Software Engineering, p.11, May 20-26, 2007
Huseyin Cavusoglu , Srinivasan Raghunathan, Configuration of Detection Software: A Comparison of Decision and Game Theory Approaches, Decision Analysis, v.1 n.3, p.131-148, September 2004
Y. Karabulut , F. Kerschbaum , F. Massacci , P. Robinson , A. Yautsiukhin, Security and Trust in IT Business Outsourcing: a Manifesto, Electronic Notes in Theoretical Computer Science (ENTCS), 179, p.47-58, July, 2007
Wang , Bharat B. Madan , Kishor S. Trivedi, Security analysis of SITAR intrusion tolerance system, Proceedings of the ACM workshop on Survivable and self-regenerative systems: in association with 10th ACM Conference on Computer and Communications Security, p.23-32, October 31-31, 2003, Fairfax, VA
Wei Li , Rayford B. Vaughn , Yoginder S. Dandass, An Approach to Model Network Exploitations Using Exploitation Graphs, Simulation, v.82 n.8, p.523-541, August 2006
Algirdas Avizienis , Jean-Claude Laprie , Brian Randell , Carl Landwehr, Basic Concepts and Taxonomy of Dependable and Secure Computing, IEEE Transactions on Dependable and Secure Computing, v.1 n.1, p.11-33, January 2004
David M. Nicol , William H. Sanders , Kishor S. Trivedi, Model-Based Evaluation: From Dependability to Security, IEEE Transactions on Dependable and Secure Computing, v.1 n.1, p.48-65, January 2004 | operational vulnerabilities;quantitative evaluation;privilege graph;security assessment |
325399 | Systematic Formal Verification for Fault-Tolerant Time-Triggered Algorithms. | Many critical real-time applications are implemented as time-triggered systems. We present a systematic way to derive such time-triggered implementations from algorithms specified as functional programs (in which form their correctness and fault-tolerance properties can be formally and mechanically verified with relative ease). The functional program is first transformed into an untimed synchronous system and, then, to its time-triggered implementation. The first step is specific to the algorithm concerned, but the second is generic and we prove its correctness. This proof has been formalized and mechanically checked with the PVS verification system. The approach provides a methodology that can ease the formal specification and assurance of critical fault-tolerant systems. | 1 Introduction
Synchronous systems are distributed computer systems where there are known
upper bounds on the time that it takes nonfaulty processors to perform certain
operations, and on the time that it takes for a message sent by one nonfaulty processor
to be received by another. The existence of these bounds simplifies the development
of fault-tolerant systems because nonfaulty processes executing a common
algorithm can use the passage of time to predict each others' progress. This property
contrasts with asynchronous systems, where there are no upper bounds on
processing and message delays, and where it is therefore provably impossible to
achieve certain forms of consistent knowledge or coordinated action in the presence
of even simple faults [6, 13].
For these reasons, fault-tolerant systems for critical control applications in
aircraft, trains, automobiles, and industrial plants are usually based on the synchronous
approach, though they differ in the extent to which the basic mechanisms
of the system really do guarantee satisfaction of the synchrony assumption. For
example, process scheduling algorithms that can miss deadlines, buffer overflows,
and contention buses such as Ethernet can all lead to violations of the synchrony
assumption, but may be considered "good enough" in less than truly critical
applications. Those applications that are truly critical, however, often build on mechanisms
that are not merely synchronous but synchronized and time-triggered: the
clocks of the different processors are kept close together, processors perform their
actions at specific times, and tasks and messages are globally and statically
scheduled. The Honeywell SAFEbus TM [1,17] that provides the safety-critical backplane
for the Boeing 777 Airplane Information Management System (AIMS) [31,39], the
control system for the Shinkansen (Japanese Bullet Train) [16], and the Time-Triggered
Protocol (TTP) proposed for safety-critical automobile functions [21] all
use this latter approach.
A number of basic functions have been identified that provide important building
blocks in the construction of fault-tolerant synchronous systems [8, 10]; these
include consensus (also known as interactive consistency and Byzantine
agreement) [33], reliable and atomic broadcast [9], and group membership [7]. Numerous
algorithms have been developed to perform these functions and, because of
their criticality and subtlety, several of them have been subjected to detailed formal
[15, 23, 43] and mechanically checked [2, 26-28, 34] verifications, as have their
combination into larger functions such as diagnosis [25], and their synthesis into a
fault-tolerant architecture based on active (state-machine) replication [11, 35].
Formal, and especially mechanically-checked, verification of these algorithms is
still something of a tour de force, however. To have real impact on practice, we
need to reduce the difficulty of formal verification in this domain to a routine and
largely automated process. In order to achieve this, we should study the sources
of difficulty in existing treatments and attempt to reduce or eliminate them. In
particular, we should look for opportunities for systematic treatments: these may
allow aspects common to a range of algorithms to be treated in a uniform way, and
may even allow some of those aspects to be broken out and verified in a generic
manner once and for all.
There is a wide range in the apparent level of difficulty and detail in the verifications
cited above. Some of the differences can be attributed to the ways in which
the problems are formalized or to the different resources of the formal specification
languages and theorem provers employed. For example, Rushby [34] and Bevier
and Young [2] describe mechanically checked formal verifications of the same "Oral
Messages" algorithm [24] for the consensus problem that were performed using different
verification systems. Young [42] argues that differences in the difficulty of
these treatments (that of [34] is generally considered simpler and clearer than that
of [2]) are due to different choices in the way things are formalized. We may assume
that such differences will be reduced or eliminated as experience is gained and the
better choices become more widely known.
More significant than differences due to how things are formalized are differences
due to what is formalized, and the level of detail considered necessary. For
example, both verifications of the Oral Messages algorithm mentioned above specify
the algorithm as a functional program and the proofs are conventional inductions.
Following this approach, the special case of a two-round algorithm (a variant of the
algorithm known as OM(1)) is specified in [28] in a couple of lines and its verification
is almost completely automatic. In contrast, the treatment of OM(1) in [23]
is long and detailed and quite complicated. The reason for its length and complexity
is that this treatment explicitly considers the distributed, message passing
character of the intended implementation, and calculates tight real-time bounds
on the timeouts employed. All these details are abstracted away in the treatments
using functional programs-but this does not mean these verifications are inferior
to the more detailed analyses: on the contrary, I would argue that they capture
the essence of the algorithms concerned (i.e., they explain why the algorithm is
fault tolerant) and that message-passing and real-time bounds are implementation
details that ought to be handled separately. In fact, most of the papers that introduce
the algorithms concerned, and the standard textbook [29], use a similarly
abstract and time-free treatment. On the other hand, it is undeniably important
also to verify a specification that is reasonably close to the intended
implementation, and to establish that the correct timeouts are used, and that the concrete
fault modes match those assumed in the more abstract treatment.
The natural resolution for these competing claims for abstractness and concreteness
is a hierarchical approach in which the essence of the algorithm is verified in
an abstract formulation, and a more realistic formulation is then shown to be a
refinement, in some suitable sense, of the abstract formulation. If things work
out well, the refinement argument should be a routine calculation of timeouts and
other concrete details. The purpose of this paper is to present a framework for such
a hierarchical treatment and to show that, for the important case of time-triggered
implementations of round-based algorithms, most of the details of the refinement
to a concrete formulation can be worked out once and for all.
2 Round-Based Algorithms
In her textbook [29], Nancy Lynch identifies algorithms for the synchronous
system model with those that execute in a series of "rounds." Rounds have two
phases: in the first, each processor [1] sends a message to some or all of the other
processors (different messages may be sent to different processors; the messages
will depend on the current state of the sending processor); in the second phase,
each processor changes its state in a manner that depends on its current state
and the collection of messages it received in the first phase. There is no notion of
real-time in this model: messages are transferred "instantaneously" from senders
to recipients between the two phases. The processors operate in lockstep: all of
them perform the two phases of the current round, then move on to the first phase
of the next round, and so on.
Several of the algorithms of interest here were explicitly formulated in terms of
rounds when first presented, and others can easily be recast into this form. For
example, the Oral Messages algorithm for consensus, OM(1), requires two rounds
as follows.
Algorithm OM(1)
Round 0:
Communication Phase: A distinguished processor called the transmitter
sends a value to all the other processors, which are called receivers; the
receivers send no messages.
Computation Phase: Each receiver stores the value received from the
transmitter in its state.
Round 1:
Communication Phase: Each receiver sends the value it received from the
transmitter to all the other receivers; the transmitter sends no message.
Computation Phase: Each receiver sets the "decision" component of its
state to the majority value among those received from the other receivers
and that (stored in its state) received from the transmitter.
In the presence of one or fewer arbitrary faults, OM(1) ensures that all nonfaulty
receivers decide on the same value and, if the transmitter is nonfaulty, that value
is the one sent by the transmitter.
There are two different ways to implement round-based algorithms. In the time-triggered
approach, the implementation is very close to the model: the processors
are closely synchronized (e.g., to within a couple of bit-times in the case of
SAFEbus) and all run a common, deterministic schedule that will cause them to
execute specific algorithms at specific times (according to their local clocks). The
sequencing of phases and rounds is similarly driven by the local clocks, and communication
bandwidth is also allocated as dedicated, fixed time slots. The first
(communication) phase in each round must be sufficiently long that all nonfaulty
processors will be able to exchange messages successfully; consequently, no explicit
timeouts are needed: a message that has not arrived by the time the second
(computation) phase of a round begins is implicitly timed out.
[1] I refer to the participants as processors to stress that they are assumed to fail independently;
the agents that perform these actions will actually be processes.
Whereas the allocation of resources is statically determined in the time-triggered
approach, in the other, event-triggered, approach, resources are scheduled dynamically
and processors respond to events as they occur. In this implementation
style, the initiation of a protocol may be triggered by a local clock, but subsequent
phases and rounds are driven by the arrival of messages. In Lamport and Merz'
treatment of OM(1), for example, a receiver that has received a message from the
transmitter may forward it immediately to the other receivers without waiting for
the clock to indicate that the next round has started (in other words, the pacing
of phases and rounds is determined locally by the availability of messages). Unlike
the time-triggered approach, messages may have to be explicitly timed out in
the event-triggered approach. For example, in Lamport and Merz' treatment of
OM(1), a receiver will not wait for relayed messages from other receivers beyond
2δ + ε past the start of the algorithm (where δ is the maximum communication
delay and ε the maximum time that it can take a receiver to decide to relay a
message).
Some algorithms were first introduced using an event-triggered formulation (for
example, Cristian's atomic broadcast and group membership algorithms [7, 9]),
but it is possible to reconstruct explicitly round-based equivalents for them, and
then transform them to time-triggered implementations (Kopetz' time-triggered
algorithms [19] for the same problems do this to some extent). Event-triggered
systems are generally easier to construct than time-triggered ones (which require
a big planning and scheduling effort upfront) and achieve better CPU utilization
under light load. On the other hand, Kopetz [20,21] argues persuasively that time-triggered
systems are more predictable (and hence easier to verify), easier to test,
make better use of broadcast communications bandwidth (since no addresses need
be communicated-these are implicit in the time at which a message is sent), can
operate closer to capacity, and are generally to be preferred for truly critical
applications. The previously mentioned SAFEbus for the Boeing 777, the Shinkansen
train control system, and the TTP protocol for automobiles are all time-triggered.
Our goal is a systematic method for transforming round-based protocols from
very abstract functional programs, whose properties are comparatively easy to
formally and mechanically verify, down to time-triggered implementations with
appropriate timing constraints and consideration for realistic fault modes. The
transformation is accomplished in two steps: first from a functional program to
an (untimed) synchronous system, then to a time-triggered implementation. The
first step is systematic but must be undertaken separately for each algorithm (see
Section 4); the other is generic and deals with a large class of algorithms and fault
assumptions in a single verification. This generic treatment of the second step is
described in the following section.
3 Round-Based Algorithms Implemented as Time-Triggered Systems
The issues in transforming an untimed round-based algorithm to a time-triggered
implementation are basically to ensure that the timing and duration of events in
the communication phase are such that messages between nonfaulty processors
always arrive in the communication phase of the same round, and that fault modes
are interpreted appropriately. To verify the transformation, we introduce formal
models for untimed synchronous systems and for time-triggered systems, and then
establish a simulation relation between them. This treatment has been formalized
and mechanically checked using the PVS verification system; see Section 3.4.
3.1 Synchronous Systems
For the untimed case, we use Nancy Lynch's formal model for synchronous
systems [29, Chapter 2], with some slight adjustments to the notation that make
it easier to match up with the mechanically verified treatment.
Untimed Synchronous Systems.
We assume a set mess of messages that includes a distinguished value null, and
a set proc of processors. Processors are partially connected by directed channels;
each channel can be thought of as a buffer that can hold a single message. Associated
with each processor p are the following sets and functions.
• A set out-nbrs_p of processors to which p is connected by outgoing channels.
• A set in-nbrs_p of processors to which p is connected by incoming channels;
the function inputs_p : in-nbrs_p → mess gives the message contained in each
of those channels.
• A set states_p of states, with a nonempty subset init_p of initial states. It is
convenient to assume that there is a component in the state that counts
rounds; this counter is zero in initial states.
• A function msg_p : states_p × out-nbrs_p → mess that determines the message
to be placed in each outgoing channel in a way that depends on the current
state.
• A function trans_p : states_p × inputs_p → states_p that determines the next
state, in a way that depends on the current state and the messages received
on the incoming channels.
The system starts with each processor in an initial state. All processors p then
repeatedly perform the following two actions in lockstep.
Communication Phase: apply the message generation function msg_p to the current
state to determine the messages to be placed in each outgoing channel.
(The message value null is used to indicate "no message.")
Computation Phase: apply the state transition function trans_p to the current
state and the message held in each incoming channel to yield the next state
(with the round counter incremented).

A particular algorithm is specified by supplying interpretations to the various sets
and functions identified above.
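As a concrete (if informal) illustration, the lockstep semantics above can be rendered executable; the following Python sketch is not part of the formal development, and the dictionaries standing in for the sets and functions are our own notation:

    # Minimal executable sketch of the untimed synchronous model. Every
    # processor first computes its outgoing messages from its current state,
    # then applies its transition function to the messages on its incoming
    # channels; all processors advance in lockstep.
    NULL = None  # the distinguished "no message" value

    def round_step(states, in_nbrs, msg, trans):
        outgoing = {p: msg[p](states[p]) for p in states}            # communication
        inputs = {p: {q: outgoing[q].get(p, NULL) for q in in_nbrs[p]}
                  for p in states}
        return {p: trans[p](states[p], inputs[p]) for p in states}   # computation

    def run(states, in_nbrs, msg, trans, rounds):
        for _ in range(rounds):
            states = round_step(states, in_nbrs, msg, trans)
        return states

    # Toy interpretation: every processor broadcasts its round counter
    # and increments it on each transition.
    procs = {"p1", "p2", "p3"}
    in_nbrs = {p: procs - {p} for p in procs}
    msg = {p: (lambda s: {q: s for q in procs}) for p in procs}
    trans = {p: (lambda s, i: s + 1) for p in procs}
    print(run({p: 0 for p in procs}, in_nbrs, msg, trans, 3))  # counters all 3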
Faults. Distributed algorithms are usually required to operate in the presence of
faults: the specific kinds and numbers of faults that may arise constitute the fault
hypothesis. Usually, processor faults are distinguished from communication faults;
the former can be modeled by perturbations to the transition functions trans_p,
and the latter by allowing the messages received along a channel to be changed
from those sent. Following [29, page 20], an execution of the system is then an
infinite sequence of triples (C_0, M_0, N_0), (C_1, M_1, N_1), (C_2, M_2, N_2), ...,
where C_r is the global state at the start of round r, M_r is the collection of messages
placed in the communication channels, and N_r is the (possibly different) collection
of messages received.
Because our goal is to show that a time-triggered implementation achieves the
same behavior as the untimed synchronous system that serves as its specification,
we will need some way to ensure that faults match up across the two systems. For
this reason, I prefer to model processor and communications faults by perturbations
to the trans_p and msg_p functions, respectively (rather than allowing messages
received to differ from those sent). In particular, I assume that the current round
number is recorded as part of the state and that if processor p is faulty in round
r, with current state s and the values of its input channels represented by the
array i, then trans_p(s, i) may yield a value other than that intended; similarly,
if the channel from p to q is faulty, then the value msg_p(s)(q) may be different
than intended (and may be null). Exactly how these values may differ from those
intended depends on the fault assumption. For example, a crash fault in round r
results in trans_p(s, i) = s and msg_p(s)(q) = null for all i, q, and states s whose
round component is r or greater. Notice that although trans_p and msg_p may no
longer be the intended functions, they are still functions; in fact, there is no need
to suppose that trans_p and msg_p were changed when the fault arrived in round
r: since the round counter is part of the state, we can just assume these functions
behave differently than intended when applied to states having round counters
equal to or greater than r.
The benefit of this treatment is that, since trans_p and msg_p are uninterpreted,
they can represent any algorithm and any fault behavior; if we can show that
a time-triggered system supplied with arbitrary trans_p and msg_p functions has
the same behavior as the untimed synchronous system supplied with the same
functions, then this demonstration encompasses behavior in the presence of faults
as well as the fault-free case. Furthermore, since we no longer need to hypothesize
that faults can cause differences between those messages sent and those received
(we instead assume the fault is in msg_p and the "different" messages were actually
sent), executions can be simplified from sequences of triples to simple sequences of
states C_0, C_1, C_2, ..., where C_r is the global state at the start of round r. Consequently, to demonstrate
that a time-triggered system implements the behavior specified by an untimed
synchronous system, we simply need to establish that both systems have the same
execution sequences; by mathematical induction, this will reduce to showing that
the global states of the two systems are the same at the start of each round r.
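Continuing the executable sketch above, a crash fault in round r can be modeled by perturbing the intended functions as follows (again a hypothetical illustration, assuming round_of is an accessor for the round counter recorded in the state):

    # A processor that crashes in round r_crash: from that round on, its
    # perturbed msg_p sends null everywhere and its perturbed trans_p
    # leaves the state unchanged, matching the crash fault described above.
    NULL = None

    def crash(msg_p, trans_p, r_crash, round_of):
        def msg(s):
            out = msg_p(s)
            return {q: NULL for q in out} if round_of(s) >= r_crash else out
        def trans(s, inputs):
            return s if round_of(s) >= r_crash else trans_p(s, inputs)
        return msg, trans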
3.2 Time-Triggered Systems
For the time-triggered system, we elaborate the model of the previous section
as follows.
Each processor is supplied with a clock that provides a reasonably accurate
approximation to "real" time. When speaking of clocks, it is usual to distinguish
two notions of time: clocktime, denoted C, is the local notion of time supplied by
each processor's clock, while realtime, denoted R, is an abstract global quantity. It
is also usual to denote clocktime quantities by upper case Roman or Greek letters,
and realtime quantities by lower case letters.
Formally, processor p's clock is a function C_p : R → C. The intended interpretation
is that C_p(t) is the value of p's clock at realtime t. [2] The clocks of nonfaulty
processors are assumed to be well-behaved in the sense of satisfying the following
assumptions.
Assumption 1 Monotonicity. Nonfaulty clocks are monotonic increasing functions:
t_1 < t_2 ⊃ C_p(t_1) < C_p(t_2).
Satisfying this assumption requires some care in implementation, because clock
synchronization algorithms can make adjustments to clocks that cause them to
jump backwards. Lamport and Melliar-Smith describe some solutions [22], and
a particularly clever and economical technique for one particular algorithm is introduced
by Torres-Pomales [40] and formally verified by Miner and Johnson [30].
Schmuck and Cristian [38] examine the general case and show that monotonicity
can be achieved with no loss of precision.
[2] In the terminology of [22], these are actually "inverse" clocks.
Assumption 2 Clock Drift Rate. Nonfaulty clocks drift from realtime at a rate
bounded by a small positive quantity ρ (typically of the order of 10^-6): for t_1 ≤ t_2,
|C_p(t_2) - C_p(t_1) - (t_2 - t_1)| ≤ ρ (t_2 - t_1).
Assumption 3 Clock Synchronization. The clocks of nonfaulty processors are
synchronized within some small clocktime bound Σ: for all nonfaulty p and q,
|C_p(t) - C_q(t)| ≤ Σ.
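For intuition, these assumptions can be checked numerically on a simple linear clock model (our illustration only; the formal treatment does not assume linearity):

    # C_p(t) = (1 + drift_p) * t + offset_p with |drift_p| <= rho; offsets
    # and drifts are chosen so the skew stays within Sigma on the interval
    # sampled. All values are illustrative.
    RHO, SIGMA = 1e-6, 0.002
    clocks = {"p": lambda t: (1 + 0.8e-6) * t + 0.0002,
              "q": lambda t: (1 - 0.5e-6) * t - 0.0002}

    for t in range(1000):
        for C in clocks.values():
            assert C(t) < C(t + 1)                            # Assumption 1
            assert abs((C(t + 1) - C(t)) - 1.0) <= RHO        # Assumption 2
        assert abs(clocks["p"](t) - clocks["q"](t)) <= SIGMA  # Assumption 3
    print("assumptions hold at all sampled instants")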
Time-Triggered Systems.
The feature that characterizes a time-triggered system is that all activity is
driven by a global schedule: a processor performs an action when the time on its
local clock matches that for which the action is scheduled. In our formal model,
the schedule is a function sched : N → C, where sched(r) is the clocktime at
which round r should begin. The duration of the r'th round is given by
dur(r) = sched(r + 1) - sched(r).
In addition, there are fixed global clocktime constants D and P that give the
offsets into each round when messages are sent, and when the computation phase
begins, respectively. Obviously, we need the following constraint.
Constraint 1. 0 ≤ D < P ≤ dur(r) for all rounds r.
Notice that the duration of the communication phase is fixed (by P ); it is only the
duration of the computation phase that can differ from one round to another. [3]
The states, messages, and channels of a time-triggered system are the same as
those for the corresponding untimed synchronous system, as are the transition
and message functions. In addition, processors have a one-place buffer for each
incoming message channel.
The time-triggered system operates as follows. Initially each processor is in an
initial state, with its round counter zero and its clock synchronized with the others
and initialized so that C_p(t_0) = sched(0), where t_0 is the current realtime. All
processors p then repeatedly perform the following two actions.
[3] In fact, there is no difficulty in generalizing the treatment to allow the time at which messages
are sent, and the duration of the communication phase, to vary from round to round. That is,
the fixed clocktime constants D and P can be systematically replaced by functions D(r) and
P (r), respectively. This generalization was developed during the mechanized verification; see
Section 3.4.
Communication Phase: This begins when the local clock reads sched(r), where
r is the current value of the round counter. Apply the message generation
function msg_p to the current state to determine the messages to be sent on
each outgoing channel. The messages are placed in the channels at local clock
time sched(r) + D. Incoming messages that arrive during the communication
phase (i.e., no later than sched(r) + P) are moved to the corresponding input
buffer, where they remain stable through the computation phase. These
buffers are initialized to null at the beginning of each communication phase
and their value is unspecified if more than one message arrives on their associated
communications channel in a given communication phase.
Computation Phase: This begins at local clock time sched(r) + P. Apply the
state transition function trans_p to the current state and the messages held in
the input buffers to yield the next state. The computation will be complete
at some local clock time earlier than sched(r + 1). Increment the round
counter, and wait for the start of the next round.

Message transmission in the communication phase is explained as follows. We
use sent(p, q, m, t) to indicate that processor p sent message m to processor q (a
member of out-nbrs(p)) at real time t (which must satisfy C_p(t) = sched(r) + D
for some round r). We use recv(q, p, m, t) to indicate that processor q received
message m from processor p (a member of in-nbrs(q)) at real time t (which must
satisfy the constraint sched(r) ≤ C_q(t) ≤ sched(r) + P for some round r). These
two events are related as follows.
Assumption 4 Maximum Delay. When p and q are nonfaulty processors,
sent(p, q, m, t) ⊃ recv(q, p, m, t + d)
for some 0 ≤ d ≤ δ.
In addition, we require no spontaneous generation of messages (i.e., recv(q, p, m, t')
only if there is a corresponding sent(p, q, m, t)).
Provided there is exactly one recv(q, p, m, t) event for each p in the communication
phase for round r on processor q (as there will be if p is nonfaulty), that
message m is moved into the input buffer associated with p on processor q before
the start of the computation phase for that round and remains there throughout
the phase.
Because the clocks are not perfectly synchronized, it is possible for a message sent
by a processor with a fast clock to arrive while its recipient is still on the previous
round. It is for this reason that we do not send messages until D clocktime units
into the start of the round. In general, we need to ensure that a message from
a processor in round r cannot arrive at its destination before the destination processor has
started round r, nor after it has finished the communication phase for round r. We
must establish constraints on parameters to ensure these conditions are satisfied.
Now processor p sends its message to processor q, say, at realtime t where C_p(t) = sched(r) + D
and, by the maximum delay assumption, the message will arrive at realtime t + d for some 0 ≤ d ≤ δ.
We need to be sure that
sched(r) ≤ C_q(t + d) ≤ sched(r) + P. (1)
By clock synchronization, we have |C_q(t) - C_p(t)| ≤ Σ; substituting C_p(t) = sched(r) + D we obtain
sched(r) + D - Σ ≤ C_q(t) ≤ sched(r) + D + Σ. (2)
By the monotonic clocks assumption, this gives
sched(r) + D - Σ ≤ C_q(t) ≤ C_q(t + d)
and so the first inequality in (1) can be ensured by
Constraint 2. D ≥ Σ.
The clock synchronization calculation (2) above also gives C_q(t) ≤ sched(r) + D + Σ,
and the clock drift rate assumption gives
C_q(t + d) ≤ C_q(t) + (1 + ρ)d ≤ C_q(t) + (1 + ρ)δ,
from which it follows that
C_q(t + d) ≤ sched(r) + D + Σ + (1 + ρ)δ.
Thus, the second inequality in (1) can be ensured by
Constraint 3. D + Σ + (1 + ρ)δ ≤ P.
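For concreteness, with illustrative values Σ = 0.1 ms, δ = 1 ms, and ρ = 10^-6, Constraints 2 and 3 give D ≥ 0.1 ms and P ≥ 1.2 ms (approximately); a trivial computation:

    # Smallest offsets satisfying Constraints 2 and 3 (values in ms,
    # purely illustrative).
    SIGMA, DELTA, RHO = 0.1, 1.0, 1e-6
    D = SIGMA                              # Constraint 2: D >= Sigma
    P = D + SIGMA + (1 + RHO) * DELTA      # Constraint 3 taken with equality
    print(D, P)                            # 0.1 1.2000010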
Faults. We will prove that a time-triggered system satisfying the various assumptions
and constraints identified above achieves the same behavior as an untimed
synchronous system supplied with the same trans_p and msg_p functions. I explained
earlier that faults are assumed to be modeled in the trans_p and msg_p functions;
by using the same functions in both the untimed and time-triggered systems, we
ensure that the latter inherits the same fault behavior and any fault-tolerance
properties of the former. Thus, if we have an algorithm that has been shown, in its
untimed formulation, to achieve some fault-tolerance properties (e.g., "this algorithm
resists a single Byzantine fault or two crash faults"), then we may conclude
that the implementation has the same properties.
This simple view is somewhat compromised, however, because the time-triggered
system contains a mechanism (time triggering) that is not present in the untimed
system. This mechanism admits faults (notably, loss of clock synchronization) that
do not arise in the untimed system. An implementation must ensure that such
faults are either masked, or are transformed in such a way that their manifestations
are accurately modeled by perturbations in the trans_p and msg_p functions.
In general, it is desirable to transform low-level faults (i.e., those outside the
model considered here) into the simplest (most easily tolerated) fault class for the
algorithm concerned. If no low-level mechanism for dealing with loss of clock synchronization
is present, then synchronization faults may manifest themselves as
arbitrary, Byzantine faults to the abstract algorithm. For example, if one
processor's clock drifts to such an extent that it is in the wrong round, then it will
execute the transition and message functions appropriate to that round and will
supply systematically incorrect messages to the other processors. This could easily
appear as Byzantine behavior at the level of the untimed synchronous algorithm.
For this reason, it is desirable to include the round number in messages, so that
those from the wrong round can be rejected (thereby reducing the fault manifestation
to fail-silence). TTP goes further and includes all critical state information
(operating mode, time, and group membership) in its messages as part of the CRC
calculation [21].
Less drastic clock skews may leave a processor in the right round, but sending
messages at the wrong time, so that they arrive during the computation phases of
the other (correct) processors. It is partly to counter this fault mode that the time-triggered
model used here explicitly moves messages from their input channels to an
input buffer during the communication phase: this shields the receiving processor
from any changes in channel contents during the computation phase.
If the physical implementation of the time-triggered system multiplexes its communications
channels onto shared buses, then it is necessary to control the "babbling
idiot" fault mode where a faulty processor disrupts the communications of
other processors by speaking out of turn. In practice, this is controlled by a Bus
Interface Unit (BIU) that only grants access to the bus at appropriate times. For
example, in SAFEbus, processors are paired, with each member of a pair controlling
the other's BIU; in TTP, the BIU has independent knowledge of the schedule.
In both cases, babbling can occur only if there are undetected double failures.
3.3 Verification
We now need to show that a time-triggered system achieves the same behavior
as its corresponding untimed synchronous system. We do this in the traditional
way by establishing a simulation relationship between the states of an execution
of the time-triggered system and those of the corresponding untimed execution.
It is usually necessary to invent an "abstraction function" to relate the states of
an implementation to those of its specification; here, however, the states of the
two systems are the same, and the only difficult point is to select the moments in
time at which states of the time-triggered system should correspond to those of
the untimed system.
The untimed system makes progress in discrete global steps: all component
processors perform their communication and computation phases in lockstep, so
it is possible to speak of the complete system being in a round r. The processors
of the time-triggered system, however, progress separately at a rate governed by
their internal clocks, which are imperfectly synchronized, so that one processor
may still be on round r while another has moved on to round r + 1. We need to
establish some consistent "cut" through the time-triggered system that provides a
global state in which all processors are at the same point in the same round. In
some treatments of distributed systems, it is not necessary for the global cut to
correspond to a snapshot of the system at a particular realtime instant: the cut may
be an abstract construction that has no direct realization. In our case, however,
it is natural to assume that the time-triggered system is used in some control
application and that outputs of the individual processors (i.e., some functions
of their states) are used to provide redundant control signals in real time; for
example, a typical application will be one in which the outputs of the processors
are subjected to majority voting, or separately drive some actuator in a "force-
summing" configuration. 4 Consequently, we do want to identify the cut through
the system with its global state at a specific real time instant.
In particular, we need some realtime instant gs(r) that corresponds to the
"global start" of the r'th round. We want this instant to be one in which all
nonfaulty processors have started the r'th round, but have not yet started its
computation phase (when they will change their states).
We can achieve this by defining the global start time gs(r) for round r to be the
realtime when the processor with the slowest clock begins round r. That is, gs(r)
satisfies the following constraints:
C_q(gs(r)) ≥ sched(r) for every nonfaulty processor q (3)
and
C_p(gs(r)) = sched(r) for some nonfaulty processor p (4)
(intuitively, p is the processor with the slowest clock).
Since the processors are not perfectly synchronized, we need to be sure that they
cannot drift so far apart that some processor q has already reached its computation
phase (or is even on the next round) at gs(r). Thus, we need
C_q(gs(r)) < sched(r) + P. (5)
By (3) we have C_q(gs(r)) ≥ sched(r); writing X = C_q(gs(r)) - sched(r), applying
the clock synchronization assumption to the processor p of (4) then gives X ≤ Σ. Now processor q will still be on
round r and in its communication phase provided X < P, and this is ensured by
the inequality just derived when taken together with Constraint 3.
[4] For example, the outputs of different processors may energize separate coils of a single
solenoid, or multiple hydraulic pistons may be linked to a single shaft (see, e.g., [12, Figure
3.2-2]).
We now wish to establish that the global state of a time-triggered system at time
gs(r) will be the same as that of the corresponding untimed synchronous system
at the start of its r'th round. We denote the global state of the untimed system at
the start of the r'th round by gu(r) (for global untimed ). Global states are simply
arrays of the states of the individual processors, so that the state of processor p
at this point is gu(r)(p). Similarly, the global state of the time-triggered system
at time gs(r) is denoted gt(r) (for global timed ), and the state of its processor p is
gt(r)(p). We can now state and prove the desired result.
Theorem 1 Given the same initial states, the global states of the untimed and
time-triggered systems are the same at the beginning of each round: gu(r) = gt(r).
Proof: The proof is by induction.
Base case. This is the case r = 0; the systems are then in their initial states
which, by hypothesis, are the same.
Inductive step. We assume the result for r and prove it for r + 1. For the
untimed case, the message inputs_q(p) from processor p received by q in the r'th
round is msg_p(gu(r)(p))(q). [5]
By the inductive hypothesis, the global state of processor p in the time-triggered
system at time gs(r) is gu(r)(p) also. Furthermore, processor p is in its communication
phase (ensured by (5)) and has not changed its state since starting the
round. Thus, at local clocktime sched(r) + D, it sends msg_p(gu(r)(p))(q) to q. By
(1), this is received by q while in the communication phase of round r, and transferred
to its input buffer inputs_q(p). Thus, the corresponding processors of the
untimed and time-triggered systems have the same state and input components
when they begin the computation phase of round r. The same state transition
functions trans_p are then applied by the corresponding processors of the two systems
to yield the same values for the corresponding elements of gu(r + 1) and gt(r + 1),
completing the inductive proof.
[5] For the benefit of those not used to reading Curried higher-order function applications, this
is decoded as follows: gu(r)(p) is p's state in round r; msg_p applied to that
state gives msg_p(gu(r)(p)), which is an array of the messages sent to its outgoing
channels; q's component of that array is msg_p(gu(r)(p))(q).
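Although the mechanized proof is described next, Theorem 1 can also be exercised on small instances by simulation. The sketch below is entirely our own construction, with a trivial "sum the counters" algorithm, linear clocks, and illustrative parameters satisfying Constraints 1-3; it checks both inequality (1) and the equality of global states at each gs(r):

    import random

    # Brute-force check of Theorem 1 on a toy instance. Parameters satisfy
    # Constraints 1-3: D >= Sigma and D + Sigma + (1+rho)*delta <= P <= dur.
    random.seed(1)
    RHO, SIGMA, DELTA = 1e-4, 0.01, 0.05
    D, P, R = 0.02, 0.09, 0.2
    sched = lambda r: r * R                     # dur(r) = R for every round

    procs = ["p", "q", "s"]
    drift = {"p": 5e-5, "q": -5e-5, "s": 0.0}
    off = {"p": 0.004, "q": -0.003, "s": 0.0}
    C = lambda p, t: (1 + drift[p]) * t + off[p]
    inv = lambda p, T: (T - off[p]) / (1 + drift[p])  # realtime at clocktime T

    def msg(state):                    # broadcast own state to everyone
        return {q: state for q in procs}
    def trans(state, inputs):          # sum the received counters
        return state + sum(inputs.values())

    untimed = {p: 1 for p in procs}    # gu(0)
    timed = {p: 1 for p in procs}      # gt(0)
    for r in range(5):
        gs = max(inv(p, sched(r)) for p in procs)    # global start of round r
        assert timed == untimed                      # Theorem 1: gt(r) = gu(r)
        inputs = {q: {} for q in procs}
        for p in procs:                              # p sends at C_p = sched(r)+D
            t_arr = inv(p, sched(r) + D) + random.uniform(0, DELTA)
            for q in procs:
                assert sched(r) <= C(q, t_arr) <= sched(r) + P   # inequality (1)
                inputs[q][p] = msg(timed[p])[q]
        timed = {q: trans(timed[q], inputs[q]) for q in procs}
        untimed = {q: trans(untimed[q], {p: msg(untimed[p])[q] for p in procs})
                   for q in procs}
    print("global states agree at every gs(r)")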
3.4 Mechanized Verification
The treatment of synchronous and time-triggered systems in Sections 3.1 and 3.2
has been formally specified in the language of the PVS verification system [32], and
the verification of Section 3.3 has been mechanically checked using PVS's theorem
prover. The PVS language is a higher-order logic with subtyping, and formalization
of the semiformal treatment in Sections 3.1 and 3.2 was quite straightforward. The
PVS theorem prover includes decision procedures for integer and real linear arithmetic,
and mechanized checking of the calculations in Section 3.3, and of the proof
of the Theorem, was also quite straightforward. The complete formalization and
mechanical verification took less than a day, and no errors were discovered. The formal
specification and verification are described in the Appendix; the specification
files themselves are available at URL http://www.csl.sri.com/dcca97.html.
While it is reassuring to know that the semiformal development withstands
mechanical scrutiny, we have argued previously (for example, [32,36]) that mechanized
formal verification provides several benefits in addition to the "certification"
of proofs. In particular, mechanization supports reliable and inexpensive exploration
of alternative designs, assumptions, and constraints. In this case, I wondered
whether the requirement that messages be sent at the fixed offset D clocktime units
into each round, and that the computation phase begin at the fixed offset P , might
not be unduly restrictive. It was the work of a few minutes to generalize the formal
specification to allow these offsets to become functions of the round, and to adjust
the mechanized proofs. I contend that corresponding revisions to the semiformal
development in Sections 3.2 and 3.3 would take longer than this, and that it would
be difficult to summon the fortitude to scrutinize the revised proofs with the same
care as the originals.
4 Round-Based Algorithms as Functional Programs
The Theorem of Section 3.3 ensures that synchronous algorithms are correctly
implemented by time-triggered implementations that satisfy the various
assumptions, constraints, and constructions introduced in the previous section. The next
(though logically preceding) step is to ask how one might verify properties of a
particular algorithm expressed as an untimed synchronous system.
Although simpler than its time-triggered implementation, the specification of
an algorithm as a synchronous system is not especially convenient for formal (and
particularly mechanized) verification because it requires reasoning about attributes
of imperative programs: explicit state and control. It is generally easier to verify
functional, rather than imperative, programs because these represent state and
control in an applicative manner that can be expressed directly in conventional
logic.
There is a fairly systematic transformation between synchronous systems and
functional programs that can ease the verification task by allowing it to be performed
on a functional program. I illustrate the idea (which comes from Bevier
and Young [2]) using the OM(1) algorithm from Section 2. Because that algorithm
has already been introduced as a synchronous system, I will illustrate its transformation
to a functional program; once the technique becomes familiar, it is easy to
perform the transformation in the other direction.
We begin by introducing a function send(r, v, p, q) to represent the sending of a message with value v from processor p to processor q in round r. The value of the function is the message received by q. If p and q are nonfaulty, then this value is v; otherwise it depends on the fault modes considered (in the Byzantine case it is left entirely unconstrained, as here).
If T represents the transmitter, v its value, and q an arbitrary receiver, then the communication phase of the first round of OM(1) is represented by

send(0, v, T, q). (5)
The computation phase of this round simply moves the messages received into the
states of the processors concerned, and can be ignored in the functional treatment
(though see Footnote 6).
In the communication phase of the second round, each processor q sends the value received in the first round (i.e., send(0, v, T, q)) on to the other receivers. If p is one such receiver, then this is described by the functional composition

send(1, send(0, v, T, q), q, p). (6)
In the computation phase for the second round, processor p gathers all the messages received in the communication phase and subjects them to majority voting.6 Now (6) represents the value p receives from q, so we need to gather together in some way the values in the messages p receives from all the other receivers q, and use that combination as an argument to the majority vote function. How this "gathering together" is represented will depend on the resources of the specification language and logic concerned: in the treatment using the Boyer-Moore logic, for example, it is represented by a list of values [2]. In a higher-order logic such as PVS [32], however, it can be represented by a function, specified as a lambda-abstraction:

lambda q: send(1, send(0, v, T, q), q, p)

(i.e., a function that, when applied to q, returns the value that p received from q).
Majority voting is represented by a function maj that takes two arguments: the "participants" in the vote, and a function over those participants that returns the
Footnote 6: In the formulation of the algorithm as a synchronous system, p votes on the messages from the other receivers, and the message that it received directly from the transmitter, which it has saved in its state. In the functional treatment, q includes itself among the recipients of the message that it sends in the communication phase of the second round, and so the vote is simply over messages received in that round.
value associated with each of them. The function maj returns the majority value
if one exists; otherwise some functionally determined value. (This behavior can
either be specified axiomatically, or defined constructively using an algorithm such
as Boyer and Moore's linear time MJRTY [4].) Thus, p's decision in the computation
phase of the second round is represented by

maj(rcvrs, lambda q: send(1, send(0, v, T, q), q, p)),

where rcvrs is the set of all receiver processors. We can use this formula as the definition for a higher-order function OM1(T, v) whose value is a function that gives the decision reached by each receiver p when the (possibly faulty) transmitter T sends the value v:

OM1(T, v) = lambda p: maj(rcvrs, lambda q: send(1, send(0, v, T, q), q, p)). (7)
The properties required of this algorithm are the following, provided the number of receivers is three or more, and at most one processor is faulty.
Agreement: if receivers p and q are both nonfaulty, then OM1(T, v)(p) = OM1(T, v)(q).
Validity: if the transmitter T and receiver p are both nonfaulty, then OM1(T, v)(p) = v.
Definition (7) and the requirements for Agreement and Validity stated above are
acceptable as specifications to PVS almost as given (PVS requires we be a little
more explicit about the types and quantification involved). Using a constructive definition for maj, PVS can prove Agreement and Validity for a specific number of processors (e.g., n = 4) completely automatically. For the general case of n >= 4 processors, PVS is able to prove Agreement with only a single user-supplied proof directive, while Validity requires half a dozen (the only one requiring "insight" is a case-split on whether the transmitter is faulty).
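Independently of PVS, the functional formulation lends itself to quick exhaustive testing. The following Python sketch (invented names; binary values; a lookup-table model of Byzantine behaviour) checks Agreement and Validity of definition (7) for one transmitter, three receivers, and at most one faulty processor:

```python
# Exhaustive sanity check (a sketch, not the paper's PVS proof) of OM(1)
# as the functional program (7).  All names here are illustrative.
from itertools import product

def maj(participants, f):
    """Majority of f(q) over participants; a fixed default if none exists."""
    values = [f(q) for q in participants]
    for v in set(values):
        if values.count(v) * 2 > len(values):
            return v
    return 0

def om1(send, rcvrs, T, v):
    # definition (7): p votes on the values relayed by all receivers
    return lambda p: maj(rcvrs, lambda q: send(1, send(0, v, T, q), q, p))

procs, rcvrs, T = ['T', 'A', 'B', 'C'], ['A', 'B', 'C'], 'T'
for bad in [None] + procs:                        # at most one faulty processor
    faulty = set() if bad is None else {bad}
    keys = [(r, p, q) for r in (0, 1) for p in faulty for q in procs]
    for lies in product((0, 1), repeat=len(keys)):  # all Byzantine behaviours
        table = dict(zip(keys, lies))
        send = lambda r, v, p, q: table[(r, p, q)] if p in faulty else v
        for v in (0, 1):
            decide = om1(send, rcvrs, T, v)
            good = [p for p in rcvrs if p not in faulty]
            assert len({decide(p) for p in good}) == 1          # Agreement
            if T not in faulty:
                assert all(decide(p) == v for p in good)        # Validity
print("Agreement and Validity hold in all checked cases")
```

This brute-force check is no substitute for the general proof, but it illustrates why the functional form is convenient: the algorithm is a one-line lambda-expression that can be executed and quantified over directly.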
Not all synchronous systems can be so easily transformed into a recursive function, nor can their properties always be formally verified so easily. Nonetheless, I believe the approach has promise for many algorithms of practical interest. A similar method has been advocated by Florin, Gómez, and Lavallée [14].
5 Conclusion
Many round-based fault-tolerant algorithms can be formulated as synchronous
systems. I have shown that synchronous systems can be implemented as time-triggered
systems and have proved that, provided care is taken with fault modes,
the correctness and fault-tolerance properties of an algorithm expressed as a synchronous
system are inherited by its time-triggered implementation. The proof
identifies necessary timing constraints and is independent of the particular algorithm
concerned; it provides a more general and abstract treatment of the analysis
performed for a particular system by Di Vito and Butler [5]. The relative simplicity
of the proof supports the argument that time-triggered systems allow for
straightforward analysis and should be preferred in critical applications for that
reason [20].
I have also shown, by example, how a round-based algorithm formulated as a
synchronous system can be transformed into a functional "program" in a specification
logic, where its properties can be verified more easily, and more mechanically.
Systematic transformations of fault-tolerant algorithms from functional programs to synchronous systems to time-triggered implementations provide a methodology that can significantly ease the specification and assurance of critical fault-tolerant systems. In current work, we are applying the methodology to
some of the algorithms of TTP [21].
Acknowledgments
Discussions with N. Shankar and advice from Joseph Sifakis were instrumental
in the development of this work. Comments by the anonymous referees improved
the presentation.
--R
ARINC Specification 659: Backplane Data Bus.
The design and proof of correctness of a fault-tolerant circuit
On the impossibility of group membership.
Reaching agreement on processor-group membership in synchronous distributed systems
Understanding fault-tolerant distributed systems
Atomic broadcast: From simple message diffusion to Byzantine agreement.
Di Vito and
The General Dynamics Case Study on the F16 Fly-by-Wire Flight Control System
Impossibility of distributed consensus with one faulty process.
Systematic building of a distributed recursive algorithm.
Group membership protocol: Specification and verification.
The concepts and technologies of dependable and real-time computer systems for Shinkansen train control
SAFEbus TM
Fault Tolerant Computing Symposium 25: Highlights from 25 Years
Should responsive systems be event-triggered or time-triggered? IEICE Transactions on Information and Systems
Synchronizing clocks in the presence of faults.
Specifying and verifying fault-tolerant systems
The Byzantine Generals problem.
Formally verified algorithms for diagnosis of manifest
Formal verification of an algorithm for interactive consistency under a hybrid fault model.
A formally verified algorithm for interactive consistency under a hybrid fault model.
Formal verification of an interactive consistency algorithm for the Draper FTP architecture under a hybrid fault model.
Distributed Algorithms.
Verification of an optimized fault-tolerant clock synchronization circuit: A case study exploring the boundary between formal reasoning systems
Integrated modular avionics for next-generation commercial airplanes
Formal verification for fault-tolerant architectures: Prolegomena to the design of PVS
Reaching agreement in the presence of faults.
Formal verification of an Oral Messages algorithm for interactive consistency.
A fault-masking and transient-recovery model for digital flight-control systems
A formally verified algorithm for clock synchronization under a hybrid fault model.
A less elementary tutorial for the PVS specification and verification system.
Continuous clock amortization need not affect the precision of a clock synchronization algorithm.
Boeing's seventh wonder.
An optimized implementation of a fault-tolerant clock synchronization circuit
Formal Techniques in Real-Time and Fault-Tolerant Systems
Comparing verification systems: Interactive Consistency in ACL2.
Formal specification and compositional verification of an atomic broadcast protocol.
--TR
--CTR
Clara Benac Earle , Lars-Åke Fredlund , John Derrick, Verifying fault-tolerant Erlang programs, Proceedings of the 2005 ACM SIGPLAN workshop on Erlang, September 26-28, 2005, Tallinn, Estonia
Christoph Kreitz, Building reliable, high-performance networks with the Nuprl proof development system, Journal of Functional Programming, v.14 n.1, p.21-68, January 2004
Faith Fich , Eric Ruppert, Hundreds of impossibility results for distributed computing, Distributed Computing, v.16 n.2-3, p.121-163, September | formal verification;formal methods;time-triggered algorithms;synchronous systems;PVS |
325511 | The Riemann Zeros and Eigenvalue Asymptotics. | Comparison between formulae for the counting functions of the heights tn of the Riemann zeros and of semiclassical quantum eigenvalues En suggests that the tn are eigenvalues of an (unknown) hermitean operator H, obtained by quantizing a classical dynamical system with hamiltonian Hcl. Many features of Hcl are provided by the analogy; for example, the "Riemann dynamics" should be chaotic and have periodic orbits whose periods are multiples of logarithms of prime numbers. Statistics of the tn have a similar structure to those of the semiclassical En; in particular, they display random-matrix universality at short range, and nonuniversal behaviour over longer ranges. Very refined features of the statistics of the tn can be computed accurately from formulae with quantum analogues. The Riemann-Siegel formula for the zeta function is described in detail. Its interpretation as a relation between long and short periodic orbits gives further insights into the quantum spectral fluctuations. We speculate that the Riemann dynamics is related to the trajectories generated by the classical hamiltonian Hcl=XP. |
d_fl(x) = −(1/log x) · 1/(x(x² − 1)) − (2/(√x log x)) Σ_{Re t_n>0} cos(Re t_n log x)/x^(Im t_n) (1.4)
(see section 1.18 of [7]). Here the numbers t_n in the oscillatory contributions are related to the complex Riemann zeros, defined as follows.
Riemann's zeta function, depending on the complex variable s, is defined as

ζ(s) = Π_p (1 − p^(−s))^(−1) = Σ_{n=1}^∞ n^(−s)   (Re s > 1) (1.5)

and by analytic continuation elsewhere in the s plane. It is known that the complex zeros (i.e., those with nonzero imaginary part) of ζ(s) lie in the "critical strip" 0 < Re s < 1, and the Riemann hypothesis states that in fact all these zeros lie on the "critical line" Re s = 1/2 (see Figure 1). The numbers t_n in (1.4) are defined by

ζ(1/2 + it_n) = 0   (Re t_n > 0). (1.6)

If the Riemann hypothesis is true, all the (infinitely many) t_n are real, and are the heights of the zeros above the real s axis. It is known by computation that the first 1,500,000,001 complex zeros lie on the line [9], as do more than one-third of all of them [10].
Each term in the sum in (1.4) describes an oscillatory contribution to the fluctuations of the density of primes, with larger Re t_n corresponding to higher frequencies.
Fig. 1 Complex s plane, showing the critical strip (shaded) and the complex Riemann zeros (there are trivial zeros at s = −2, −4, −6, ...).
Because of the logarithmic dependence, each oscillation gets slower as x increases. This slowing-down can be eliminated by the change of variable x = exp(u), in terms of which (1.4) gives

f(u) ≡ −√(e^u) [log(e^u) d_fl(e^u) + 1/(e^u(e^(2u) − 1))] = 2 Σ_{Re t_n>0} exp(−u Im t_n) cos(u Re t_n). (1.7)

If the Riemann hypothesis is true, f(u), constructed from the primes, has a discrete spectrum; that is, the support of its Fourier transform is discrete. If the Riemann hypothesis is false, this is not the case. The frequencies t_n are reminiscent of the decomposition of a musical sound into its constituent harmonics. Therefore there is a sense in which we can give a one-line nontechnical statement of the Riemann hypothesis: "The primes have music in them."
However, readers are cautioned against thinking that it would be easy to hear this prime music by constructing f(u) as defined in (1.7) and then converting it into an audio signal. In order for the human ear to hear the lowest Riemann zero, with t₁ = 14.13..., it would be necessary to play N ≈ 100 periods of cos(t₁u), requiring primes in the range 0 < x < exp(2πN/t₁) ≈ exp(45) ≈ 10¹⁹.
On this acoustic analogy, the heights t_n (hereinafter referred to simply as "the zeros") are frequencies. This raises the compelling question: frequencies of what? A natural answer would be: frequencies of some vibrating system. Mathematically, such frequencies (real numbers) are discrete eigenvalues of a self-adjoint (hermitean) operator. That the search for such an operator might be a fruitful route to proving the Riemann hypothesis is an old idea, going back at least to Hilbert and Pólya [7]; what is new is the physical interpretation of this operator and the detailed information now available about it.
The mathematics of almost all eigenvalue problems encountered in wave physics is essentially the same, but the richest source of such problems is quantum mechanics, where the eigenvalues are the energies of stationary states ("levels"), rather than frequencies as in acoustics or optics, and the operator is the hamiltonian. Reflecting this catholicity of context, we will refer to the t_n interchangeably as energies or frequencies, and the operator as H (Hilbert, Hermite, Hamilton...).
To help readers navigate through this review, here is a brief description of the sections. In section 2 we describe the basis of the Riemann-quantum analogy, which is an identification of the periodic orbits in the conjectured dynamics underlying the Riemann zeros, made by comparing formulae for the counting functions of the t_n and of asymptotic quantum eigenvalues. Section 3 explains the significance of the long periodic orbits in giving rise to universal (that is, system-independent) behaviour in classical and semiclassical mechanics and, by analogy, the Riemann zeros. The application of these ideas to the statistics of the zeros and quantum eigenvalues is taken up in section 4. Section 5 is a description of a powerful method for calculating the t_n (the Riemann-Siegel formula), with a physical interpretation in terms of resurgence of long periodic orbits that implies new interpretations of the periodic-orbit sum for quantum spectra. The properties of the conjectured dynamical system are listed in section 6, where it is speculated that the zeros are eigenvalues of some quantization of the dynamics generated by the hamiltonian H_cl = XP.
2. The Analogy. The basis of the analogy is a formal similarity between representations for the fluctuations of the counting functions for the Riemann zeros t_n and for vibration frequencies associated with a system whose rays are chaotic. For the t_n (assumed real), the counting function is defined for t > 0 as

N(t) ≡ Σ_n Θ(t − t_n), (2.1)

where Θ denotes the unit step. Central to our arguments is the fact that N(t) can be decomposed as follows [11]:

N(t) = ⟨N(t)⟩ + N_fl(t), (2.2)

where

⟨N(t)⟩ = θ(t)/π + 1, with θ(t) ≡ arg Γ(1/4 + it/2) − (t/2) log π ≈ (t/2) log(t/2π) − t/2 − π/8 + ..., (2.3)

and

N_fl(t) = (1/π) Im log ζ(1/2 + it). (2.4)

(The branch of the logarithm is chosen to be continuous, with N_fl(0) = 0.)
These two components can be interpreted as the smooth and fluctuating parts of the counting function. Here and hereinafter the notation ⟨·⟩ denotes a local average of a fluctuating quantity, over a range large compared with the length scales of the fluctuations but small compared with any secular variation. Implicit in such averaging is an asymptotic parameter; in the present case this is t, and the averaging range is large compared with the mean spacing of the zeros but small compared with t itself.
The formula for ⟨N⟩ can be obtained from the functional equation for ζ(s) [7]. It follows by differentiating the last member of (2.3) that the asymptotic density of the zeros is

⟨d(t)⟩ = d⟨N(t)⟩/dt ≈ (1/2π) log(t/2π), (2.5)
Fig. 2 Thick line: Divergent series (2.6) for the counting function fluctuations N_fl of the Riemann zeros, including all values of m and the first 50 primes p. Thin line: Exact calculation of N_fl from (2.4).
and therefore that the mean spacing between the zeros decreases logarithmically with increasing t. Underlying the formula for N_fl are the observations that the phase of ζ(1/2 + it) jumps by π on passing close to a zero, and that

⟨Im log ζ(1/2 + it)⟩ = 0;

that is, between the jumps in N_fl this function varies smoothly, implying that its average value is zero.
Now we substitute into (2.4) the Euler product (1.5), disregarding the fact that this does not converge in the critical strip, and obtain the divergent but formally exact expression

N_fl(t) = −(1/π) Σ_p Σ_{m=1}^∞ (1/(m p^(m/2))) sin(tm log p). (2.6)

This formula gives the fluctuations as a series of oscillatory contributions, each labelled by a prime p and an integer m, corresponding to the prime power p^m. Terms with m > 1 are exponentially smaller than those with m = 1. The oscillation corresponding to p has a "wavelength" (that is, a period in t)

Δt = 2π/log p. (2.7)

In order to discriminate individual zeros, sufficiently many terms must be included in the sum for this wavelength to be less than the mean spacing; from (2.5), this gives p < t/2π. When truncated in this way, the sum (2.6) can reproduce the jumps quite accurately for low-lying zeros, as Figure 2 shows, even though the complete sum diverges.
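The comparison in Figure 2 is easy to reproduce. The following Python sketch (our illustration, assuming the mpmath and sympy libraries are available) evaluates the truncated sum (2.6) against the exact fluctuation N(t) − ⟨N(t)⟩ computed from the first few zeros and (2.3):

```python
# Numerical illustration of (2.6): truncated prime sum versus the exact
# fluctuation N(t) - <N(t)>, with <N(t)> = theta(t)/pi + 1 as in (2.3).
from mpmath import mp, siegeltheta, pi, log, sin
from sympy import primerange

mp.dps = 15
zeros = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]  # first t_n

def N_fl_exact(t):
    return sum(1 for z in zeros if z <= t) - (siegeltheta(t) / pi + 1)

def N_fl_primes(t, pmax=229, mmax=3):   # first 50 primes: 2, 3, ..., 229
    s = mp.mpf(0)
    for p in primerange(2, pmax + 1):
        for m in range(1, mmax + 1):    # m truncated: higher m are tiny
            s += sin(t * m * log(p)) / (m * mp.mpf(p) ** (mp.mpf(m) / 2))
    return -s / pi

for t in [10, 15, 22, 26, 31, 34]:
    print(t, float(N_fl_exact(t)), float(N_fl_primes(t)))
```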
Consider now a classical dynamical system [12] in a configuration space with D freedoms, coordinates q = {q₁, ..., q_D} and momenta p = {p₁, ..., p_D}. Trajectories are generated by a hamiltonian function H(q, p) on the 2D-dimensional phase space {q, p}, whose conserved value is the energy E. In quantum physics, q and p are operators, with commutation relation [q, p] = iħ, where ħ ≡ h/2π is Planck's constant. Then H(q, p), augmented by boundary conditions, becomes a hermitean wave operator, whose eigenvalues, discrete if the system is bound, are the quantum energy levels E_n. More generally, this formalism applies to any wave system (e.g., water waves [13]) with coordinates q and wavenumber k, defined by a dispersion relation ω(q, k), the connection between the quantum and wave formalisms being

E = ħω,  p = ħk. (2.8)

Familiar wave equations appear when the commutation relations are implemented with p = −iħ ∂/∂q; Hamilton's equations are the corresponding ray equations (in optics these are the rays generated by Snell's law or Fermat's principle). For example, a locally uniform medium (H independent of q) with impenetrable walls corresponds to "quantum billiards," where waves are governed by the Helmholtz equation with Dirichlet boundary conditions, and the (straight) rays are reflected specularly at the walls [14]. Of special interest to us is the asymptotics of the eigenvalues E_n in the semiclassical limit ħ → 0, which from (2.8) is equivalent to the short-wavelength or high-frequency limit.
Waves, in particular the eigenfunctions of H, usually depend not on individual trajectories but on families of trajectories, whose global structure is an important determinant of the energy-level asymptotics. Of interest here is the case where the trajectories are chaotic [15, 16, 17], that is, where E is the only globally conserved quantity and neighbouring trajectories diverge exponentially. Then on a given energy shell (that is, for given E), the usual structure, and the one we will consider here, is that all initial conditions generate trajectories that explore the (2D − 1)-dimensional energy surface ergodically, except for a set, dense but of zero measure, of (one-dimensional) isolated unstable periodic orbits.
An important result of modern mathematical physics, central to the Riemann-quantum analogy, is that these isolated periodic trajectories determine the fluctuations in the counting function N(E) of the energy levels [18, 19, 20, 21]. Using the notation (2.2), with E replacing t, we can separate N(E) into its smooth and fluctuating parts ⟨N(E)⟩ and N_fl(E). The averaging is over an energy interval large compared with the mean level spacing but classically small, that is, vanishing with ħ. We state the formula for N_fl(E) and then explain it:

N_fl(E) ≈ (1/π) Σ_p Σ_{m=1}^∞ sin{m[S_p(E)/ħ − μ_p π/2]} / (m √|det(M_p^m − I)|). (2.9)

The symbol ≈ indicates that the formula applies asymptotically, that is, for small ħ. (In the special case of the Selberg trace formula [21], corresponding to waves on a compact surface of constant negative curvature, the formula is exact.) The index p labels primitive periodic orbits, that is, orbits traversed once. The index m labels their repetitions. Therefore, the two sums together include all periodic orbits. S_p(E) is the action of the primitive orbit p, that is,

S_p(E) = ∮_p p · dq. (2.10)

In terms of S_p, the period of the orbit is

T_p(E) = dS_p(E)/dE. (2.11)

The hyperbolic symplectic matrix M_p (the monodromy matrix) describes the exponential growth of deviations from p of nearby (linearized) trajectories, between successive
crossings of a Poincaré surface of section transverse to p. μ_p is the Maslov phase, determined [22] by the winding round p of the stable and unstable manifolds containing the orbit.
Physically, the appearance of periodic orbits is not surprising. The levels E_n, counted by N, are associated with stationary states, that is, states or modes that are time-independent. By the correspondence principle, their asymptotics should depend on phase space structures unchanged by evolution along rays, that is, the invariant manifolds with energy E. In the type of chaotic dynamics we are considering, there are two types of invariant manifold: the whole energy surface, which determines ⟨N(E)⟩ as we will see, and, decorating this, the tracery of periodic orbits, which determines the finer details of the spectrum as embodied in the fluctuations N_fl(E).
For long orbits, the determinant is dominated by its expanding eigenvalues, and, for large T_p,

√|det(M_p^m − I)| ≈ exp(mλ_p T_p/2), (2.12)

where λ_p is the Liapunov (instability) exponent of the orbit p. Thus, approximately,

N_fl(E) ≈ (1/π) Σ_p Σ_{m=1}^∞ (exp(−mλ_p T_p/2)/m) sin{m[S_p(E)/ħ − μ_p π/2]}. (2.13)

Now we can make the formal analogy with the corresponding formula (2.6) for the counting function fluctuations of the Riemann zeros:

                        Quantum            Riemann
Dimensionless actions   m S_p(E)/ħ         m t log p
Periods                 m T_p              m log p
Stabilities             (1/2) m λ_p T_p    (1/2) m log p
Asymptotics             ħ → 0              t → ∞
                                                     (2.14)
The nonappearance of ħ on the "Riemann" side indicates that the dynamical system underlying the zeros is scaling, in the sense that the trajectories are the same for all "energies" t, as in the most familiar scaling system, namely, quantum billiards, where, for a particle of mass m, energy and ħ enter only through the combination k = √(2mE)/ħ, and, for an orbit of length L_p, S_p/ħ = kL_p. In the analogy, primes acquire a new significance, as primitive periodic orbits, whose periods are log p. The index m in (2.6) then labels their repetitions.
The fact that all orbits have the same instability exponent (unity) indicates that the Riemann dynamics is homogeneously unstable, that is, uniformly chaotic. Moreover, the dynamics does not possess time-reversal symmetry. If it did, degeneracy of actions between each orbit and its time-reversed partner would lead to their contributing coherently to N_fl(t), so that for most orbits (those that are not self-retracing) the prefactor in (2.6) would be 2/π rather than 1/π.
An alternative form of the periodic-orbit sum (2.9), which will be useful later, is in terms of the level density

d(E) ≡ Σ_n δ(E − E_n) = dN(E)/dE ≡ ⟨d(E)⟩ + d_fl(E). (2.15)

Denoting primitive and repeated periodic orbits by the common index j (= {p, m}), we can write

d_fl(E) ≈ Σ_j A_j exp{iS_j(E)/ħ} + c.c., (2.16)

where for convenience we have absorbed the Maslov indices into the actions, and the amplitude A_j is

A_j = T_p / (2πħ √|det(M_p^m − I)|) (2.17)

as ħ → 0. For the Riemann zeros, the corresponding formula, from (2.6) and (2.14), has ħ = 1,

T_j = m log p,  A_j = −log p / (2π p^(m/2)), (2.18)

and is an identity rather than an asymptotic approximation.
There are two discordant features of the analogy [1], to which we will return. First, the exponential decay of long orbits in the quantum formula (2.13) is an approximation to the determinant in (2.9), whereas for the Riemann zeros the exponential in (2.6) is exact. Second, the negative sign in (2.6) indicates that when the Maslov phases mμ_p π/2 are reinstated in (2.13) their value should be π for all orbits, but this is hard to understand because if the index is π for a given orbit it should be 2π for the same orbit traversed twice.
The smooth part ⟨N(E)⟩ of the counting function is, to leading order in ħ, the number of phase space quantum cells (volume h^D) in the volume Ω(E) of the energy shell H(q, p) ≤ E. For billiards, Ω is proportional to the spatial volume confining the system (this is Weyl's asymptotics [23]). The mean level density is thus

⟨d(E)⟩ ≈ (1/h^D) dΩ(E)/dE. (2.19)
In the quantum formula (2.13), each orbit contributes an oscillation to N_fl(E), with energy "wavelength" (cf. (2.7))

ΔE_j = 2πħ/T_j. (2.20)

This should be compared with the mean spacing of the eigenvalues, which is the reciprocal of the mean level density and so (from (2.19)) of order ħ^D. An important implication is that the oscillation contributed by a given orbit has, asymptotically, a wavelength much larger than the mean level spacing. Thus in order to have a chance of resolving individual levels it is necessary to include at least all those orbits with periods up to

T_H(E) ≡ 2πħ⟨d(E)⟩. (2.21)

This evokes the time-energy uncertainty relation, so T_H is called the Heisenberg time. Asymptotically, T_H corresponds to very long orbits, or, in the Riemann case, large primes p_H(t) = t/2π (cf. the discussion following (2.7)). In what follows, this emphasis on long orbits will play a key role.
3. Long Orbits and Universality. In a classically chaotic system, the periodic orbits proliferate exponentially as their period increases [24], with density

(number of orbits with periods between T and T + dT)/dT ≈ exp(λT)/(λT)  as T → ∞. (3.1)

Here, λ is the topological entropy of the system. In the cases we are interested in, λ can be identified with a suitable average of the instability exponents of long periodic orbits (cf. (2.12)). In the Riemann case, where according to (2.14) the periodic orbits correspond to primes, (3.1) nicely reproduces the prime number theorem (1.2) and thereby reinforces the analogy (the repetitions, labelled by m, give exponentially smaller corrections).
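This correspondence can be checked in a few lines (a sketch of ours, assuming sympy):

```python
# With lambda = 1 and periods T = log p, the proliferation law (3.1) says
# the number of periodic orbits with period near T grows like e^T / T,
# which is the prime number theorem pi(x) ~ x/log x at x = e^T.
from math import exp
from sympy import primepi

for T in [6, 9, 12]:
    x = exp(T)
    print(T, int(primepi(x)), round(x / T))   # pi(e^T) versus e^T / T
```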
From (2.18), the proliferation in (3.1) cancels the decay of the intensities A_j² for long orbits. One way to write this is

lim_{T→∞} (1/T) Σ_j (2πħ A_j)² δ(T − T_j) = 1. (3.2)

This is the sum rule of Hannay and Ozorio de Almeida [25]. Its importance is threefold: first, by (2.17) the combination (2πħA_j)² = T_p²/|det(M_p^m − I)| does not contain ħ, so this is a classical sum rule. Second, the amplitudes A_j nevertheless have significance in quantum (i.e., wave) asymptotics, because they give the strengths of the contributions to spectral density fluctuations. Third, the rule is universal: (3.2) contains no specific feature of the dynamics; it holds for all systems that are ergodic. One way to appreciate the naturalness of this universality is to imagine that a long orbit with energy E, inscribed on the constant-energy surface H(q, p) = E, forms an intricate tracery that, with the slightest smoothing, could cover the surface uniformly with respect to the microcanonical (Liouville) measure. This "phase-space democracy" is the basis of Hannay and Ozorio de Almeida's derivation.
Expressed mathematically, this ergodicity-related sum rule corresponds to an eigenvalue at the origin (associated with the invariant measure) of the Perron-Frobenius operator that generates the classical flow in phase space. Equivalently [26, 27], it corresponds to a simple pole at s = 0 of the dynamical zeta function ζ_D(s), defined (for two-dimensional systems, for example) by

ζ_D(s) = Π_p [1 − exp(−sT_p)/|Λ_p|]^(−1), (3.3)

where Λ_p is the larger eigenvalue (|Λ_p| > 1) of the monodromy matrix M_p. The rest of the spectrum of the Perron-Frobenius operator, or equivalently the analytic structure of ζ_D(s) away from s = 0, determines the rate of approach to ergodicity; that is, it is related to the system-specific short-time dynamics.
Now recall that according to (2.21) the long orbits determine spectral fluctuations on the scale of the mean level separation. The universality of the classical sum rule suggests that the spectral fluctuations should also show universality on this scale. And by the Riemann-quantum analogy, we expect this spectral universality to extend to the Riemann zeros t_n.
It is in the statistics of the levels and Riemann zeros that the universality appears. This is to be expected, since ergodicity is a statistical property of long orbits. It is important to note that we are here considering individual systems and not ensembles, so statistics cannot be defined in the usual way, as ensemble averages. Instead, we rely on the presence of an asymptotic parameter (see the remarks after (2.4), and before (2.9)): high in the spectrum (or for large t in the Riemann case), there are many levels (or zeros) in a range where there is no secular variation, and it is this large number that enables averages to be performed. Universality then emerges in the limit ħ → 0 (or t → ∞) for correlations between fixed numbers of levels or zeros.
A mathematical theory of universal spectral fluctuations already exists in the more conventional context where statistics are defined by averaging over an ensemble. This is random-matrix theory [28, 29, 30, 31, 32], where the correlations between matrix eigenvalues are calculated by averaging over ensembles of matrices whose elements are randomly distributed, in the limit where the dimension of the matrices tends to infinity. Here the relevant ensemble is that of complex hermitean matrices: the "Gaussian unitary ensemble" (GUE). As will be discussed in the next section, it is precisely these statistics that apply to high eigenvalues of individual chaotic systems without time-reversal symmetry, and also to high Riemann zeros, in the sense that the spectral or Riemann-zero averages described in the previous paragraph coincide with GUE averages.
First, however, we give a very simple argument [33] showing that the approach to universality must be nonuniform. The classical sum rule (3.2) applies to long orbits but not to short ones, because these will reflect the specific dynamics of the system whose spectrum is being considered. Therefore, spectral features that depend on short orbits can be expected to be nonuniversal. From (2.20), these are fluctuations on the energy scale 2πħ/T₀, where T₀ is the period of the shortest orbit. This scale is asymptotically small but still large compared with the separation of order ħ^D between neighbouring eigenvalues. On this basis, we expect universality to be a good approximation for correlations between eigenvalues separated by up to O(1/ħ^(D−1)) mean spacings, but not for larger separations. For the Riemann zeros, T₀ = log 2 (cf. the identification (2.14)), whereas the mean separation between zeros is 2π/log(t/2π). Therefore universality for zeros near t should break down beyond about log(t/2π)/log 2 mean spacings. We regard the observation of the breakdown of random-matrix universality for the Riemann zeros [34], in accordance with this prediction, as giving powerful support to the analogy with quantum or wave eigenvalues.
4. Periodic-Orbit Theory for Spectral Statistics. In discussing statistics, it will be simplest to measure intervals between eigenvalues or Riemann zeros in units of the local mean spacing. We denote such intervals by x, and the corresponding levels or zeros, referred to a local origin, by x_n; in these units, ⟨d⟩ = 1. We will mainly be concerned with statistics that are bilinear in the level density, the simplest being the pair correlation of the density fluctuations, defined in [31], in the sense of a distribution, as

R(x; y) ≡ probability density of separations x of levels or zeros close to a scaled position y
        = (1/N) Σ_{n,m} δ(x − x_n + x_m). (4.1)

(In the second member, the sum is over a stretch of N levels near y, with N ≫ 1.) R gives the correlation between levels near E, or, correspondingly, Riemann zeros near t; for simplicity of notation, we will henceforth not indicate these base levels (denoted y in (4.1)).
Closely related to R is the form factor K(τ) (the name comes from crystallography), defined as

K(τ) ≡ ∫_{−∞}^{∞} dx [R(x) − 1] exp(2πixτ), (4.2)

where the sum is as in (4.1). Here the variable τ (conjugate to x) is the scaled time

τ ≡ T/T_H, (4.3)

where T_H is the Heisenberg time (equation (2.21)). With the definitions given, both R and K tend to 1 at long range; the term δ(x) in R ensures that this requirement is compatible with (4.2).
Other statistics that are bilinear in d can be expressed in terms of K or R. A useful one is the number variance:

Σ²(x) ≡ variance of the number of levels or zeros in an interval where the mean number is x
      = 2 ∫₀^x dy (x − y)[R(y) − 1]. (4.4)
The pair correlation function (4.1) is determined by the spectral density fluctuations, for which there is the semiclassical formula (2.16). Our aim in this section is to explain how to employ this observation to calculate these bilinear statistics, obtaining not only the universal random-matrix limit but also the corrections to this corresponding to large eigenvalue or zero separations, or short times. The argument is subtle and has several levels of refinement, of which we start with the simplest [3, 5, 33].
We will calculate K(τ). The first step is to substitute (2.16) into (4.1), thereby obtaining a double sum over periodic orbits. Since all the actions are positive, we can simplify the averages (over a small interval of eigenvalues or along the critical line) using

⟨exp{i[S_j + S_k]/ħ}⟩ = 0, keeping only ⟨exp{i[S_j − S_k]/ħ}⟩. (4.5)

The dimensionless intervals x that we will be considering may be large but must correspond to classically small energy ranges, so we can approximate the actions using

S_j(E₀ + x/⟨d⟩) ≈ S_j(E₀) + xT_j/⟨d⟩ + O(x²), (4.6)

where S_j, T_j, and ⟨d⟩ are evaluated at E₀. Elementary manipulations, and evaluating the integral in (4.2), give the asymptotic (that is, small-ħ) form factor as the double sum

K(τ) ≈ (1/⟨d⟩²) Σ_{j,k} A_j A_k exp{i[S_j − S_k]/ħ} δ(τ − (T_j + T_k)/2T_H). (4.7)

It is convenient now to consider separately the diagonal part K_diag of the sum (terms with j = k) and the off-diagonal part K_off (terms with j ≠ k). For K_diag, we have

K_diag(τ) = (1/⟨d⟩²) Σ_j A_j² δ(τ − T_j/T_H). (4.8)

In the limit ħ → 0, τ fixed, the sum over orbits can be evaluated using the Hannay-Ozorio sum rule (3.2), giving

K_diag(τ) ≈ τ. (4.9)

This is universal: all details of the specific dynamics have disappeared. Because of the Riemann-quantum analogy, the same behaviour should hold for the pair correlation of the Riemann zeros. Here we make contact with the seminal work of Montgomery [35], who indeed proved (4.9) in that case.
Now we observe that in random-matrix theory the exact form factor of the GUE is

K_GUE(τ) = τ Θ(1 − τ) + Θ(τ − 1). (4.10)

(Θ is the unit step.) For later reference, the GUE pair distribution function, obtained from (4.2), is

R_GUE(x) = δ(x) + 1 − sin²(πx)/(πx)². (4.11)

Evidently the approximation (4.9), based on periodic orbits, captures exactly the random-matrix behaviour for |τ| < 1, without invoking any random matrices. This led Montgomery [35] to conjecture (following a suggestion of Dyson and independently of any semiclassical argument) that for the Riemann zeros K(τ) = K_GUE(τ) in the limit t → ∞.
Clearly, (4.9) does not give the random-matrix result when |τ| > 1. Indeed it fails drastically by not satisfying the requirement, necessary for any form factor representing a discrete set of points (eigenvalues or zeros), that K(τ) → 1 as τ → ∞. This failure reflects the importance of K_off, and implies that for large τ (long orbits) the off-diagonal terms in the double sum (4.7) cannot vanish through incoherence, as might naively be thought, but must conspire by destructive coherent interference to cancel the τ term from K_diag and replace it by 1. This is consistent with the Montgomery conjecture, which implies

K_off(τ) → 1 − τ  (τ > 1) (4.12)

in the limit t → ∞.
One reason why K_diag alone is inadequate is the proliferation of orbits: for sufficiently long times, there will be many pairs of orbits whose actions differ by less than ħ, so that they cannot be regarded as incoherent in (4.7). This phenomenon, that in some appropriate sense the large-τ limit of the double sum must be 1, is the semiclassical sum rule. Originally [33] the rule was obtained by a different argument, and was mysterious. Now there is a better understanding of the mechanism by which the cancellation occurs [36, 37]; we will discuss it later.
Indeed, for the Riemann zeros, (4.12) can be derived [4] using a conjecture of Hardy and Littlewood [38] concerning the pair distribution of the prime numbers. These correlations are important because if the logarithms of the primes (primitive orbit periods) were pairwise uncorrelated, K_off, being the average of a sum of random phases, would be zero. The Hardy-Littlewood conjecture is that π₂(k; X), defined as the number of primes p ≤ X such that p + k is also a prime, has the following asymptotic form for large X:

π₂(k; X) ≈ C(k) X/log²X, (4.13)

with C(k) = 0 if k is odd and, for k even,

C(k) = 2 Π_{q>2} [1 − 1/(q − 1)²] Π_{p|k, p>2} (p − 1)/(p − 2), (4.14)

where the q-product includes all odd primes, and the p-product includes all odd prime divisors of k. Pairwise randomness would correspond to C(k) = 1. It can be demonstrated [4] that as K → ∞

Σ_{k=1}^K C(k) ≈ K, (4.15)

and so on average

C(k) ≈ 1 (4.16)

for large k. This in turn was shown to imply (4.12) in the limit t → ∞ [4].
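The averaging statement (4.15)-(4.16) is easily tested numerically (a sketch of ours, assuming sympy):

```python
# Numerical check of (4.15)-(4.16): the Hardy-Littlewood constants C(k)
# average to 1 over k.
from sympy import primerange, factorint

twin = 1.0
for q in primerange(3, 10 ** 6):
    twin *= 1 - 1 / (q - 1) ** 2          # the product over all odd primes

def C(k):
    if k % 2:
        return 0.0
    c = 2 * twin
    for p in factorint(k):
        if p > 2:
            c *= (p - 1) / (p - 2)
    return c

K = 20000
print(sum(C(k) for k in range(1, K + 1)) / K)   # -> close to 1
```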
We have seen that K_diag is universal in the limit ħ → 0; that is, it is independent of the specific features of the dynamics. These reappear, in a dramatically nonuniform way, in the approach to the limit. To see this, note first that it is only for short orbits, that is, when τ < 1, that universality breaks down. Next, choose a τ* corresponding to a time much longer than the shortest period T₀ and shorter than the Heisenberg time T_H, that is,

T₀/T_H ≪ τ* ≪ 1. (4.17)

We continue to use the Hannay-Ozorio sum rule for τ > τ*, the limit (4.12) for K_off ensuring the correct GUE formula (4.10) for τ > 1, but take the contributions from orbits with period T_j < 2πħ⟨d⟩τ* directly from (4.8). Thus

K(τ) ≈ (1/⟨d⟩²) Σ_{T_j < τ*T_H} A_j² δ(τ − T_j/T_H) + K_GUE(τ) Θ(τ − τ*) (4.18)
Fig. 3 Number variance Σ²(x) (4.4) of the Riemann zeros t_n near n = 10¹², calculated from (4.18) (with τ* = 1/4), (2.14), and (2.18) (thin line), compared with Σ²(x) computed from numerically calculated zeros by Odlyzko [39, 40] (thick line); all the zeros are close to t = 2.677 × 10¹¹, and their smoothed density is ⟨d⟩ = 3.895.... Note the resurgence resonances (cf. (4.23)) associated with the lowest zeros t₁, t₂, and t₃, and that the theory fails to capture small, fast oscillations in the data.
is a candidate for a semiclassical formula for the form factor. Later we will see that this is not quite correct: the proper incorporation of the off-diagonal terms in the double sum introduces a small but important modification near τ = 1. For the moment, we continue to discuss (4.18).
This formula for K(τ), applied to the Riemann zeros, is extremely accurate. When employed in conjunction with (4.4) to calculate the number variance of the zeros [34], it reproduces almost perfectly this statistic as computed from numerical values of high zeros [39, 40]. Figure 3 shows that the agreement extends from the random-matrix regime (small x) to the far nonuniversal regime. Note however the tiny oscillatory deviations; we will return to these later.
For the pair correlation, we have

R(x) ≈ R_GUE(x) + R_c(x). (4.19)

Remarkably, it is possible to calculate the correction R_c explicitly and in closed form at this level of approximation. The formula was obtained for both the Riemann zeros and for general systems in [41], and independently in [42] for the Riemann zeros. From (4.18), (4.2), and (2.14), we get

R_c(x) ≈ (1/(2(π⟨d⟩)²)) Σ_{m,p} (log²p/p^m) cos(xm log p/⟨d⟩), (4.20)

where ⟨d⟩ is given by (2.15). The sum is insensitive to the value of τ* provided this is not too small, so we set τ* = 1. Next, we write

Σ_{m=1}^∞ (log²p/p^m) exp(iξm log p) = log²p exp(iξ log p)/p + Σ_{m=2}^∞ (log²p/p^m) exp(iξm log p). (4.21)

This corresponds to separating into contributions from primitive orbits (first term) and repetitions (second term). In the repetitions, the sum over m can be evaluated explicitly. For the first term, we use [7]

Σ_p log²p / p^s = (d²/ds²) log ζ(s) − Σ_p Σ_{m=2}^∞ m log²p / p^(ms). (4.22)

Some tricky but elementary manipulations now give

R_c¹(x) = (1/(2(π⟨d⟩)²)) Re[(d²/ds²) log ζ(s)|_{s=1−iξ} − Σ_p log²p / (p exp(−iξ log p) − 1)²], (4.23)

where

ξ ≡ x/⟨d⟩. (4.24)
This formula has a very interesting structure, worth discussing in detail. First, ξ → 0 in the limit t → ∞ for any fixed x, and so the pole in the zeta function cancels the singularity 1/ξ². Second, the prefactor 1/⟨d⟩² ensures that the correction R_c¹ is asymptotically small in comparison with R_GUE (equation (4.11)). Third, the dependence on ξ shows that R_c¹ involves the separation between zeros in the original variable t = Im s (heights of zeros along the critical line), rather than the scaled separation x; this means that structural features of R_c¹ appear asymptotically at larger x than the oscillations in R_GUE, as expected for nonuniversal features of correlations. Fourth, the contributions from repetitions (the sum over p in (4.23)) are less significant than those from primitive orbits (the first term), as Figure 4 shows. Fifth, and most important, the appearance of ζ(1 − iξ) indicates an astonishing resurgence property of the zeros: in the pair correlation of high Riemann zeros, the low Riemann zeros appear as resonances. This is illustrated in Figure 5. The resonances also appear as peaks in the nonuniversal part of the number variance (Figure 3).
For generic dynamical systems without time-reversal symmetry, it can be verified directly that the analogue of (4.23) is [41, 43]

R_c¹(x) = (1/(2(π⟨d⟩)²)) Re (d²/ds²) log ζ_D(s)|_{s=iξ}, (4.25)

where ζ_D is the dynamical zeta function defined in (3.3), and now ξ = x/(ħ⟨d⟩). Again, the pole in the zeta function (now at s = 0) cancels the singularity 1/ξ². In this case, the resonances discussed above are caused by singularities of log ζ_D(s) away from s = 0, that is, by subdominant eigenvalues of the Perron-Frobenius operator.
Fig. 4 Nonuniversal correction to the pair correlation of the Riemann zeros, calculated from (4.23) as R_c¹ scaled(ξ) ≡ 2(π⟨d⟩)² R_c¹(x). Parts (a) and (c) include repetitions; (b) and (d) omit repetitions.
Fig. 5 Pair correlation R(x) of the Riemann zeros, calculated "semiclassically" (thick line) from (4.19) and (4.23), for zeros near n = 10⁵, and random-matrix behaviour R_GUE(x) (thin line); note the first nonuniversal resurgence resonance near x = 21.
Now we return to the tiny oscillatory deviations noticeable in Figure 3, reflecting small errors in (4.19) and (4.23). These are again associated with the approach to the t → ∞ limit of the form factor, rather than the limit itself: whereas (4.23) captures the appropriate large-t asymptotics of K_diag, the GUE-motivated replacement (4.12) incorporates only the t → ∞ limit of K_off.
For the Riemann zeta function, this can be corrected as follows. We have already noted above that the formula (4.12) for K_off can be derived using the smoothed expression (4.16) for the Hardy-Littlewood conjecture. The large-t asymptotics we
Fig. 6 Number variance Σ²(x) (4.4) of the Riemann zeros t_n near n = 10⁹, calculated from (4.19) and (4.26), including the off-diagonal correction (4.27)-(4.28) (full line), compared with Σ²(x) computed from numerically calculated zeros by Odlyzko [39, 40] (dots); all the zeros are close to t = 3.719 × 10⁸, and their smoothed density is ⟨d⟩ = 2.848....
seek comes from using the original unsmoothed form (4.14) [5, 41]. The result is that

R_c(x) ≈ R_c¹(x) + R_c²(x), (4.26)

in which R_c¹ is given by (4.23), and

R_c²(x) = (1/(2(π⟨d⟩)²)) |ζ(1 + iξ)|² b(ξ) cos(2πx), (4.27)

where

b(ξ) = Π_p [1 − (1 − p^(iξ))(1 − p^(−iξ))/(p − 1)²] (4.28)

is a convergent product over the primes. As with the diagonal term (cf. the discussion after (4.24)), convergence as ξ → 0 is ensured by the pole of the zeta function.
This second correction, although small, does incorporate the small oscillations, through the trigonometric functions with argument 2πx. Asymptotically (that is, as t → ∞), these oscillations are fast (cf. (4.23)-(4.24)) in comparison with the variations from the resonances of the zeta function. When employed in conjunction with (4.4), the correction accurately reproduces the oscillatory deviation (Figure 3) in the number variance of the zeros; this is illustrated in Figure 6.
Unfortunately, this derivation of (4.27) for the Riemann zeros cannot be imitated for general chaotic dynamical systems because we have no a priori knowledge of the correlations between the actions of different periodic orbits, analogous to the Hardy-Littlewood conjecture for the primes. It is possible to get some information by working backwards and, assuming that the GUE expression (4.10) or (4.11) describes the pair correlation of eigenvalues in generic chaotic systems without time-reversal symmetry, deriving the universal limiting form of the implied action correlations [37]. (This procedure essentially follows an analogous derivation for the primes themselves, assuming the Montgomery conjecture [44].) An interesting feature of this approach is that it leads to predictions about classical trajectories based on the distribution of quantum energy levels. However, it gives no information about the deviations from random-matrix universality that are the focus of our concern here.
Recently a theory has been developed that overcomes these difficulties [5, 41]. It is based on two observations.
First, as already noted above, quantum eigenvalues (or Riemann zeros) are resolved by the trace formula if the sum (2.9) over periodic orbits is truncated near the time T_H (this will be made more precise in the next section). Hence, if the trace formula thus truncated generates the approximation Ñ(E) to the counting function, the quantities Ẽ_n defined by

Ñ(Ẽ_n) = n − 1/2 (4.30)

should be good semiclassical approximations to the exact eigenvalues. The theory is based on calculating the correlations in this approximate spectrum.
Second, the diagonal terms K_diag(τ) are asymptotically dominant in the form factor for τ < 1, corresponding to times less than T_H. This implies that orbits with periods less than T_H make contributions that are effectively uncorrelated; treating them in this way allows the correlations in the Ẽ_n spectrum to be computed exactly.
For chaotic systems without time-reversal symmetry, the result is that when x ≫ 1 the deviations from the GUE formula can also be represented in the form (4.26), (4.25), and (4.27), where ζ(1 + iξ) is replaced by ζ_D(iξ) (defined by (3.3)) and, in (4.28), b(ξ) is replaced by a function (4.31) constructed from the residue of the pole of ζ_D(s) at s = 0, the q-hypergeometric function ₂φ₁ [45], and the pth elements of the product over primitive orbits in (3.3) (again with ξ = x/ħ⟨d⟩).
The formal similarity between the results for the Riemann zeros and for the semiclassical eigenvalues is striking, and reinforced by the fact that the derivation of (4.31) just outlined leads precisely to (4.28) when applied to the zeros. Indeed, by Fourier-transforming (4.26) with respect to t, this can be regarded as a heuristic derivation of the Hardy-Littlewood conjecture. In the same way, Fourier-transforming the corresponding result for dynamical systems with respect to 1/ħ leads to a classical periodic-orbit correlation function corresponding directly to the Hardy-Littlewood conjecture and reducing to the universal form conjectured in [37] in the long-time limit. It is a challenge to derive these correlations within classical mechanics.
We finish this section on connections between statistics of the Riemann zeros and quantum eigenvalues by remarking that the results for pair correlations extend to correlations of higher order. Thus Montgomery's conjecture for the two-point correlation of the Riemann zeros generalizes to all n-point correlations. Specifically, the irreducible n-point correlation function R̃_n(x₁, ..., x_n) tends asymptotically to the corresponding GUE expression:

R̃_n(x₁, ..., x_n) → det[S_ij], (4.33)

where the elements S_ij of the n × n matrix S are given by

S_ij = sin{π(x_i − x_j)} / {π(x_i − x_j)}. (4.34)

The analogue of Montgomery's theorem for the diagonal contributions to R̃_n was proved for n = 3 [46] and then for all n ≥ 2 [47]. The off-diagonal contributions were calculated using a generalization of the Hardy-Littlewood conjecture for n = 3 [48] and then for all n ≥ 2 [49]. In all cases the results confirm the conjecture (4.33) and (4.34). The nonuniversal deviations from the GUE formulae (4.33)-(4.34) were calculated for general n using the method outlined above, and take a form (related to the structure of ζ(s) as s → 1) directly analogous to that already discussed. As expected, this extends to the higher order correlations of quantum eigenvalues.
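Although the paper's comparisons use Odlyzko's high Riemann zeros, the GUE side of these statistics is easy to generate directly. The following Python sketch (our illustration, assuming numpy) samples GUE matrices, unfolds the bulk eigenvalues to unit mean spacing, and compares the histogram of separations with the sine-kernel prediction (4.11):

```python
# Pair correlation of unfolded GUE eigenvalues versus 1 - (sin(pi x)/(pi x))^2.
import numpy as np

rng = np.random.default_rng(0)
N, trials = 200, 200
seps = []
for _ in range(trials):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2
    e = np.linalg.eigvalsh(H)
    mid = e[N // 4: 3 * N // 4]               # bulk of the spectrum
    mid = mid / np.mean(np.diff(mid))         # unfold to unit mean spacing
    d = mid[None, :] - mid[:, None]
    seps.extend(np.abs(d[np.triu_indices(len(mid), 1)]))

bins = np.linspace(0, 3, 31)
hist, _ = np.histogram(seps, bins=bins)
x = (bins[:-1] + bins[1:]) / 2
density = hist / (trials * (N // 2) * (bins[1] - bins[0]))   # approximate R(x)
print(np.c_[x, density, 1 - (np.sin(np.pi * x) / (np.pi * x)) ** 2])
```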
5. Riemann-Siegel Formulae. A powerful stimulus to the development of analogies between quantum eigenvalues and the Riemann zeros has been the Riemann-Siegel formula for ζ(s). As explained in [7], this very effective way of computing the zeros (especially high ones), employed in most numerical computations nowadays, was discovered by Siegel in the 1920s among papers left by Riemann after his death sixty years earlier. We present the formula in an elementary way, chosen to facilitate our subsequent exploration of its intricate interplay with quantum mechanics. Riemann's derivation [11, 50] was different, and a remarkable achievement, because although it was one of the first applications of his method of steepest descent for integrals it was more sophisticated than most applications today, in that the saddle about which the integrand is expanded is accompanied by an infinite string of poles.
It is a consequence of the functional equation satisfied by ζ(s) [11] that the following function Z(t) is even, and real for real t:

Z(t) ≡ exp{−iθ(t)} ζ(1/2 + it). (5.1)

Here θ(t) is the function appearing in the smoothed counting function (2.3) for the zeros. Naive substitution of the Dirichlet series (1.5) gives the formal expression

Z(t) = Σ_{n=1}^∞ exp{i[θ(t) − t log n]} / √n. (5.2)

This is doubly unsatisfactory. First, it does not converge, a defect shared with its relative (2.6), similarly originating in the inadmissibility of (1.5) in the critical strip. Second, it is not manifestly real as Z(t) must be.
Both defects can be eliminated by truncating the series (5.2) at a finite n*(t) and resumming the tail. The truncation n*(t) is chosen to be the term whose phase θ(t) − t log n is stationary with respect to t; the asymptotic formula for θ (last member of (2.3)) gives

n*(t) = Int[√(t/2π)]. (5.3)
A crude resummation [1] using the Poisson summation formula leads to a result equivalent to the "approximate functional equation" [11]:

Z(t) ≈ 2 Σ_{n=1}^{n*(t)} cos{θ(t) − t log n} / √n. (5.4)

This is a remarkable example of resurgence: the resummed terms in the tail n > n*(t) are the complex conjugates of the early terms 1 ≤ n ≤ n*(t), so that the series in (5.4), called the "main sum" of the Riemann-Siegel expansion, is real, like the exact Z(t). The zeros generated by the first term alone (n = 1), that is, cos θ(t) = 0, have the correct mean density (cf. (2.3)). Higher terms shift the zeros closer to their true positions, and introduce the random-matrix fluctuations. It is worth mentioning that the zeros obtained by including successive terms in (5.4) cannot be regarded as the eigenvalues of hermitean operators that approximate the still-unknown Riemann operator, because these partial sums of the main sum each have zeros for complex t [6].
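The main sum is simple to program. The following sketch (our illustration, assuming mpmath, whose siegeltheta and siegelz give θ(t) and the exact Z(t)) shows its accuracy:

```python
# The Riemann-Siegel main sum (5.4) versus the exact Z(t).
from mpmath import mp, siegeltheta, siegelz, cos, log, sqrt, pi, floor

mp.dps = 20

def Z_main(t):
    theta = siegeltheta(t)
    nstar = int(floor(sqrt(t / (2 * pi))))     # truncation (5.3)
    return 2 * sum(cos(theta - t * log(n)) / sqrt(n)
                   for n in range(1, nstar + 1))

for t in [100, 1000, 10000]:
    print(t, float(Z_main(t)), float(siegelz(t)))
```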
Unfortunately, the truncation (5.3) introduces another defect: the sum is a discontinuous function of t, unlike Z(t), which is analytic. The discontinuities can be eliminated by formally expanding the difference between (5.2) and the sum in (5.4) about the truncation limit n*(t), to obtain the correction terms in (5.4). This will depend on the fractional part of √(t/2π) as well as its integer part n*, so it is convenient to define

a(t) ≡ √(t/2π) = n*(t) + (1/2)(1 − z). (5.5)

The expansion is in powers of 1/a (henceforth we do not write the t-dependences explicitly), and gives

Z(t) ≈ 2 Σ_{n=1}^{n*} cos{θ(t) − t log n}/√n + ((−1)^(n*+1)/√a) Σ_{r=0}^∞ C_r(z)/a^r. (5.6)

This procedure was devised in [4], where it was used to calculate the first correction term C₀(z), and elaborated in [51] in a study of the higher corrections.
The sum over r is the Riemann-Siegel expansion. Its terms C_r(z) are constructed from derivatives (up to the 3rth) of

C₀(z) = cos{π(z²/2 + 3/8)} / cos{πz}, (5.7)

with coefficients determined by an explicit recurrence relation involving the coefficients (Bernoulli numbers) in the Stirling expansion of θ(t) for large t. The next few coefficients are

C₁(z) = −C₀^(3)(z)/(96π²), ... (5.8)
(superscripts in brackets denote derivatives). Gabcke [50] calculated C_r(z) for r ≤ 12. Later terms get very complicated; for example, C₁₂(z) involves enormous rational coefficients (with numerators of more than thirty digits) multiplying high derivatives of C₀(z).
An elaborate asymptotic analysis [51] shows that the high orders ("asymptotics of the asymptotics") can be represented compactly as a "decorated factorial series" (5.10), whose terms for large r combine factorials with trigonometric functions of z ((5.11); the roles of sine and cosine interchange between even and odd r). Comparison with numerically computed C_r(z) (evaluated using special techniques for the derivatives of C₀(z)) shows that these formulae capture the fine details of the Riemann-Siegel coefficients, even for small r.
The factorial in (5.10) means that the sum over r in (5.6) is divergent in the manner familiar in asymptotics: the terms get smaller and then diverge. Asymptotics folklore suggests, and Borel summation (implemented analytically and checked numerically) confirms, that the optimal accuracy obtainable from the Riemann-Siegel formula (without further resummation) corresponds to truncating the sum at the least term. This has

r*(t) ≈ 2πt, (5.12)

and the resulting error is of order exp(−πt):

Z(t) ≈ 2 Σ_{n=1}^{n*} cos{θ(t) − t log n}/√n + ((−1)^(n*+1)/√a) Σ_{r=0}^{r*} C_r(z)/a^r + O(exp(−πt)). (5.13)

The accuracy is very high: even for the lowest Riemann zero, exp(−πt₁) ≈ 10⁻²⁰. Nevertheless, it is possible to do better, as we shall see later.
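Continuing the sketch above, including just the r = 0 correction from (5.5)-(5.7) already reduces the error of the main sum by orders of magnitude (again assuming mpmath):

```python
# Main sum plus the first Riemann-Siegel correction C0(z) from (5.5)-(5.7).
from mpmath import mp, siegeltheta, siegelz, cos, log, sqrt, pi, floor

mp.dps = 20

def Z_rs(t):
    theta = siegeltheta(t)
    a = sqrt(t / (2 * pi))
    nstar = int(floor(a))
    z = 1 - 2 * (a - nstar)                   # from (5.5)
    main = 2 * sum(cos(theta - t * log(n)) / sqrt(n)
                   for n in range(1, nstar + 1))
    C0 = cos(pi * (z * z / 2 + mp.mpf(3) / 8)) / cos(pi * z)
    return main + (-1) ** (nstar + 1) * C0 / sqrt(a)

for t in [100, 1000]:
    print(t, float(Z_rs(t) - siegelz(t)))     # residual error is tiny
```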
Now we turn to the quantum analogues of the Riemann-Siegel formula for classically chaotic systems with D > 1, as envisaged in [1], explored in detail in [52], and derived in [53]. These studies are motivated by the hope that such an effective method of computing Riemann zeros might lead to a useful way to calculate quantum eigenvalues.
First, the counterpart of Z(t) in (5.1) is a function with zeros at the quantum energy levels E_n; this is the quantum spectral determinant

Δ(E) ≡ A(E) det(E − H) = A(E) Π_n (E − E_n), (5.14)

where H is the hermitean wave operator (section 2) and the real factor A is introduced to make the product converge. Hermiticity implies that Δ is real for real E; this "quantum functional equation" is analogous to the functional equation for ζ(s), which implies that Z(t) is real for real t.
To find the counterpart of the Dirichlet series (5.2), we note that the quantum eigenvalue counting function can be written (cf. (2.4)) as

N(E) = −(1/π) lim_{ε→0⁺} Im Tr log(H − E − iε). (5.15)

Now the decomposition into smooth and fluctuating parts, together with the periodic-orbit sum (2.9), leads to

Δ(E) ∝ B(E) Re[exp{−iπ⟨N(E)⟩} Π_p exp(−Σ_{m=1}^∞ exp{imS_p(E)/ħ} / (m √|det(M_p^m − I)|))], (5.16)

where B(E) is real and nonzero for real E and where we have absorbed the Maslov indices into S.
Expanding the product over primitive orbits p and the exponential of the sum over repetitions m, we obtain a series of terms that can be labelled by

n ↔ {m_p}. (5.17)

Here m_p represents the number of repetitions of the orbit p. Each term corresponds to a sum over actions:

S̃_n(E) = Σ_p m_p S_p(E). (5.18)

The expansions lead to

Δ(E) ∝ Re[exp{−iπ⟨N(E)⟩} Σ_n D_n exp{iS̃_n(E)/ħ}], (5.19)

with an explicit form for the coefficients D_n that we do not give here [52]. As (5.18) indicates, the terms n correspond to composite orbits, or pseudo-orbits, consisting of combinations of repetitions of different periodic orbits. We label the composite orbits so that increasing n corresponds to increasing period

T̃_n(E) = dS̃_n(E)/dE = Σ_p m_p T_p(E), (5.20)

with n = 0 representing no orbit at all, that is, S̃₀ = 0, for which the coefficient D₀ = 1.
composite orbits n related to primitive orbits p in the same way that the integers n
are related to the primes p (cf. (1.5)). Moreover (5.19) diverges, like the sum (2.9)
from which it was obtained, and it is not manifestly real as the exact (E) must be.
Our interpretation of the Riemann-Siegel formula suggests a similar resummation of
the tail of the series (5.19) after truncation at the term whose phase is stationary with
respect to E. This term|the counterpart of n(t) in (5.3)|represents the composite
orbit dened by
d
The corresponding period T (E)is
where TH(E) is the Heisenberg time (2.20).
Comparison with the Riemann-Siegel main sum in (5.4) suggests that the sum of
the composite orbits with approximately, the complex conjugate of the
sum of the orbits with In fact, this relation can be derived using arguments
based on analytic continuation with respect to E [53]. These arguments also indicate
a more detailed correspondence: between the sums of groups of terms with periods
X. The resulting \Riemann-Siegel lookalike" formula is
(For a different derivation, see [54].)
With (5.23) it is possible to reproduce some low-lying quantum eigenvalues, and of course the fact that the sum is finite is a major advantage over the infinite divergent series (2.9) and (5.19). However, for a chaotic system with D > 1 the number of terms with T̃_n ≤ T_H/2 is exponentially large in 1/ħ, so the Riemann-Siegel lookalike is not as useful for calculating high quantum eigenvalues as (5.4) is for calculating Riemann zeros. The origin of the difference is the exponential proliferation of periodic orbits (and composite orbits), together with the fact that ⟨d⟩ increases as 1/ħ^D, whereas for the Riemann zeros, whose classical counterpart appears to be quasi-one-dimensional, ⟨d⟩ increases as log t. Moreover, (5.23) is discontinuous at the energies of composite orbits with period T_H/2.
No way has yet been found to implement the obvious suggestion of cancelling the discontinuities in the quantum formula (5.23) by a series of corrections analogous to the terms involving C_r(z) in the Riemann-Siegel expansion (5.6). However, a different completion of the Riemann-Siegel main sum was discovered ([55], generalizing an idea in [4]), that does have a quantum analogue.
In this alternative approach to the resummed Dirichlet series, the abrupt truncation is replaced by a smoothed cutoff involving the complementary error function Erfc and an optimization parameter K. An argument involving analytic continuation in t leads to

Z(t) ≈ 2 Re Σ_{n=1}^∞ (exp{i[θ(t) − t log n]}/√n) · (1/2) Erfc(K log(n/a(t))) + corrections, (5.24)

with an explicit expression for the correction terms. With K chosen appropriately, this smoothed sum can reproduce Z(t) to an accuracy equivalent to that of the Riemann-Siegel main sum together with several correction terms. The corrections in (5.24) form an explicit asymptotic series enabling Z(t) to be calculated with an accuracy of order exp(−πt²); this improvement over the Riemann-Siegel exp(−πt) is possible because (5.24) involves the higher transcendental function Erfc, whereas the Riemann-Siegel expansion involves only elementary functions. Several related representations of Z(t) are now known [56, 57, 58].
The improved representation (5.24), together with the explicit correction terms,
can readily be adapted to the quantum spectral determinant. The smoothed version
of the Riemann-Siegel lookalike (5.23) is obtained by an argument involving analytic
continuation with respect to 1/ℏ, leading to a formula in which N_i denotes the ith
derivative of N with respect to 1/ℏ. A numerical test
of this formula for the hyperbola billiard (a classically chaotic system with D =2)
shows that it can reproduce quantum eigenvalues with high accuracy, even resolving
near-degenerate pairs of levels [59].
Finally, we note an important clue to the Riemann dynamics, hidden in the
asymptotics (5.10), (5.11) of the Riemann-Siegel expansion (5.6). It concerns the
implied small exponential exp{-πt} (cf. the error in (5.13)). The same exponential
appears in the asymptotics of the gamma functions in θ(t) (equation (2.3)). Quantum
mechanics suggests this is the "phase factor" corresponding to a periodic orbit with
imaginary action (an "instanton" in physics jargon). If we write exp{-πt} = exp{iS/ℏ}
(remembering ℏ = 1 and E = t for the Riemann zeros), the implied action is S = iπt
and the implied period is

    T = ∂S/∂E = ∂S/∂t = iπ.

So, it seems that as well as the real periodic orbits in (2.14), with periods m log p,
there are complex periodic orbits, with periods that are multiples of iπ.
6. Spectral Speculations. Although we do not know the conjectured Riemann
operator H whose eigenvalues (all real) are the heights tn of the Riemann zeros, the
analogies presented so far suggest a great deal about it. To summarize:
a. H has a classical counterpart (the \Riemann dynamics"), corresponding to a
hamiltonian flow, or a symplectic transformation, in a phase space.
b. The Riemann dynamics is chaotic, that is, unstable and bounded.
c. The Riemann dynamics does not have time-reversal symmetry. In addition,
we note the recent discovery [60, 61] of modified statistics of the low zeros for the
ensemble of Dirichlet L-functions, associated with a symplectic structure.
d. The Riemann dynamics is homogeneously unstable.
e. The classical periodic orbits of the Riemann dynamics have periods that are
independent of \energy" t, and given by multiples of logarithms of prime numbers.
In terms of symbolic dynamics, the Riemann dynamics is peculiar, and resembles
Chinese: each primitive orbit is labelled by its own symbol (the prime p), in contrast
to the usual situation where periodic orbits can be represented as words made of
letters in a finite alphabet.
f. The Maslov phases associated with the orbits are also peculiar: they are all π.
The result appears paradoxical in view of the relation between these phases and the
winding numbers of the stable and unstable manifolds associated with periodic orbits
[22], but finds an explanation in a scheme of Connes [62].
g. The Riemann dynamics possesses complex periodic orbits (instantons) whose
periods are multiples of i.
h. For the Riemann operator, leading-order semiclassical mechanics is exact: as
in the case of the Selberg trace formula [21], ζ(1/2 + it) is a product over classical
periodic orbits, without corrections.
i. The Riemann dynamics is quasi-one-dimensional. There are two indications of
this. First, the number of zeros less than t increases as t log t; for a D-dimensional
scaling system, with energy parameter κ(E) proportional to 1/ℏ, the number of energy
levels increases as κ(E)^D. Second, the presence of the factor p^{m/2} in the counting
function fluctuation formula (2.6), rather than the determinant in the more general
Gutzwiller formula (2.9), suggests that there is a single expanding direction and no
contracting direction.
j. The functional equation for ζ resembles the corresponding relation, a consequence
of hermiticity, for the quantum spectral determinant.
We have speculated [6] that the conjectured Riemann operator H might be some
quantization of the following extraordinarily simple classical hamiltonian function
H_cl(X, P) of a single coordinate X and its conjugate momentum P:

    H_cl(X, P) = XP.    (6.1)
Now we outline the reasons for this tentative association of XP with ζ(s).
At the classical level, (6.1) has a hyperbolic point at the origin in the infinite
phase (X, P) plane, and generates the following equations of motion and trajectories:

    Ẋ = X,  Ṗ = -P;  X(t) = X(0) e^t,  P(t) = P(0) e^{-t}.    (6.2)

Thus classical evolution is uniformly unstable, with stretching in X and contraction in
P. Furthermore, the motion has the desired lack of time-reversal symmetry: velocity
cannot be reversed (Ẋ is tied to X in (6.2)) and so the orbit cannot be retraced.
At the semiclassical level, we can try to estimate the smoothed counting function
hN(E)i of energy levels En generated by the quantum version of (6.1). For this
it is necessary to specify a value of Planck's constant ℏ. We choose ℏ = 1; other
choices simply rescale the energies. ⟨N(E)⟩ is the area A under the constant-energy
contour XP = E, measured in units of the "Planck cell" area 2πℏ = 2π, with a
Fig. 7 Phase space for XP, with cutoffs for semiclassical regularization.
Maslov index correction given by -θ/4π, where θ is the angle turned through along
the orbit in phase space (this correction gives the "1/2" in the quantization of the
harmonic oscillator). We encounter the immediate difficulty that A is infinite: motion
generated by (6.1) is unbounded, and so does not give discrete quantum energies.
As will be clear later, closing the phase space to make the motion bounded is a central
unsolved problem. In the interim, a simple (perhaps the simplest) expedient is to
regularize by truncating in X and P as indicated in Figure 7. The result (unaltered
by representing the Planck cell by a rectangle instead of a square) is that hN(E)i is
precisely the asymptotics of the smoothed counting function for the Riemann zeros
(last member of (2.3)), including the term 7/8, with t replaced by the energy E.
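The regularized area computation behind this result can be made explicit. The following is a sketch under our own assumptions: cutoffs X ≥ l_x, P ≥ l_p with l_x l_p = 2π (the symbols l_x and l_p are introduced here for illustration and are not from the original text):
\[
A(E) = \int_{l_x}^{E/l_p}\Big(\frac{E}{X}-l_p\Big)\,dX
     = E\log\frac{E}{l_x l_p} - E + l_x l_p
     = E\log\frac{E}{2\pi} - E + 2\pi,
\]
so that, including the Maslov correction $-1/8$ for the quarter turn of the orbit,
\[
\langle N(E)\rangle \approx \frac{A(E)}{2\pi}-\frac{1}{8}
 = \frac{E}{2\pi}\Big(\log\frac{E}{2\pi}-1\Big)+\frac{7}{8},
\]
which reproduces the smoothed counting function of the Riemann zeros, including the 7/8.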
At the quantum level, the simplest formally hermitean operator corresponding to
(6.1) is

    H = (XP + PX)/2 = -iℏ(X d/dX + 1/2).    (6.3)
The formal eigenfunctions, satisfying H ψ_E(X) = E ψ_E(X) (6.4), are

    ψ_E(X) = A X^{-1/2 + iE/ℏ}.    (6.5)
We note the appearance of the power X^s, of the kind appearing in the Dirichlet series for ζ
(as integers) and the Euler product (as primes), with the symmetrization (6.3)
placing s on the critical line.
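As a quick check (a routine verification we add here, not part of the original text), (6.5) does satisfy (6.4):
\[
H\,X^{-1/2+iE/\hbar}
 = -i\hbar\Big(X\frac{d}{dX}+\frac{1}{2}\Big)X^{-1/2+iE/\hbar}
 = -i\hbar\Big(-\frac{1}{2}+\frac{iE}{\hbar}+\frac{1}{2}\Big)X^{-1/2+iE/\hbar}
 = E\,X^{-1/2+iE/\hbar}.
\]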
It is evident that XP is simply a canonically rotated version of the inverted
harmonic oscillator P² - X², which in turn is a complexified version of the usual
harmonic oscillator P² + X². Some of these connections have been noted before [63,
64, 65, 66, 67]. The first-order operator XP is the simplest representative of this
class, with the monomials (6.5) avoiding the complications of the parabolic cylinder
eigenfunctions of P² - X².
To evaluate the corresponding momentum eigenfunction ψ̄_E(P) (Fourier transform
of (6.5)), it is necessary to specify a continuation across X = 0; the simplest
choice, for a reason to be given later, is to make the wavefunction even in X, that is,
to replace X by |X|. The transform (6.6) can then be evaluated in terms of gamma functions.
It follows that, up to factors that can easily be made symmetrical, the position and
momentum eigenfunctions are each other's time-reverses. Thus we find a physical
interpretation of the function θ(t) (defined in (2.3)) at the heart of the functional
equation (cf. (5.1)) for ζ(s).
The major problem remaining is to find boundary conditions that would convert
XP into a well-defined hermitean operator with discrete eigenvalues. This is equivalent
to specifying the way in which parts of the (X, P) plane are connected so as to
compactify the (quantum and classical) motion. Some hints in this direction follow.
Our observations about the complex periodic orbits of the Riemann dynamics
(see the last paragraph of section 5) suggest that X and -X should be identified.
The reason is that the complex orbits of (6.2), obtained by continuing t to complex values,
have period 2πi, which becomes the desired iπ (equation (5.27)) on identifying X with -X.
To proceed further, we consider the symmetries of XP, in the hope (so far unrealized)
of superposing solutions of (6.4) acted on by operations in the symmetry
group, with each solution multiplied by the appropriate group character. An obvious
symmetry is dilation: XP is invariant under

    X → KX,  P → P/K.    (6.7)
From (6.2), K corresponds to evolution after time log K. This implies that the operator
generates dilations, in the same way that the momentum operator generates
translations, and the following series of transformations makes this obvious:

    ψ(KX) = exp{log K · d/d log X} ψ(X) = K^{X d/dX} ψ(X)
          = K^{-1/2} K^{1/2 + X d/dX} ψ(X) = K^{-1/2} exp{(i/ℏ) H log K} ψ(X).
One possibility is to choose the integer dilations K = m and the characters unity.
Then the superposition of solutions (6.5) does contain ζ(1/2 - iE) as a factor, but there
seems no reason to impose the condition that this must vanish. Moreover, the set of
integer dilations does not form a group (the inverse multiplications 1/m are missing).
Another possibility, closely related to the ideas of [62], is to use not all integers
but the group of integers under multiplication (mod k). This would have two
advantages. First, it involves only integer dilations. Second, including the characters
χ(n) of this group (sets of k complex numbers with unit modulus) opens the possibility
of widening the interpretation of the zeros as eigenvalues of XP, to include the zeros of Dirichlet
L-functions. These are defined by the series

    L(s, χ) = Σ_{n≥1} χ(n) n^{-s}.

(The special case χ = 1 corresponds to ζ(s).) It is conjectured that for all these L-functions
the complex zeros lie on the line Re s = 1/2. On this interpretation, each L-function
corresponds to a different self-adjoint extension of XP under identification of
positions X that are related by dilations in the group of integers under multiplication
(mod k). An analogy is with the quantum mechanics of a particle in a periodic
potential (e.g., an electron in a crystal): from the Bloch-Floquet theorem, solutions
of the underlying differential equation are all periodic up to a phase factor exp(iα);
each choice of α is a different self-adjoint extension, and generates a discrete spectrum.
The analogy is imperfect, because α is continuous, whereas the L-functions cannot be
continuously parameterized. A closer analogy is with quantization on a torus phase
space [69], where for topological reasons the permitted phases are discrete.
The dynamics (6.2) suggests that the system might be closed by connecting the
asymptotic positions with the asymptotic momenta. Then particles flowing out at
X = ∞ would be reinjected at P = ∞. Related to this is a class of dilations (6.7)
where K is H-dependent (of course these are still symmetries of H). Specifically, a
choice of K with KX = P on the energy shell yields the canonical transformation
corresponding to exchange of X and P (the more familiar exchange (X, P) → (P, -X) does not
leave XP invariant). A short calculation gives the transformed quantum wavefunction
in terms of the untransformed momentum wavefunction, up to a factor with exponent 1/4.
We do not know how to convert this "quantum exchange" into an effective boundary
condition, but note its connection with an intriguing identity, obtained from
the momentum wavefunction formula (6.6) and the functional equation for ζ(s), which
relates the wavefunction at X to the wavefunction at P_X = 2π/X.
If (only) the minus were a plus, this would be a condition generating the Riemann
zeros.
We can sum up these scattered remarks about XP by returning to the properties
listed at the beginning of this section. XP is consistent with point a, part of b (XP
dynamics is unstable but not bounded), and with c, d, g, h, i, and j. Concerning point e,
the appearance of times that are logarithms of integers begins to be plausible in view
of the association between dilation and evolution, but primes do not appear in any
obvious way. We have no explanation of property f.
--R
Riemann's zeta function: A model for quantum chaos?
More than one third of the zeros of Riemann's zeta-function are on σ = 1/2
The theory of the Riemann zeta-function
Mathematical Methods of Classical Mechanics
The Hamiltonian method and its application to water waves
Regularity and chaos in classical mechanics
Regular and irregular motion
An Introduction
Chaos in classical and quantum mechanics
Distribution of eigenfrequencies for the wave equation in a finite domain: III.
Chaos on the pseudosphere
Maslov indices in the Gutzwiller trace formula
Spectra of Finite Systems
Introduction to Ergodic Theory
Random Matrices and the Statistical Theory of Energy Levels
A Brownian-motion model for the eigenvalues of a random matrix
Statistical theory of the energy levels of complex systems IV
Statistical Theories of Spectra Fluctuations
in Mathematical and Computational Methods in Nuclear Physics
Semiclassical theory of spectral rigidity
Semiclassical formula for the number variance of the Riemann zeros
The pair correlation of zeros of the zeta function
The semiclassical sum rule and Riemann's zeta function
Correlations in the actions of periodic orbits derived from quantum chaos
Some problems in
On the distribution of spacings between zeros of the zeta function
The 1020th zero of the Riemann zeta function and 175 million of its neigh- bours
Gutzwiller's trace formula and spectral statistics: Beyond the diagonal approximation
On the Berry Conjecture
random matrix theory
Pair correlation of zeros and primes in short intervals
Cambridge University Press
On the Triple Correlation of Zeros of the Zeta Function
Zeros of principal L-functions and random-matrix theory
Random matrix theory and the Riemann zeros I: Three- and four-point correlations
Neue Herleitung und explizite Restabschätzung der Riemann-Siegel-Formel
The Riemann-Siegel formula for the zeta function: High orders and remainders
A rule for quantizing chaos?
Semiclassical quantization of multidimensional systems
A new approximation for ζ(1/2 + it)
An asymptotic representation for the Riemann zeta function on the critical line
An exponentially-improved Gram-type formula for the Riemann zeta function
An asymptotic representation for ζ(1/2 + it)
Calculation of spectral determinants
Zeros of Zeta Functions
symmetry and zeta functions
Formule de trace en geometrie non-commutative et hypothese de Riemann
Eigenstate structures around a hyperbolic point
Phase of the Riemann zeta function and the inverted harmonic oscillator
The phase of the Riemann zeta function
The Riemann hypothesis and the Hamiltonian of a quantum mechanical system
Introduction to Analytic Number Theory
Quantum boundary conditions for torus maps
Asymptotics of the Pair Correlation of Riemann Zeros
--TR
--CTR
T. M. Dunster, Uniform asymptotic approximations for incomplete Riemann Zeta functions, Journal of Computational and Applied Mathematics, v.190 n.1, p.339-353, 1 June 2006 | number theory;spectral asymptotics |
325700 | A framework for combining analysis and verification. | We present a general framework for combining program verification and program analysis. This framework enhances program analysis because it takes advantage of user assertions, and it enhances program verification because assertions can be refined using automatic program analysis. Both enhancements in general produce a better way of reasoning about programs than using verification techniques alone or analysis techniques alone. More importantly, the combination is better than simply running the verification and analysis in isolation and then combining the results at the last step. In other words, our framework explores synergistic interaction between verification and analysis. In this paper, we start with a representation of a program, user assertions, and a given analyzer for the program. The framework we describe induces an algorithm which exploits the assertions and the analyzer to produce a generally more accurate analysis. Further, it has some important features: it is flexible: any number of assertions can be used anywhere; it is open: it can employ an arbitrary analyzer; it is modular: we reason with conditional correctness of assertions; it is incremental: it can be tuned for the accuracy/efficiency tradeoff. | Introduction
Predicate abstraction [9] is a successful method of abstract
interpretation. The abstract domain, constructed from
a given finite set of predicates over program variables, is
intuitive and easily, though not necessarily efficiently, computable
within a traversal method of the program's control
flow structure. More recently, the success of predicate abstraction
has been enhanced by a process of discovery of the
abstract domain, generally known as CEGAR or "counterexample guided abstraction refinement".
One major disadvantage of predicate abstraction, as is
true for many other realizations of abstract interpretation, is
that in principle, the process of abstraction is performed at
every step of the traversal phase. Indeed, the survey section
in [19] states that "abstractions are often defined over small
parts of the program", and that "abstractions for model-checking
often over-approximate".
While it is generally easy to optimize somewhat by performing
abstraction here and there (eg: several consecutive
assignments may be compressed and abstraction performed
according to one composite assignment, such as in the BLAST
system [11]), there has not been a systematic way of doing
this. Another disadvantage, arising partly because the abstract
description is limited to a fixed number of variables,
is that this ad-hoc method would not be compositional. For
example, [2] required an elaborate extension of predicate abstraction
which essentially considers a second set of variables
(called "symbolic constants"), in order to describe the
behaviour of a function, in the language of predicate abstrac-
tion. This provided a limited form of compositionality.
In this paper, we present a general proof method of program
reasoning based on predicate abstraction in which the
process of abstraction is intermittent, that is, approximation
is performed only at selected program points, if at all. There
is no restriction of when abstraction is performed, even
though termination issues will usually restrict the choices.
The key advantages are that (a) the abstract domain required
to ensure convergence of the algorithm can be minimized,
and (b) the cost of performing abstractions, now being inter-
mittent, is reduced.
For example, to reason that executing x := x+1; x := x+1 increments x by 2,
one needs to know that x already holds its initial value plus one just before
the final assignment. Also, consider proving that c = 2n holds on termination for the
following program snippet:

#0# c := 0; i := 0;
#1# while (i < n) do
#2#   c := c+1;
#3#   i := i+1;
#4#   c := c+1
#5#
A textbook Hoare-style loop invariant for the loop is c = 2i. Having this proposition
in predicate abstraction would, however, not suffice; one in fact needs to know that
c = 2i+1 holds in between the two increments to c. Thus in general, a
proper loop invariant is useful only if we could propagate its
information exactly.
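To make explicit what exact propagation must supply here, the following Hoare-style chain (our sketch, based on the snippet as reconstructed above) threads the invariant through the loop body:
\[
\{c = 2i \wedge i < n\}\; c := c+1\; \{c = 2i+1\}\; i := i+1\; \{c = 2i-1\}\; c := c+1\; \{c = 2i\}
\]
The intermediate assertions c = 2i+1 and c = 2i-1 are precisely the facts that a fixed predicate set containing only c = 2i and i < n cannot represent.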
A main challenge to having exact propagation is that reasoning
will be required about the strongest-postcondition operator
associated with an arbitrarily long program fragment.
This essentially means dealing with constraints over an unbounded
number of variables describing the states between
the start and end of the program fragment at hand. The advantages
in terms of efficiency, however, are significant: fewer
predicates needed in the abstract domain, and also, less frequent
execution of the abstraction operation.
An important additional feature of our proof method is
that it is compositional. We represent a proof as a Hoare-style
triple which, for a given program fragment, relates the
input values of the variables to the output values. This is represented
as a formula, and in general, such a formula must
contain auxiliary variables in addition to the program vari-
ables. This is because it is generally impossible to represent
the projection of a formula using a predefined set of vari-
ables, or equivalently, it is not possible to perform quantifier
elimination. Consequently, in order to have unrestricted
composition of such proofs, it is (again) necessary to deal
with an unbounded number of variables.
The latter part of this paper will introduce the technology
of Constraint Logic Programming (CLP) as a basis for efficient
implementation. Briefly, the advantages of CLP are (a)
handles terms containing anonymous primary variables and
constraints on these variables and also an arbitrary number
of auxiliary variables, (b) efficiently represents the projection
of such terms, and (c) handles backtracking.
In summary, we show that our method provides a flexible
combination of abstraction and Hoare-style reasoning with
predicate transformers and loop-invariants, that is composi-
tional, and that its practical implementation is feasible.
1.1 Further Related Work
An important category of tools that use program verification
technology have been developed within the framework of
the Java Modelling Language (JML) project. JML allows
one to specify a Java method's pre- and post-conditions, and
class invariants. Examples of such program verification tools
are: Jack [4], ESC/Java2 [7], and Krakatoa [15]. All these
tools employ weakest precondition/strongest postcondition
calculi to generate proof obligations which reflect whether
the given post-conditions and class invariants hold at the
end of a method, whenever the corresponding pre-conditions
are valid at the procedure's entry point. The resulting proof
obligations are subsequently discharged by theorem provers
such as Simplify [7], Coq [3], PVS [17], or HOL light [10].
While these systems perform exact propagation, they depend
on user-provided loop invariants, as opposed to an abstract
domain.
Recently there have emerged systems based on abstract
interpretation, and in particular, on predicate abstraction.
Some examples are BLAST [11], SLAM [1], MAGIC [5],
and Murphi- [8], amongst others. While abstract interpretation
is central, these systems employ a further technique of
automatically determining the abstract domain needed for
a given assertion. This technique is called CEGAR, see eg.
the description in [6], based on iteratively refining the abstract
domain using the failure of the abstraction in the
previous iteration. These systems do not perform exact propagation
in a systematic way.
2. Preliminaries
Apart from a program counter k, whose values are program
points, let there be n system variables x̄ = x_1, . . . , x_n over
domains D_1, . . . , D_n respectively. In this paper, we shall
use just two example domains, that of integers, and that of
integer arrays. We assume the number of system variables is
larger than the number of variables required by any program
fragment or procedure.
DEFINITION 1 (States and Transitions). A system state (or
simply state) is of the form (k, d_1, . . . , d_n) where k is a
program point and d_i ∈ D_i, 1 ≤ i ≤ n, are values for the
system variables. A transition is a pair of states.
In what follows, we define a language of first-order formulas.
Let V denote an infinite set of variables, each of
which has a type in D_1, . . . , D_n, let S denote a set of functors,
and P denote a set of constraint symbols. A term
is either a constant (0-ary functor) in S or of the form
f(t_1, . . . , t_m), where f is an m-ary functor and each t_i is a term. A
primitive constraint is of the form c(t_1, . . . , t_m),
where c is an m-ary constraint symbol and each t_i is a term.
A constraint is constructed from primitive constraints
using logical connectives in the usual manner. Where Y is a
constraint, we write Y(x̄) to denote that Y possibly refers to
variables in x̄, and we write ∃̄_x̄ Y(x̄) to denote the existential
closure of Y(x̄) away from x̄, that is, with all variables of Y
other than x̄ quantified.
A substitution is a mapping which simultaneously replaces
each variable in a term or constraint by some expression.
Where e is a term or constraint, we write eθ to
denote the result of applying θ to e. A special kind of substitution
is a renaming, which maps each variable in a given
sequence, say x̄, into the corresponding variable in another
given sequence, say ȳ. We write [x̄ ↦ ȳ] to denote such a
mapping.
Another special kind of substitution is a grounding of an
expression; this maps each variable in the expression into a
value in its respective domain. Thus the effect of applying a
grounding substitution θ to an expression e is to obtain a ground
instance eθ. We write [[e]] to denote
the set of all possible groundings of e.
3. Constraint Transition Systems
A key concept is that a program fragment p operates on a
sequence of anonymous variables, each corresponding to a
system variable at various points in the computation of p.
In particular, we consider two sequences x̄ = x_1, . . . , x_n
and x̄_1 = x'_1, . . . , x'_n of anonymous variables to denote the system
values before executing p and at the "target" point(s) of p,
respectively. Typically, but not always, the target point is the
terminal point of p. Our proof obligation or assertion is then
of the form

    {Y(x̄)} p {Y_1(x̄_1)}
where Y and Y 1 are constraints over the said variables, and
possibly including new variables. Like the Hoare-triple, this
states that if p is executed in a state satisfying Y, then all
states at the target points (if any) satisfy Y 1 . Note that, unlike
the Hoare-triple, p may be nonterminating and Y 1 may refer
to the states of a point that is reached infinitely often. We
will formalize all this below.
For example, let there be just one system variable x, let
p be <0> x := x + 1 <1>, and let the target point be <1>.
Then {true} p {x_1 = x + 1}, meaning p is the successor
function on x. Similarly, if p were the (perpetual) program
<0> while (true) do <1> x := x + 2, and
if <1> were the target point, then {true} p {∃z (x_1 = x + 2z)},
that is, any state (1, x_1) at point <1> satisfies ∃z (x_1 = x + 2z).
This shows, amongst other things, that the parity of x always
remains unchanged.
Our proof method accommodates concurrent programs of a
fixed number of processes. Where we have n processes, we
shall use as a program point, a sequence of n program points
so that the i th program point is one which comes from the i th
process, 1 ≤ i ≤ n.
We next represent the program fragment p as a transition
system which can be executed symbolically. The following
definition serves two main purposes. First, it is a high
level representation of the operational semantics of p, and in
fact, it represents the exact trace semantics of p. Second, it
is an executable specification against which an assertion can
be checked.
DEFINITION 2 (Constraint Transition System). A constraint
transition of p is a formula

    p(k, x̄) → p(k_1, x̄_1), Y

where k and k_1 are variables over program points, each of
x̄ and x̄_1 is a sequence of variables representing a system
Figure 1. Even counts
Process 1:
while (true) do
  <0> x1 := x2 + 1;
  <1> await (x2 = 0 or x1 < x2);
  <2> x1 := 0

Process 2:
while (true) do
  <0> x2 := x1 + 1;
  <1> await (x1 = 0 or x2 < x1);
  <2> x2 := 0

Figure 2. Two Process Bakery
Figure 3. CTS of Two Process Bakery
state, and Y is a constraint over x̄ and x̄_1, and possibly some
additional auxiliary variables.
A constraint transition system (CTS) of p is a finite set of
constraint transitions of p.
Consider for example the program in Section 1; call it
even. Figure 1 contains a CTS for even.
Consider another example: the Bakery algorithm with
two processes in Figure 2. A CTS for this program, call it
bak, is given in Figure 3. Note that we use the first and
second arguments of the term bak to denote the program
points of the first and second process respectively.
Clearly the variables in a constraint transition may be re-named
freely because their scope is local to the transition.
We thus say that a constraint transition is a variant of another
if one is identical to the other when a renaming substitution is
performed. Further, we may simplify a constraint transition
by renaming any one of its variables x to an expression y,
provided that this preserves all groundings of the constraint
transition. For example, we may simplify the last constraint
transition in Figure 3
by replacing the variable y_1 in the original transition with 0.
The above formulation of program transitions is familiar
in the literature for the purpose of defining a set of transi-
tions. What is new, however, is how we use a CTS to define
symbolic transition sequences, and thereon, the notion of a
proof.
By similarity with logic programming, we use the term
goal to denote a literal that can be subjected to an unfolding
process in order to infer a logical consequence.
DEFINITION 3 (Goal). A query or goal of a CTS is of the
form p(k, x̄), Y
where k is a program point, x̄ is a sequence of variables
over system states, and Y is a constraint over some or all
of the variables x̄, and possibly some additional variables.
The variables x̄ are called the primary variables of this goal,
while any additional variable in Y is called an auxiliary
variable of the goal.
Thus a goal is just like the conclusion of a constraint
transition. We say the goal is a start goal if k is the start
program point. Similarly, a goal is a target goal if k is the
target program point. Running a start goal is tantamount to
asking the question: which values of x̄ that satisfy Y
will lead to a goal at the target point(s)? The idea is that we
successively reduce one goal to another until the resulting
goal is at a target point, and then inspect the results.
Next we define what it means for a CTS to prove a goal.
DEFINITION 4 (Proof Step, Sequence and Tree). Let there
be a CTS for p, and let G = p(k, x̄), Y be a goal for this. A
proof step from G is obtained via a variant p(k, x̄') → p(k_1, ȳ), Y_1
of a transition in the CTS in which all the variables are fresh.
The result is a goal G' of the form

    p(k_1, ȳ), Y ∧ x̄ = x̄' ∧ Y_1

providing the constraints Y ∧ x̄ = x̄' ∧ Y_1 are satisfiable.
A proof sequence is a finite or infinite sequence of proof
steps. A proof tree is defined from proof sequences in the
obvious way. A tree is complete if every internal node representing
a goal G is succeeded by nodes representing every
goal obtainable in a proof step from G .
Figure 5. Proof Tree of Even Counts Program
Consider again the CTS in Figure 1, and suppose we wish to prove
that the counter ends at 2 when n = 1. There is in fact only one proof sequence
from the start goal even(0, i, n, c), n = 1 ∧ c = 0,
or equivalently, even(0, i, 1, 0). This proof sequence is shown
in Figure 5, and note that the counter, represented in the last
goal by the variable c_2, has the value 2.
Hereafter we shall consider that a program and its CTS
are synonymous. Given a program p, we say that x̄ are the
start variables of p to denote that x̄ are the variables in the
first constraint transition of p.
DEFINITION 5 (Assertion). Let p be a program with start
variables x̄, and let Y be a constraint. Let x̄_t denote a sequence
of variables representing system states not appearing
in p or Y. (These represent the target values of the system
variables.) An assertion for p wrt x̄_t is of the form

    {Y(x̄)} p(k, x̄) {Y_1(x̄_t)}.

In particular, when k is the start program point, we may
abbreviate the assertion using the notation:

    p(k, x̄), Y |= Y_1(x̄_t).
It is intuitively clear what it means for an assertion to
hold. That is, execution from every instance of p(k, x̄), Y
cannot lead to a target state where the property Y_1(x̄_t) is
violated.
In the example above, we could prove the assertion
even(0, i, n, c), c = 0 |= c_t = 2n, where it is understood that the final
variable c_t corresponds to the start variable c. Note that the
last occurrence of n in the assertion means that we are comparing
c_t with the initial and not the final value of n (though in
this example, the two are in fact the same).
We now state the essential property of proof sequences:
THEOREM 1. Let a CTS for p have the start point k and target
point k_t, and let x̄ and x̄_1 each be sequences of variables
Figure 4. Proof Tree of 2-Process Bakery Algorithm (Partially Shown)
over system states. The assertion {Y(x̄)} p {Y_1(x̄_1)}
holds if for any goal of the form p(k_t, x̄_1), Y' appearing
in a proof sequence from the goal p(k, x̄), Y(x̄), the following
holds: Y' implies Y_1(x̄_1).
The above theorem provides the basis of a search method,
and what remains is to provide a means to ensure termination
of the search. Toward this end, we next define the concepts
of subsumption and coinduction, which allow the
(successful) termination of proof sequences. However, these
are generally insufficient. In the next section, we present our
version of abstraction whose purpose is to transform a proof
sequence so that it is applicable to the termination criteria of
subsumption and coinduction.
3.1 Subsumption
Consider a finite and complete proof tree from some start
goal. A goal G in the tree is subsumed if there is a different
path in the tree containing a goal G' such that [[G]] ⊆ [[G']].
The principle here is simply memoization: one may terminate
the expansion of a proof sequence while constructing
a proof tree when encountering a subsumed goal.
3.2 Coinduction
The principle here is that, within one proof sequence, the
proof obligation associated with the final goal may assume
that the proof obligation of an ancestor goal has already
been met. This can be formally explained as a principle of
coinduction (see eg: Appendix B of [16]). Importantly, this
simple form of coinduction does not require a base case nor
a well-founded ordering.
We shall simply demonstrate this principle by example.
Suppose we had the transition p(0, x) → p(0, x'), x' = x + 2,
and we wished to prove the assertion p(0, x) |= even(x_t - x),
that is, the difference between x and its final value is even.
Consider the derivation step:

    p(0, x), true  →  p(0, x'), x' = x + 2.

We may use, in the latter goal, the fact that the earlier goal
satisfies the assertion. That is, we may reduce the obligation
of the latter goal to

    even(x_t - x') ∧ x' = x + 2 implies even(x_t - x).

It is now a simple matter of inferring whether this formula
holds.
In general practice, the application of coinduction testing
is largely equivalent to testing if one goal is simply an instance
of another.
3.3 Compositionality
It is intuitively clear that since our proof obligation relates
the start and final values of a program, or equivalently, it
obeys the "assume-guarantee" paradigm [18], that the proof
method is sequentially compositional. We thus omit a formal
treatment of CTS where programs directly invoke other
programs. Instead, in the next section, we provide a simple
example.
4. Abstraction
In the literature on predicate abstraction, the abstract description
is a specialized data structure (a monomial over the given predicates), and
the abstraction operation serves to propagate such a structure
through a small program fragment (a contiguous group
of assignments, or a test), and then obtain another structure.
The strength of this method is in the simplicity of using
a finite set of predicates over the fixed number of program
variables as a basis for the abstract description.
We choose to follow this method. However, our abstract
description shall not be a distinguished data structure. In
our method, the abstract description of a goal is itself a goal.
DEFINITION 6 (Abstraction). An abstraction A is applied
to a goal. It is specified by a program point pc(A), a sequence
of variables var(A) corresponding to a subset of
the system variables, and finally, a finite set of constraints
pred(A) over var(A), called the "predicates" of A .
Let A be an abstraction and G be a goal p(k, x̄), Y where
k = pc(A). Let x̄_1 denote the subsequence of x̄ corresponding
to the system variables var(A), and let x̄_2 denote the remaining
subsequence of x̄. Without losing generality, we assume
that x̄_1 is an initial subsequence of x̄, that is, x̄ = x̄_1 · x̄_2.
Then the abstraction A(G) of G by A is:

    p(k, Z̄ · x̄_2), (∃̄_x̄_2 Y) ∧ Y_2[x̄_1 ↦ Z̄]

where Z̄ is a sequence of fresh variables renaming x̄_1, and
Y_2 is the conjunction of the finite set of constraints

    {c ∈ pred(A) : Y implies c}.
For example, let A be such that pc(A) = 0, var(A) consists of
the first system variable, and pred(A) = {x_1 < 0, x_1 ≥ 0}. That is, the first variable
is to be abstracted into a negative or a nonnegative value.
Let G be p(0, [x_1, x_2, x_3]), x_1 = 1 ∧ x_2 = x_1. Then the abstraction
A(G) is a goal of the form p(0, [Z, x_2, x_3]), x_2 = x_1 ∧ x_1 = 1 ∧ Z ≥ 0,
which can be simplified into p(0, [Z, x_2, x_3]), x_2 = 1 ∧ Z ≥ 0.
Note that the original goal had ground instances
p(0, [1, 1, n]) for all n, while the abstracted goal has the instances
p(0, [m, 1, n]) for all n and all nonnegative m. Note
that the second variable x_2 has not been abstracted even
though it is tightly constrained to the first variable x_1. Note
further that the value of x_3 is unchanged, that is, the abstraction
would allow any constraint on x_3, had the example goal
contained such a constraint, to be propagated.
LEMMA 1. Let A be an abstraction and G a goal. Then [[G]] ⊆ [[A(G)]].
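On the example above, the lemma can be checked directly (a one-line verification we add for concreteness):
\[
[[G]] = \{p(0,[1,1,n]) : n\} \;\subseteq\; \{p(0,[m,1,n]) : n,\; m \ge 0\} = [[A(G)]].
\]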
The critical point is that the abstraction of a goal has the
same format as the goal itself. Thus an abstract goal has the
expressive power of a regular goal, while yet containing a
notion of abstraction that is sufficient to produce a finite-state
effect. Once again, this is facilitated by the ability to
reason about an unbounded number of variables.
Consider the "Bubble" program and its CTS in Figures
7 and 8, which is a simplified skeleton of the bubble sort
algorithm (without arrays). Consider the subprogram corresponding
to start point 2 and whose target point is 6, that is,
we are considering the inner loop. Further suppose that the
following assertion had already been proven:

    bub(2, i, j, t, n) |= i_t = i ∧ n_t = n ∧ t_t = t + n - i - 1

that is, the subprogram increments t by n - i - 1, preserving
both i and n, but not j. Consider now a proof sequence
for the goal bub(0, i, j, t, n), n ≥ 0, where we want to
prove that at program point #8#, t = n(n-1)/2. The proof
tree is depicted in Figure 6. The proof shows a combination
of the use of intermittent abstraction and compositional
proof:
. At point (A), we abstract the goal bub(2, . . .)
using the predicates i < n-1 and t = i(2n - i - 1)/2. Call this abstraction A. Here the set
of variables var(A) consists of i and t; hence both of these variables
correspond to system variables, and upon abstraction i
#0# i := 0; t := 0;
#1# while (i < n-1) do {
#2#   j := 0;
#3#   while (j < n-i-1) do {
#4#     j := j+1; t := t+1;
#5#   }
#6#   i := i+1;
#7# }
#8#

Figure 7. Program "Bubble"
bub(0, i, j, t, n) → bub(1, 0, j, 0, n).
bub(1, i, j, t, n) → bub(8, i, j, t, n), i ≥ n-1.
bub(1, i, j, t, n) → bub(2, i, j, t, n), i < n-1.
bub(2, i, j, t, n) → bub(3, i, 0, t, n).
bub(3, i, j, t, n) → bub(6, i, j, t, n), j ≥ n-i-1.
bub(3, i, j, t, n) → bub(4, i, j, t, n), j < n-i-1.
bub(4, i, j, t, n) → bub(5, i, j+1, t+1, n).
bub(5, i, j, t, n) → bub(6, i, j, t, n), j ≥ n-i-1.
bub(5, i, j, t, n) → bub(4, i, j, t, n), j < n-i-1.
bub(6, i, j, t, n) → bub(7, i+1, j, t, n).
bub(7, i, j, t, n) → bub(8, i, j, t, n), i ≥ n-1.
bub(7, i, j, t, n) → bub(2, i, j, t, n), i < n-1.

Figure 8. CTS of "Bubble"
and t are renamed to fresh variables i 2 , and t 2 . Mean-
while, the variables j and n retain their original values.
. After performing the above abstraction, we reuse the
proof of the inner loop above. Here we immediately move
to program point #6#, incrementing t with n - i - 1 and
updating j to an unknown value. However, i and n retain
their original values from #2#.
. As the result of the intermittent abstraction above, we
obtain a coinductive proof at (B).
5. The Whole Algorithm
We now summarize our proof method for an assertion {Y} p {Y_1}.
Suppose the start program point of p is k and the start
variables of p are x̄. Then consider the start goal p(k, x̄), Y
and incrementally build a search tree. For each path in the
tree constructed so far leading to a goal G :
. if G is either subsumed or is coinductive, then consider
this path closed, ie: not to be expanded further;
. if G is a goal on which an abstraction A is defined,
replace G by A(G);
. if G is a target goal, and if the constraints on the primary
variables x̄_1 in G do not satisfy Y_1θ, where θ renames the
target variables in Y_1 into x̄_1, then report failure; otherwise,
consider this path closed as well.
Figure 6. Compositional Proof (proof tree for the goal bub(0, i, j, t, n), n ≥ 0, showing the intermittent abstraction point, the proof composition across the inner loop, and coinduction using (A))
THEOREM 2. If the above algorithm, applied to the assertion
{Y} p {Y_1}, terminates with all paths closed, then the assertion holds.
6. CLP Technology
It is almost immediate that a CTS is implementable in CLP.
Given a CTS for p, we build a CLP program in the following
way: (a) for every transition of the form p(k, x̄) → p(k_1, x̄_1), Y
we use the CLP clause p(k, x̄) :- Y, p(k_1, x̄_1) (providing
that Y is in the constraint domain of the CLP implementation
at hand); (b) for every terminal program point k, we
use the CLP fact p(k, _, . . . , _), where the number of anonymous
variables is the same as the number of variables in x̄.
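For instance, under this translation the first three transitions of Figure 8 (as reconstructed above) become the clauses below; the clause syntax is schematic, and concrete CLP syntax varies by system:
\[
\begin{array}{l}
\mathit{bub}(0,I,J,T,N) \;\texttt{:-}\; \mathit{bub}(1,0,J,0,N).\\
\mathit{bub}(1,I,J,T,N) \;\texttt{:-}\; I \ge N-1,\; \mathit{bub}(8,I,J,T,N).\\
\mathit{bub}(1,I,J,T,N) \;\texttt{:-}\; I < N-1,\; \mathit{bub}(2,I,J,T,N).
\end{array}
\]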
We see later that the key implementation challenge for
a CLP system is the incremental satisfiability problem.
Roughly stated, this is the problem of successively determining
that a monotonically increasing sequence of constraints
(interpreted as a conjunction) is satisfiable.
6.1 Exact Propagation is "CLP-Hard"
Here we informally demonstrate that the incremental satisfiability
problem is reducible to the problem of analyzing
a straight line path in a program. We will consider here
constraints in the form of linear diophantine equations, i.e.,
multivariate polynomials over the integers. Without loss of
generality, we assume each constraint is written in the form
y = x - z or x = n·y, where n is an integer.
Suppose we already have a sequence of constraints Y_0, . . . , Y_i and a
corresponding path in the program's control
flow.
Suppose we add a new constraint Y_{i+1} of the form Y = X - Z.
Then, if one of these variables, say Y, is new, we add the
assignment y := x - z where y is a new variable created to
correspond to Y. The remaining variables x and z are each
either new, or are the program variables corresponding to X and Z. If
however all of X, Y and Z are not new, then add the statement
if (y == x - z) . . . to the path, where x, y, z are the program
variables corresponding to X, Y, Z respectively. Hereafter we
pursue the then branch of this if statement.
Similarly, suppose the new constraint were of the form X = n·Y.
Let y correspond to Y, where y is possibly new. Again,
if x is new, we simply add the assignment x := n * y where
x is newly created to correspond to X. Otherwise, add the
statement if (x == n * y) . . . to the path, and again, we
now pursue the then branch of this if statement.
Clearly an exact analysis of the path we have constructed,
leading to a successful traversal, requires, incrementally, the
solving of the constraint sequence Y_0, . . . , Y_n.
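As a small worked instance of this construction (all variable names are hypothetical, chosen for illustration), the constraint sequence Y = X - Z (Y new), W = 3Y (W new), X = 2Y (all variables old) maps to the following C path; traversing it to the end of the then branch amounts to incrementally solving the sequence:

#include <stdio.h>

int analyze(int x, int z)        /* x, z: program variables for X, Z        */
{
    int y = x - z;               /* Y = X - Z, Y new: add an assignment      */
    int w = 3 * y;               /* W = 3Y, W new: add an assignment         */
    if (x == 2 * y) {            /* X = 2Y, all old: add a test and pursue   */
        return 1;                /* the then branch                          */
    }
    return 0;
}

int main(void)
{
    printf("%d\n", analyze(4, 2));   /* X=4, Z=2: Y=2, W=6, X==2Y holds */
    return 0;
}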
6.2 Key Elements of CLP Systems
A CLP system attempts to find answers to an initial goal G
by searching for valid substitutions of its variables. Depth-first
search is used. Each path in the search tree in fact
involves the solving of an incremental satisfiability problem.
Along the way, unsatisfiability of the constraints at hand
would entail backtracking.
The key issue in CLP is the incremental satisfiability
problem, as mentioned above. A standard approach is as
follows. Given that the sequence of constraints Y 0 , . , Y i
has been determined to be satisfiable, represent this fact
in a solved form. Essentially, this means that when a new
constraint Y i+1 is encountered, the solved form is efficiently
combinable with Y i+1 in order to determine the satisfiability
of the new conjunction of constraints.
This method essentially requires a representation of the
projection of a set of constraints onto certain variables. Consider,
for example, a set of linear constraints over the variables x_1, . . . , x_k.
Assuming that the new constraint would
only involve the variable x_i (and this happens vastly often),
we desire a representation of the projection of the set onto x_i. This projection
problem is well studied in CLP systems [13]. In the system
CLP(R) [14], for example, various adaptations of the
Fourier-Motzkin algorithm were implemented for projection
in Herbrand and linear arithmetic constraints.
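To illustrate the projection step with a textbook Fourier-Motzkin instance (our example, not from the paper): to project the set $\{y \ge x - 2,\; y \le 3\}$ onto $x$, eliminate $y$ by pairing its lower and upper bounds,
\[
x - 2 \;\le\; y \;\le\; 3 \quad\Longrightarrow\quad x \le 5,
\]
so $x \le 5$ is the solved-form representation against which a new constraint involving only $x$ can then be tested.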
We finally mention another important optimization in
CLP: tail recursion. This technique uses the same space in
the procedure call stack for recursive calls. Amongst other
benefits, this technique allows for a potentially unbounded
number of recursive calls. Tail recursion is particularly relevant
in our context because the recursive calls arising from
the CTS of programs are often tail-recursive.
The CLP(R) system that we use to implement our prototype
has been engineered to handle constraints and auxiliary
variables efficiently using the above techniques.
7. Experiments
We performed two kinds of experiments: the first set performs
exact propagation. We then look at comparable abstract
runs in the BLAST system, and (exact) runs in the
ESC/Java 2 system. These results are presented in Section 7.1.
In the second set of experiments, presented in Section 7.2,
we compare intermittent predicate abstraction with normal
predicate abstraction, again against the BLAST system.
We used a Pentium 4 2.8 GHz system with 512 MB RAM
running GNU/Linux 2.4.22.
7.1 Exact Runs
We start with an experiment which shows that concrete execution
can potentially be less costly than abstract execution:
we simply compare the timing of concrete execution using
our CLP-based implementation and a predicate abstraction-based
model checker. We also run a simple looping program,
whose C code is shown in Figure 9. We first have BLAST
generate all the 100 predicates it requires. We then re-run
BLAST by providing these predicates. BLAST took 22.06
seconds to explore the state space. On the same machine, and
without any abstraction, our verification engine took only
seconds. For comparison, SPIN model checker [12] executes
the same program written in PROMELA in less than
seconds.
Now consider the synthetic program consisting of an initial
assignment x := 0 followed by 1000 increments to x,
with the objective of proving that x = 1000 at the end. Consider
also another version where the program contains only
a single loop which increments its counter x 1000 times. We
input these two programs to our program verifier, without
using abstraction, and to ESC/Java 2 as well. The results are
shown in Table 1. For both our verifier and ESC/Java 2 we
run both with x initialized to 0 and not initialized, hopefully
forcing symbolic execution.
Table 1 shows that our verifier runs faster for the non-looping
version. However, there is a noticeable slowdown in
int main()
{ int i=0, j, x=0;
  while (i<7) {
    j=0;
    while (j<7) { x++; j++; }
    i++;
  }
  if (x != 49)  /* safety check: x should be 49 after the nested loops */
  { ERROR: ; }
  return 0;
}

Figure 9. Program with Loop
                      Time (in Seconds)
              CLP with Tabling        ESC/Java 2
              x=0      x uninit.      x=0      x uninit.
Non-Looping   2.45     2.47           9.89     9.68
Looping       22.05    21.95          1.00     1.00

Table 1. Timing Comparison with ESC/Java 2
the looping version for our implementation. This is caused
by the fact that in our implementation of coinductive tabling,
subsumption check is done based on similarity of program
point. Therefore, when a program point inside a loop is visited
for the i-th time, there are i - 1 subsumption checks to
be performed. This results in a total of about 500,000 subsumption
checks for the looping program. In comparison,
the non-looping version requires only 1,000 subsumption
checks. However, our implementation is currently at a prototype
stage and our tabling mechanism is not implemented in
the most efficient way. For the looping version, ESC/Java 2
employs a weakest precondition propagation calculus; since
the program is very small, with a straightforward invariant
(just the loop condition), the computation is very fast. Table 1
also shows that there is almost no difference between
having x initialized to 0 or not.
7.2 Experiments Using Abstraction
Next we show an example that demonstrates that the intermittent
approach requires fewer predicates. Let us consider
a second looping program written in C, shown in Figure 10.
The program's postcondition can be proven by providing an
invariant x=i ∧ i<50 exactly before the first statement of
the loop body of the outer while loop. We specify as an abstraction
domain the following predicates x=i, i<50, and respectively
their negations x≠i, i≥50 for that program point
to our verifier. Using this information, the proof process finishes
in less than 0.01 seconds. If we do not provide an abstract
domain, the verification process finishes in 20.34 sec-
onds. Here intermittent predicate abstraction requires fewer
predicates. We also run the same program with BLAST and
provide the predicates x=i and i<50 (BLAST would auto-
int main()
{ int i=0, j, x=0;
  while (i<50) {
    i++;
    j=0;
    while (j<10) { x++; j++; }
    while (x>i) { x--; }
  }
  if (x != 50)
  { ERROR: ; }
  return 0;
}

Figure 10. Second Program with Loop
while (true) do
  <0> x_i := max(x_1, . . . , x_N) + 1;
  <1> await (for all j ≠ i: x_j = 0 or x_i < x_j);
  <2> x_i := 0;

Figure 11. Bakery Algorithm Pseudocode for Process i
matically also consider their negations). BLAST finishes in
1.33 seconds, and in addition, it also produces 23 other predicates
through refinements. Running it again with all these
predicates given, BLAST finishes in 0.28 seconds.
Further, we also tried our proof method on a version
of bakery mutual exclusion algorithm. We need abstraction
since the bakery algorithm is an infinite-state program. The
pseudocode for process i is shown in Figure 11. Here we
would like to verify mutual exclusion, that is, no two processes
are in the critical section (program point #2#) at the
same time. Our version of bakery algorithm is a concurrent
program with asynchronous composition of processes. It can
be encoded as a sequential program with nondeterministic
choice.
We first encode the algorithm for 2, 3 and 4 processes
in BLAST. Nondeterministic choice can be implemented in
BLAST using the special variable BLAST NONDET which
has a nondeterministic value. We show the BLAST code for
2-process bakery algorithm in Figure 12. Within the code,
we use program point annotations #pc# which should be considered
as comments. Notice that the program points of the
concurrent version are encoded using the integer variables
pc1 and pc2.
Further, we translate the BLAST sequential versions of
the algorithm for 2, 3 and 4 processes into the CTS version
shown in Figure 13 and also its corresponding CLP code, as
an input to our prototype verifier.
In our experiments, we attempt to verify mutual exclusion
property, that is, no two processes can be in the critical section
at the same time. Here we perform 3 sets of runs, each
consisting of runs with 2, 3 and 4 processes. In all 3 sets,
we use a basic set of predicates: x i =0, x i #0, pc i =0, pc i =1,
int main()
{
#0# int pc1=0, pc2=0;
    unsigned int x1=0, x2=0;
#1# while (1) {
#2#   if (pc1==1 || pc2==1) {
#3#     /* Abstraction point 1 */; }
#4#   if (pc1==0 || pc2==0) {
#5#     /* Abstraction point 2 */; }
      else if (pc1==2 && pc2==2) { #6# ERROR: ; }
#7#   if (__BLAST_NONDET) {
#8#     if (pc1==0) { x1 = x2+1; pc1 = 1; }
        else if (pc1==1 && (x2==0 || x1<x2)) { pc1 = 2; }
        else if (pc1==2) { x1 = 0; pc1 = 0; }
      } else {
#12#    if (pc2==0) { x2 = x1+1; pc2 = 1; }
        else if (pc2==1 && (x1==0 || x2<x1)) { pc2 = 2; }
        else if (pc2==2) { x2 = 0; pc2 = 0; }
      }
    }
}

Figure 12. Sequential 2-Process Bakery
pc_i=2 for 1 ≤ i ≤ N, with N the number of processes,
and also their negations.
. Set 1: Use of predicate abstraction at every state with
full predicate set. Here using our prototype system we
perform ordinary predicate abstraction where we abstract
at every state encountered during search. Here, in addition
to the basic predicates, we also require the predicates
shown in Table 2 (and their negations) to avoid producing
spurious counterexample.
. Set 2: Intermittent predicate abstraction with full
predicate set. In the second set we use intermittent abstraction
technique on our prototype implementation. We
abstract only when for some process i, pc i =1 holds. In
Figure
12, this abstraction point is marked with the comment
"Abstraction point 1." The set of predicates that we
use here is the same as the predicates that we use in the
first experiment above, otherwise spurious counterexample
will be generated.
. Set 3: Intermittent predicate abstraction with reduced
predicate set. For the third set we also use intermittent
abstraction technique on our tabled CLP system.
Here we only abstract whenever there are N-1 processes
at program point 0, which in the 2-process sequential version
is the condition where either pc1=0 or pc2=0. This is
bak(3, pc1, pc2, x1, x2) → bak(4, pc1, pc2, x1, x2).
bak(5, pc1, pc2, x1, x2) → bak(7, pc1, pc2, x1, x2).
bak(6, pc1, pc2, x1, x2) → bak(7, pc1, pc2, x1, x2).
bak(7, pc1, pc2, x1, x2) → bak(8, pc1, pc2, x1, x2).
bak(7, pc1, pc2, x1, x2) → bak(12, pc1, pc2, x1, x2).

Figure 13. CTS of Sequential 2-Process Bakery (partially shown)
Bakery-2   x1<x2
Bakery-3   x1<x2, x1<x3, x2<x3
Bakery-4   x1<x2, x1<x3, x1<x4, x2<x3, x2<x4, x3<x4

Table 2. Additional Predicates
                   Time (in Seconds)
            CLP with Tabling            BLAST
            Set 1     Set 2    Set 3
Bakery-3    0.83      0.14     0.09     2.38
Bakery-4    131.11    8.85     5.02     78.47

Table 3. Timing Comparison with BLAST
marked with the comment "Abstraction point 2" in Figure 12.
For each bakery algorithm with N processes, here we
only need the basic predicates and their negations without
the additional predicates shown in Table 2.
We have also compared our results with BLAST. We supplied
the same set of predicates that we used in the first and
second sets to BLAST. Again, in BLAST we do not have to
specify their negations explicitly. Interestingly, for the 4-process
bakery algorithm BLAST requires even more predicates to
avoid refinement, which are x1=x3+1, x2=x3+1, x1=x2+1,
1≤x4, x1≤x3, x2≤x3 and x1≤x2. We suspect this is due to
the fact that precision in predicate abstraction-based state-space
traversal depends on the power of the underlying theorem
prover. We have BLAST generate these additional predicates
it needs in a pre-run, and then run BLAST using them.
Here since we do not run BLAST with refinement, lazy abstraction
technique [11] has no effect, and BLAST uses all
the supplied predicates to represent any abstract state.
For these problems, using our intermittent abstraction
with CLP tabling is also markedly faster than both full predicate
abstraction with CLP and BLAST. We show our timing
results in Table 3 (smallest recorded time of 3 runs each).
The first set and BLAST both run with abstraction at
every visited state. The timing difference between them and
second and third sets shows that performing abstraction at
every visited state is expensive. The third set shows further
gain over the second when we understand some intricacies
of the system.
Acknowledgement
We thank Ranjit Jhala for his help with BLAST.
--R
Automatic predicate abstraction of C programs.
Polymorphic predicate abstraction.
The Coq proof assistant reference manual-version v6
Java applet correctness: A developer-oriented approach
Modular verification of software components in C.
ESC/Java2: Uniting ESC/Java and JML.
Experience with predicate abstraction.
Construction of abstract state graphs of infinite systems with PVS.
HOL light: A tutorial introduction.
Lazy abstraction
The SPIN Model Checker: Primer and Reference Manual.
Projecting CLP(R) constraints.
The CLP(R) language and system.
The KRAKATOA tool for certification of JAVA/JAVACARD programs annotated in JML.
Principles of Program Analysis.
PVS: A prototype verification system.
A proof technique for rely/guarantee properties.
Model checking programs.
--TR
Constraint logic programming
Methods and logics for proving programs
based program analysis
Modern compiler implementation in ML
Simplification by Cooperating Decision Procedures
Abstract interpretation
Systematic design of program analysis frameworks
A flexible approach to interprocedural data flow analysis and programs with recursive data structures
On Proving Safety Properties by Integrating Static Analysis, Theorem Proving and Abstraction
Program Analysis Using Mixed Term and Set Constraints
Experiments in Theorem Proving and Model Checking for Protocol Verification
Powerful Techniques for the Automatic Generation of Invariants
Verifying Invariants Using theorem Proving
PVS | program analysis;program verification;abstract interpretation |
325874 | Optimizing Queries with Object Updates. | Object-oriented databases (OODBs) provide powerful data abstractions and modeling facilities but they usually lack a suitable framework for query processing and optimization. Even though there is an increasing number of recent proposals on OODB query optimization, only few of them are actually focused on query optimization in the presence of object identity and destructive updates, features often supported by most realistic OODB languages. This paper presents a formal framework for optimizing object-oriented queries in the presence of side effects. These queries may contain object updates at any place and in any form. We present a language extension to the monoid comprehension calculus to express these object-oriented features and we give a formal meaning to these extensions. Our method is based on denotational semantics, which is often used to give a formal meaning to imperative programming languages. The semantics of our language extensions is expressed in terms of our monoid calculus, without the need of any fundamental change to our basic framework. Our method not only maintains referential transparency, which allows us to do meaningful query optimization, but it is also practical for optimizing OODB queries since it allows the same optimization techniques applied to regular queries to be used with minimal changes for OODB queries with updates. | Introduction
One of the key factors for OODB systems to successfully compete with relational systems as well as to
meet the performance requirements of many non-traditional applications is the development of an effective
query optimizer. Even though there are many aspects to the OODB query optimization problem that can
benefit from the already successful relational query optimization research, there are many key features of
OODB languages that make this problem unique and hard to solve. These features include object identity,
methods, encapsulation, user-defined type constructors, large multimedia objects, multiple collection
types, arbitrary nesting of collections, and nesting of query expressions.
There is an increasing number of recent proposals on OODB query optimization. Some of them
are focused on handling nested collections [OW92, Col89], others on converting path expressions into
joins [KM90, CD92], others on unnesting nested queries [CM95b, CM95a], while others are focused on
handling encapsulation and methods [DGK + 91]. However, there are very few proposals on query optimization
in the presence of object identity and destructive updates, features often supported by most realistic
OODB languages.
In earlier work [FM98, FM95b, FM95a], we proposed an effective framework with a solid theoretical
basis for optimizing OODB query languages. Our calculus, called the monoid comprehension calculus, has
already been shown to capture most features of ODMG OQL [Cat94] and is a good basis for expressing
various optimization algorithms concisely, including query unnesting [Feg98] and translation of path
expressions into joins [Feg97]. In this paper, we extend our framework to handle object identity and
object updates.
1.1 Object Identity Complicates Query Optimization
Object-oriented programming is based on side-effects, that is, on the modification of the object store.
Even though modern OODBs provide declarative query languages for associative access of data, queries
in those languages are allowed to invoke any method, including those that perform side effects. Object
creation itself, which is very common in OODB queries, is a side effect since it inserts a new object in a
class extent. Consider for example the following OQL query:
select Person(e.name,e.address) from e in Employees
which creates a new person from each employee. Even though this query seems to be free of side effects
at first glance, it is not: it modifies the extent of the class Person by inserting a new person. If this
query were a part of another query and this other query were scanning the extent of the class Person,
this extent would have to be modified accordingly before it is used in the outer query. Therefore, the
semantics of the above query must reflect the fact that the Person extent is modified during the execution
of the query. Failure to do so may result in incorrect optimizations, which may lead to invalid execution
plans.
The problem of assigning semantics to object-oriented queries becomes even worse if we allow object
state modifications in arbitrary places in a query, like most OODB languages do. For example, the OQL
query
select (e.salary := e.salary*1.08)
from e in Employees
where e.salary > 50000
gives an 8% raise to all employees earning over $50000. The semantics of this query should reflect the
fact that a salary is modified at each iteration.
The situation where queries are mixed freely with updates occurs more frequently in OODB languages,
such as O++ [AG89], that support set iteration embedded in a computationally complete programming
language. Even though these languages are beginning to disappear in favor of more declarative languages,
such as OQL, there is a surge of interest to provide more computational power to existing declarative
query languages without sacrificing performance. Consider for example the following O++ query (taken
from [LD92]):
for (D of Divisions)
    for (E of Employees)
        suchthat (E->division==D) {
            totpay ...
        }
These types of queries allow any kind of C++ code inside a for-loop, including code that modifies the
database. Earlier research on query optimization [LD92] has shown that queries in this form are very
hard to optimize.
Another problem to consider when sets and bags are combined with side effects is that the results
may be unpredictable due to commutativity. For example, in the following O++ query
for (e of Employees)
    x := e.salary;

the value of x at the end of the execution of this program would depend on the way Employees are scanned.
To understand the extent of this problem, consider a function f(n) whose body contains the assignment x:=n and which returns n. Is the value of x, after the execution of {f(1), f(2)}, 1 or 2? (Since sets are unordered, either evaluation order is possible.) That is, is {f(1), f(2)} equal to {f(2), f(1)}?
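This order dependence is easy to reproduce in any language with sets and side effects. The following sketch (in Python; the variable and function names are ours, chosen only to mirror the f(n) above) shows that the two set values are equal while the final states differ:

x = 0

def f(n):
    global x
    x = n          # the side effect: overwrite the global x
    return n

print({f(1), f(2)}, x)   # {1, 2} 2  (left-to-right evaluation leaves x = 2)
x = 0
print({f(2), f(1)}, x)   # {1, 2} 1  (the other order leaves x = 1)

The values agree but the stores do not, which is exactly the non-determinacy that makes such equivalences hard to prove.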
Given that side effects may appear at any place in a program, proving equivalences between OODB
expressions becomes very hard. This makes the task of expressing and verifying optimization rules
difficult to accomplish. For example, the well-known transformation x ⋈ y → y ⋈ x, for any terms x
and y, is no longer valid since x and y may be queries that perform side effects and the order of
execution of the side effects would be changed if this transformation were applied, thus changing the
semantics of the program. One way to patch this error is to attach a guard to the above transformation
rule to prevent its execution when both terms x and y contain side effects. Unfortunately, this approach
is too conservative and it may miss some optimizations (e.g., when the side effects of x and y do not
interfere with each other). Furthermore, there is a more fundamental problem with algebraic operations
with side effects. For example, x ./ y cannot appear in a valid translation of a declarative query, since a
declarative query does not define an execution order.
Consequently, when optimizing a realistic OODB query language, we need to address the problem of
object identity properly and handle the implicit or explicit side effects due to the use of object identity.
It is also highly desirable to use the existing optimization techniques with minimal changes if possible.
To do so, it is necessary to capture and handle object identity in the same framework as that for regular
queries with no side effects. Unfortunately, such extensions are very difficult to incorporate to an existing
optimization framework. To understand the level of difficulty, consider the following equality predicate:
Person("Smith","Park
This predicate must be evaluated to false since the left object has a different
identity than the right one. On the other hand, given a function g(x) that computes the predicate x=x,
the function call g(Person("Smith","Park Av")) should return true. But if we unfold the call to g, we get
the previous false expression. Consequently, substituting the body of a function definition for a function
call is not a valid transformation any more.
Our goal is to give a formal meaning to OODB queries with side effects, and more importantly, to
provide an equational theory that allows us to do meaningful query optimization. It is highly desirable for
this theory to be seamlessly incorporated into the monoid comprehension calculus, possibly by discovering
a new monoid that captures the meaning of object identity. Another goal is that, whenever there are no object updates in a query, this query should be treated in the same way as it is currently treated by our basic optimizer without the object extensions.
1.2 Our Approach
This paper presents a framework that incorporates impure features, namely object identity, into the
monoid comprehension calculus. According to our earlier discussion, it is important to give semantics to
such extensions to preserve referential transparency. We have referential transparency when we are able
to substitute equal subexpressions in the context of a larger expression to give equal results [Rea89]. If a
query language lacks this property, then the transformation rules of a query optimizer would depend on
the context of the expression on which they apply.
Researchers in programming languages often use formal methods, such as denotational semantics, to
solve such problems. In the denotational semantics approach, the impure features of a language can be
captured by passing the state (here the object store) through all operations in a program. If a piece
of a program does not update the state, then the state is propagated as is; otherwise it is modified
to reflect the updates. When we say "the state is modified", we mean that a new copy of the state
is created. This approach may become quite inefficient: for each destructive update, no matter how
small it is, a new object store (i.e., an entire database) must be created. Obviously, this technique is
unacceptable for most database applications. There is a solution to this problem. We can allow the state
to be manipulated by a small number of primitives that not only preserve referential transparency, but
have an efficient implementation as well. More specifically, even though these primitives are defined in
a purely functional way, their implementations perform destructive updates to the state. That way we
derive efficient programs and, more importantly, we maintain referential transparency, which allows us
to do meaningful query optimization.
But there is a catch here: this solution works only if the state is single-threaded [Sch85]. Roughly
speaking, a state is single-threaded through a program if this program does not undo any state modification
at any point, that is, if there is a strict sequencing of all the state updates in the program. In
that case, the state can be replaced by access rights to a single global variable and the state operations
can be made to cause side effects while preserving operational properties. The following is an example
of a non single-threaded program:
x := 1;
assume x := 2 in y := x + 1;

The statement assume S1 in S2 executes the statement S1 locally. That is, the state modifications in S1
are used in S2 exclusively and then are discarded. Thus, the values of x and y after the completion of this
program are 1 and 3 respectively (since the binding x:=2 is discarded after it is used in y:=x+1). This
statement requires a local copy of the state during execution (probably in the form of a stack of states to
handle nested assume statements), since it needs to backtrack to the previous state after the completion
of execution. A rollback during a database transaction is another example of a non single-threaded
operation.
There are two common ways to guarantee single-threadedness. The first is to allow any state manipulation in the language but to detect violations of single-threadedness by a semantic analysis, i.e., a kind of abstract interpretation [Sch85], such as a linear type system [SS95] applied during type-checking. The second approach, which we adopt in our framework, is to restrict
the syntax of the language in such a way that the state is guaranteed to always be single-threaded.
There is another serious problem with the above mentioned denotational semantics approach: to pass
the state through the operations of a program, we need to sequentialize all operations. This restriction
is not a good idea for commutative operations, since we may miss some optimization opportunities. For
example, for r ranging over the set R = {1, 2}, the assignment x := r can be evaluated during the
scanning of R in two ways; one will set x to 1 at the end and the other to 2, depending on the way
R is scanned. Both solutions are valid and should be considered by the optimizer. We address this
problem by generating all possible solutions generated by these alternatives at a first stage. It is up
to the optimizer to select the best one at the end (by performing a cost analysis). Even though this
approach may generate an exponential number of solutions when applied to constant data, in practice
it does not do so, provided that the query does not contain a large number of union operations. Each
alternative solution corresponds to a typically different final database state. At the end, only one solution
is chosen by the optimizer. So there is a "collection semantics" for queries - for each query there is a
collection of possible correct answers. We decided to consider all solutions instead of reporting an error
when more than one solution exists because most useful programs fall into this category. Considering all
alternatives during query optimization is necessary for proving program equivalences (such as proving
that {f(1), f(2)} is equal to {f(2), f(1)}). Only at the plan generation phase, i.e., when optimization is
completed, should we select an alternative.
Our framework is inspired by Ohori's work on representing object identity using monads [Oho90].
Our contribution is that we mix state transformation with sets and bags and that we apply this theory
to a database query language that satisfies strong normalization properties. Normalization removes any
unnecessary state transformation, thus making our approach practical for optimizing programs in our
object-oriented calculus. The most important contribution of our work is the development of a method
to map programs in which state transformation cannot be removed by normalization into imperative
loops, much in the same way one could express these programs using a regular imperative language, such
as C. The resulting programs are as efficient as those written by hand.
The rest of the paper is organized as follows. Section 2 describes our earlier results on the monoid
comprehension calculus. Section 3 describes our object extensions to the monoid calculus. Section 4 proposes
a new monoid that captures object identity and side effects. Section 5 describes our framework for
handling object identity using denotational semantics. Section 6 addresses some practical considerations
when building an optimizer based on our framework. Section 7 presents a prototype implementation of
our framework. Finally, Section 8 extends our framework to capture database updates and discusses how
this theory can be applied to solve the view maintenance problem.
2 Background: The Monoid Comprehension Calculus
This section summarizes our earlier work on the monoid calculus. A more formal treatment is presented
elsewhere [FM98, FM95b, FM95a].
The monoid calculus is based on the concept of monoids from abstract algebra. A monoid of type T is a pair (⊕, Z⊕), where ⊕ is an associative function of type T × T → T (i.e., a binary function that takes two values of type T and returns a value of type T), called the accumulator or the merge function of this monoid, and Z⊕ of type T, called the zero element of the monoid, is the left and right identity of ⊕. That is, the zero element satisfies Z⊕ ⊕ x = x ⊕ Z⊕ = x for every x. Since the accumulator function uniquely identifies a monoid, we will often use the accumulator name as the monoid name. Examples of monoids include (∪, { }) for sets, (⊎, {{ }}) for bags, (++, [ ]) for lists, (+, 0), (×, 1), and (max, 0) for integers, and (∨, false) and (∧, true) for booleans. The monoids for integers and booleans are called primitive monoids because they construct values of a primitive type. The set, bag, and list monoids are called collection monoids. Each collection monoid (⊕, Z⊕) requires the additional definition of a unit function, U⊕, which, along with merge and zero, allows the construction of all possible values of this type. For example, the unit function for the set monoid is λx. {x}, that is, it takes a value x as input and constructs the singleton set {x} as output. All but the list monoid are commutative, i.e., they satisfy x ⊕ y = y ⊕ x. In addition, some of them (∪, ∨, and max) are idempotent, i.e., they satisfy x ⊕ x = x.
A monoid comprehension over the monoid ⊕ takes the form ⊕{ e | r1, …, rn }. Expression e is called the head of the comprehension. Each term ri in the term sequence r1, …, rn, n ≥ 0, is called a qualifier, and is either a generator of the form v ← e′, where v is a range variable and e′ is an expression (the generator domain) that constructs a collection, or a filter p, where p is a predicate. We will use the shorthand { e | r } to denote the set comprehension ∪{ e | r }.
A monoid comprehension is defined by the following reduction rules (⊗ is a collection monoid, possibly different than ⊕):

⊕{ e | } = U⊕(e) if ⊕ is a collection monoid; e otherwise  (D1)
⊕{ e | false, r } = Z⊕  (D2)
⊕{ e | true, r } = ⊕{ e | r }  (D3)
⊕{ e | v ← Z⊗, r } = Z⊕  (D4)
⊕{ e | v ← U⊗(e′), r } = let v = e′ in ⊕{ e | r }  (D5)
⊕{ e | v ← (e1 ⊗ e2), r } = (⊕{ e | v ← e1, r }) ⊕ (⊕{ e | v ← e2, r })  (D6)
Rules (D2) and (D3) reduce a comprehension in which the leftmost qualifier is a filter, while Rules (D4)
through (D6) reduce a comprehension in which the leftmost qualifier is a generator. The let-statement
in (D5) binds v to e′ and uses this binding in every free occurrence of v in ⊕{ e | r }.
The calculus has a semantic well-formedness requirement that a comprehension be over an idempotent or commutative monoid if any of its generators are over idempotent or commutative monoids. For example, ++{ x | x ← {1, 2} } is not a valid monoid comprehension, since it maps a set monoid (which is both commutative and idempotent) to a list monoid (which is neither commutative nor idempotent), while +{ x | x ← {{1, 2}} } is valid (since both are commutative). This requirement can be easily checked during compile time.
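The reduction rules (D1)-(D6) can be transcribed almost verbatim into executable form. The following sketch (in Python; the monoid encoding as a (zero, merge, unit) triple and all names are our own, not part of the calculus) evaluates a comprehension by structural recursion over its qualifier list, which is exactly how the rules operate:

from functools import reduce

SET = (frozenset(), lambda a, b: a | b, lambda x: frozenset([x]))
SUM = (0, lambda a, b: a + b, None)   # primitive monoids have no unit

def comprehension(monoid, head, qualifiers):
    # Evaluate  monoid{ head | qualifiers }  following (D1)-(D6).
    # A qualifier is ('gen', var, domain) or ('filter', pred);
    # head, pred and domain are functions of an environment dict.
    zero, merge, unit = monoid
    def go(env, quals):
        if not quals:                          # (D1): empty qualifier list
            v = head(env)
            return unit(v) if unit else v
        q, rest = quals[0], quals[1:]
        if q[0] == 'filter':                   # (D2)/(D3): filters
            return go(env, rest) if q[1](env) else zero
        _, var, domain = q                     # (D4)-(D6): generators
        return reduce(merge,
                      (go({**env, var: x}, rest) for x in domain(env)),
                      zero)
    return go({}, list(qualifiers))

# +{ x*x | x <- {1, 2, 3}, x > 1 }  =  13
print(comprehension(SUM, lambda e: e['x'] * e['x'],
                    [('gen', 'x', lambda e: {1, 2, 3}),
                     ('filter', lambda e: e['x'] > 1)]))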
When restricted to sets, monoid comprehensions are equivalent to set monad comprehensions [BLS+94], which capture precisely the nested relational algebra [FM95b]. Most OQL expressions have a direct translation into the monoid calculus. For example, the OQL query

select distinct hotel.price
from hotel in ( select h
                from c in Cities, h in c.hotels
                where c.name = "Arlington" )
where exists r in hotel.rooms: r.bed_num = 3
  and hotel.name in ( select t.name
                      from s in States, t in s.attractions
                      where s.name = "Texas" )

is translated into the following comprehension:

∪{ hotel.price | hotel ← ∪{ h | c ← Cities, h ← c.hotels, c.name = "Arlington" },
                 ∨{ r.bed_num = 3 | r ← hotel.rooms },
                 ∨{ t.name = hotel.name | s ← States, t ← s.attractions, s.name = "Texas" } }
We use the shorthand x ≡ u to represent the binding of the variable x with the value u. The meaning of this construct is given by the following reduction:

⊕{ e | r, x ≡ u, s } → ⊕{ e[u/x] | r, s[u/x] }

where e[u/x] is the expression e with u substituted for all the free occurrences of x (i.e., e[u/x] is equivalent to let x = u in e). In addition, as a syntactic sugar, we allow irrefutable patterns in place of lambda variables, range variables, and variables in bindings. Patterns like these can be compiled away using standard pattern decomposition techniques [PJ87]. For example, { x + y | (x, y) ← A } is equivalent to { a.fst + a.snd | a ← A }, where a.fst/a.snd retrieves the first/second element of a pair. Another example is λ(x, y, z). x + y + z, which is a function that takes three parameters and returns their sum. It is equivalent to λa. a.fst + a.snd.fst + a.snd.snd.
The monoid calculus can be put into a canonical form by an efficient rewrite algorithm, called the
normalization algorithm. The evaluation of these canonical forms generally produces fewer intermediate
data structures than the initial unnormalized programs. Moreover, the normalization algorithm improves
program performance in many cases. It generalizes many optimization techniques already used in relational
algebra, such as fusing two selections into one selection. The following are the most important
rules of the normalization algorithm:

⊕{ e | q, v ≡ u, s } → ⊕{ e[u/v] | q, s[u/v] }  (N1)
(λv. e1) e2 → e1[e2/v]  (beta reduction)  (N2)
⊕{ e | q, v ← ⊗{ e′ | r }, s } → ⊕{ e[e′/v] | q, r, s[e′/v] }  (N3)
⊕{ e | q, ∨{ pred | r }, s } → ⊕{ e | q, r, pred, s }  for idempotent ⊕  (N4)

The soundness of the normalization rules can be proved using the definition of the monoid comprehension [FM98]. Rule (N3) flattens a comprehension that contains a generator whose domain is another comprehension (it may require variable renaming to avoid name conflicts). Rule (N4) unnests an existential quantification.
For example, the previous OQL query is normalized into:

∪{ h.price | c ← Cities, h ← c.hotels, r ← h.rooms, s ← States, t ← s.attractions,
             c.name = "Arlington", r.bed_num = 3, s.name = "Texas", t.name = h.name }

by applying Rule (N3) to unnest the two inner set comprehensions and Rule (N4) to unnest the two existential quantifications.
3 The Object Monoid Calculus
In this section, the monoid calculus is extended to capture object identity. The extended calculus is
called the object monoid calculus. For example, one valid object-oriented comprehension is

++{ !x | x ← [new(1), new(2)], x := !x + 1 }

which first creates a list containing two new objects (new(1) and new(2)). Then, variable x ranges over
this list and the state of x is incremented by one (by x := !x+1). Here x is a reference to the object x while
!x returns the state of the object x. The result of this computation is the list [2; 3]. An object-oriented
comprehension is translated into a state transformer that propagates the object heap (which contains
bindings from object identities to object states) through all operations in an expression, and changes
it only when there is an operation that creates a new object or modifies the state of an object. This
translation captures precisely the semantics of object identity without the need of extending the base
model. It also provides an equational theory that allows us to do valid optimizations for object-oriented
queries.
We introduce a new type constructor obj(T ) that captures all objects with states represented by values
of type T . In addition, we extend the monoid calculus with the following polymorphic operations [Oho90]
(in the style of SML [Pau91]):
• new, of type T → obj(T). Operation new(s) creates a new object with state s;
• !, of type obj(T) → T. Operation !e dereferences the object e (it returns the state of e);
• :=, of type obj(T) → T → bool. Operation e := s changes the state of the object e to s and returns
true.
Many object-oriented languages have different ways of constructing and manipulating objects. For exam-
ple, OQL uses object constructors to create objects and does not require any explicit object dereferencing
operator. These language features can be easily expressed in terms of the primitives mentioned above.
When giving formal semantics, our primitives are a better choice since they do not deal with details
about object classes, inheritance, etc. When optimizing a real object-oriented language, though, these
"details" should be addressed properly.
Our object-oriented operators may appear at any place in a monoid comprehension. The following are
examples of comprehensions with object operations (called object comprehensions). (Recall that v ≡ e defines the new variable name v to be a synonym of the value e, while e1 := e2 changes the state of the object whose OID is equal to the value of e1 into the result of e2.)
The first example indicates that different objects are distinct while the second example indicates that
objects may have equal states. The ninth example computes the cardinality of the set {1, 2, 2, 3} and indicates that duplicates in a set do not count. The last example is the most interesting one: since there is no order in the set {1, 2, 2, 3}, there are as many results as the permutations of the set (namely, {1, 3, 6}, {1, 4, 6}, {2, 3, 6}, {2, 5, 6}, {3, 4, 6} and {3, 5, 6}). We consider all these results
valid but the optimizer will construct a plan at the end that generates one result only. A more practical
example is the query

∧{ !e.department := !(!e.manager).department | e ← Employees }
which sets the department of each employee to be the department of the employee's manager.
4 The State Transformation Monoid
One way of handling side effects in denotational semantics is to map terms that compute values of type T into functions of type S → (T × S), where S is the type of the state in which all side effects take place. That is, a term of type T is mapped into a function that takes some initial state s0 of type S as input and generates a value of type T and a new state s1. In denotational semantics, these functions of type S → (T × S) are called state transformers [Wad92, Wad90]. If a term performs side effects, the
state transformer maps s0 into a different state s1 to reflect these changes. Otherwise, the state remains
unchanged. For example, the constant integer 3 is mapped into the state transformer λs. (3, s), which
propagates the state as is. Note that not only the new state, but the computed value as well, may depend
on the input state. That way, side effects are captured as pure functions that map states into new states.
Unfortunately, if we add side effects to our calculus, our programs may have multiple interpretations,
mainly due to the commutativity of monoids, which results in non-determinacy. It is highly desirable
to capture all these interpretations and let the optimizer select the best one at the end. To handle
this type of non-determinism in a functional way, given an input state, our state transformer must be
able to return multiple values and multiple states, or in other words, it must be able to return multiple
value-state pairs. Consequently, our state transformer should be of type S → set(T × S) to capture all
possible interpretations of a program.
Definition 1 (State Transformer) The state transformer Φ(T) of a type T and a state type S is the type S → set(T × S).

As we will show shortly, given a monoid ⊕ for a type T, we can always define a primitive monoid, ⊕̄, for the state transformer Φ(T). In contrast to the monoids described earlier, this monoid must be a higher-order monoid, i.e., instances of this monoid are functions.
The definition of ⊕̄ described below is very important for proving the correctness of various transformation rules. It can be safely skipped if the reader is not interested in such proofs. We will first present a simple definition that works well for non-commutative, non-idempotent monoids and then we will extend it to capture all monoids.

Definition 2 (State Transformation Monoid) The state transformation monoid of a monoid (⊕, Z⊕) is the primitive monoid (⊕̄, Z⊕̄), defined as follows:

Z⊕̄ = λs. {(Z⊕, s)}
φ1 ⊕̄ φ2 = λs. { (v1 ⊕ v2, s2) | (v1, s1) ← φ1 s, (v2, s2) ← φ2 s1 }

That is, Z⊕̄ is a function that, when applied to a state s of type S, constructs the value {(Z⊕, s)}. The merge function ⊕̄ propagates the state from the first state transformer to the second and merges the resulting values using ⊕. It is easy to prove that for n > 0:

⊕̄{ f | v ← [x1, …, xn] } = f[x1/v] ⊕̄ ⋯ ⊕̄ f[xn/v]

State monoid comprehensions are simply monoid comprehensions over a state transformation monoid. For example,

+̄{ λs. {(v, s)} | v ← [1, 2, 3] } s0
= ((λs. {(1, s)}) +̄ (λs. {(2, s)}) +̄ (λs. {(3, s)})) s0
= {(6, s0)}
The state can be of any type. Suppose that the state is an integer that counts list elements. Then the following state comprehension increments each element of the list [1, 2, 3] and uses the state to count the list elements:

++̄{ λs. {([v + 1], s + 1)} | v ← [1, 2, 3] } s0 = {([2, 3, 4], s0 + 3)}
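For the non-commutative, non-idempotent case, Definition 2 can be run directly. In the following sketch (Python; the representation choices are ours) a state transformer is a function from a state to a list of (value, state) pairs:

def lift_zero(zero):
    # Z of the lifted monoid: propagate the state unchanged.
    return lambda s: [(zero, s)]

def lift_merge(merge):
    # phi1 merged with phi2: thread the state from phi1 into phi2
    # and merge the resulting values with the base monoid.
    def combined(phi1, phi2):
        def phi(s):
            return [(merge(v1, v2), s2)
                    for (v1, s1) in phi1(s)
                    for (v2, s2) in phi2(s1)]
        return phi
    return combined

plus = lift_merge(lambda a, b: a + b)
# +{ \s.{(v,s)} | v <- [1,2,3] } s0  =  {(6, s0)}
transformers = [lambda s, v=v: [(v, s)] for v in [1, 2, 3]]
phi = transformers[0]
for t in transformers[1:]:
    phi = plus(phi, t)
print(phi('s0'))     # [(6, 's0')]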
For the state transformer monoid ⊕̄ to be effective, it must have the same properties as the monoid ⊕. Otherwise, it may introduce semantic inconsistencies. That is, if ⊕ is commutative or idempotent, so must be ⊕̄. To capture this property, we redefine ⊕̄ to behave in the same way as ⊕:

φ1 ⊕̄ φ2 = λs. F(s) ∪ G(s)

where F and G are defined as follows:

F(s) = { (x ⊕ y, s2) | (x, s1) ← φ1 s, (y, s2) ← φ2 s1 }
G(s) = { (x ⊕ y, s2) | (y, s1) ← φ2 s, (x, s2) ← φ1 s1 } if ⊕ is commutative; { } otherwise

That is, if ⊕ is commutative, then φ1 ⊕̄ φ2 has two interpretations: one propagates the state from φ1 to φ2 and the other from φ2 to φ1 (this is the contribution of the factor G). If ⊕ is idempotent, then all elements of x are removed from y when x ⊕ y is evaluated, so that a second occurrence of an element contributes {(Z⊕, s1)} and leaves the state untouched. For example, for an integer state that counts set elements we have:

((λs. {({1}, s + 1)}) ∪̄ (λs. {({1}, s + 1)})) s0 = {({1}, s0 + 1)}

where G propagates the state from right to left and is also equal to {({1}, s0 + 1)}. That is, the counter counts the list element 1 once, even though it appears twice.
We prove in the Appendix (Theorem 1) that, under these extensions, not only is ⊕̄ a valid monoid, but it is also compatible with the ⊕ monoid (i.e., if ⊕ is commutative and/or idempotent, then so is ⊕̄).
5 Capturing Object Identity
So far we have not discussed what the state type, S, should be. Indeed, S can be of any type. If we
wished to capture database updates, for example, the state would have to be the entire database. Here,
though, we are interested in capturing object identity. The state s of a state transformer that captures
object identity can be viewed as a pair (L, n) [Oho90]. The value L, called the object store, maps objects of type T (i.e., instances of the type obj(T)) into values of type T. That is, it maps OIDs into object
states. The integer n is a counter used for computing the next available OID.
There are four primitives to manipulate the state, which have the following types:

ref_T : int → obj(T)
emptyStore : store
ext_T : store × obj(T) × T → store
lookup_T : store × obj(T) → T

ref_T(n) maps the integer n into an OID that references an object of type T, emptyStore is the initial object store value (without any objects), ext_T(L, o, v) extends the object store L with the binding from the OID o to the state value v, and lookup_T(L, o) accesses the object store L to retrieve the state of the object with OID o. For example,

lookup_int(ext_int(L, ref_int(100), 1), ref_int(100))

returns 1, the state of the object (of type obj(int)) with OID 100.
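A purely functional reading of these four primitives (a Python sketch; here the store is copied on every extension, which is precisely the inefficiency the destructive implementation of Section 6 removes):

def ref(n):            # ref_T(n): an OID is just an integer here
    return n

emptyStore = {}        # the initial object store

def ext(L, o, v):      # ext_T(L, o, v): a *new* store with the binding o -> v
    L2 = dict(L)
    L2[o] = v
    return L2

def lookup(L, o):      # lookup_T(L, o)
    return L[o]

print(lookup(ext(emptyStore, ref(100), 1), ref(100)))   # 1, as above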
The above primitives satisfy the following equivalences:

ext_T(ext_T(L, o, v), o, w) = ext_T(L, o, w)
ext_T(ext_T(L, o1, v), o2, w) = ext_T(ext_T(L, o2, w), o1, v)  if o1 ≠ o2
lookup_T(ext_T(L, o, v), o) = v
lookup_T(ext_T(L, o1, v), o2) = lookup_T(L, o2)  if o1 ≠ o2
Figure 1: Denotational Semantics of the Monoid Calculus Using State Transformers

Figure 1 presents the denotational semantics of the most important constructs of the monoid calculus without the object extensions (i.e., without new, !, and :=). Without the object extensions, the state should be propagated as is, not changed. The semantics of the object extensions form a non-standard interpretation and is given later. In these equations, the semantic brackets [[·]] give the meaning of the syntax enclosed by the brackets in terms of the pure monoid comprehension calculus. The type of [[e]] for a term e of monotype T (a non-function type) is the state transformer Φ(T) = S → set(T × S). In general, if e is of type t, then the type of [[e]], denoted by t̄, is obtained by lifting monotypes to state transformers and function types t1 → t2 to t̄1 → t̄2. Recall also that a state s of type S is a pair (L, n); but for convenience, we will only use the notation (L, n) whenever either of the components L or n needs to be accessed.
Rule (S7) in Figure 1 handles functional terms. For example, [[λv. v]] is translated into {(λv. λs′. {(v, s′)}, s)} when applied to a state s. Rule (S8) assumes a call-by-value interpretation [Rea89]: e2 in e1(e2) is evaluated before e1 is applied. Rules (S13) and (S14) translate
monoid comprehensions. Rule (S14) uses a monoid comprehension over the monoid ⊕̄ to propagate the state through every element of the collection u. Notice that the comprehension head here is a state transformer and all these state transformers are merged using ⊕̄. This comprehension is valid for any type of collection u, since the monoid ⊕̄ is compatible with the monoid ⊕. This higher-order comprehension is necessary since the term u may modify the object store each time a new object is constructed. In most cases, though, the state is propagated but not changed. If it is not changed, the following rule can be applied to eliminate state propagation (the correctness of this rule is straightforward and is omitted):

⊕̄{ λs. {(e, s)} | v ← u } = λs. {(⊕{ e | v ← u }, s)}
The following rules give the denotational semantics of the object extensions:

[[new(e)]] = λs. { (ref_T(n), (ext_T(L, ref_T(n), v), n + 1)) | (v, (L, n)) ← [[e]] s }  (S15)
[[e1 := e2]] = λs. { (true, (ext_T(L, o, v), n)) | (o, s1) ← [[e1]] s, (v, (L, n)) ← [[e2]] s1 }  (S16)
[[!e]] = λs. { (lookup_T(L, o), (L, n)) | (o, (L, n)) ← [[e]] s }  (S17)

The operation new(e) takes an available OID, n, and uses it as the OID of the new object with state e. In addition, the object store is extended with the binding from ref_T(n) to the state value. Rule (S16), instead of destructively changing the object store, extends the store with a new binding from the OID of the left part of := to the value of the right part. Rule (S17) simply looks up the object store for the requested OID.
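These rules, as reconstructed above, can be checked against a direct transcription (a Python sketch; all function names are ours, and the state is a pair of a store dictionary and an OID counter):

def ext(L, o, v):
    L2 = dict(L); L2[o] = v; return L2

def NEW(e):
    # (S15): run e, then allocate the next OID for its value.
    def phi(state):
        return [(n, (ext(L, n, v), n + 1)) for (v, (L, n)) in e(state)]
    return phi

def ASSIGN(e1, e2):
    # (S16): evaluate the OID, then the value, then extend the store.
    def phi(state):
        return [(True, (ext(L, o, v), n))
                for (o, s1) in e1(state)
                for (v, (L, n)) in e2(s1)]
    return phi

def DEREF(e):
    # (S17): evaluate the OID and look it up, leaving the state unchanged.
    def phi(state):
        return [(L[o], (L, n)) for (o, (L, n)) in e(state)]
    return phi

def CONST(c):
    return lambda state: [(c, state)]

print(NEW(CONST(1))(({}, 0)))          # [(0, ({0: 1}, 1))]
print(DEREF(NEW(CONST(1)))(({}, 0)))   # [(1, ({0: 1}, 1))]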
The Appendix provides a proof of a theorem (Theorem 2) that indicates that, if we have no state
modification operations in the calculus, the output state is the same as the input state and that the
canonical form we derive after normalization is similar to the canonical form we get in the pure monoid
calculus. This theorem basically guarantees that, even though state transformation sequentializes all
operations, if a program does not perform any state modification, the normalization algorithm can
remove all unnecessary state transformations.
Terms in the object monoid calculus are translated and normalized in this way, taking the state s to be (L, n). A more interesting example is incrementing all elements of a set of integers (of type set(obj(int))), and set cardinality can be expressed with the help of a counter x. We can now support bags and sets of objects without inconsistencies: for example, {{new(1), new(2)}} is a valid expression and it is equal to {{new(2), new(1)}}. Similarly, assignments can be freely moved inside set constructions.
6 Translating Object Comprehensions into Efficient Programs
We have seen in the previous section that object-oriented comprehensions can be expressed in terms of the basic monoid comprehension calculus using the denotational semantics rules of Figure 1. The
resulting programs are usually inefficient since they manipulate the state even when the state is not used
at all. These inefficiencies can be reduced with the help of the normalization algorithm and the algebraic
equalities for the object primitives (Rules (O1) through (O5)). In fact, most parts of the resulting
programs can be normalized to first-order programs that look very similar to the programs one might
write using the four object primitives directly. This section is focused on the efficient execution of the
programs that cannot be reduced to first-order programs by the normalization algorithm.
When translating the object monoid calculus to the basic calculus we consider all possible alternatives
due to the commutativity of some operations. This is absolutely necessary for proving program equalities.
After the normalization is completed and the algebraic equalities have been used to check program
equivalences, the optimizer can safely discard all but one alternative. The following function, choose,
selects a random alternative. In practice the choice can be made with the help of a cost function. Given a program P in the object calculus and an initial state s0, our system evaluates choose(normalize([[P]] s0)). That is, P is first translated, then normalized, and finally an alternative is selected (which is a pair of a value and a state). The choose function simply picks one element of the set of alternatives computed by the translated program. The rules in Figure 1 guarantee that there will always be at least one choice. The only case missing from
the above rules is choosing an alternative from a state monoid comprehension. These are the state monoid
comprehensions that cannot be removed by the normalization algorithm. Efficient implementation of such
comprehensions is very crucial when considering system performance. The default implementation of a
state monoid comprehension is a loop that creates a state transformer (i.e., a function) at each iteration
and then composes these state transformers using the merge function of the state transformation monoid.
This approach is obviously inefficient and we would like to find better algorithms to evaluate state
comprehensions faster. One possible solution is to actually compile these comprehensions into loops with
updates, like the ones found in imperative languages. In particular,
choose( ⊕̄{ λs. e | v ← R } s0 )

is translated into the following loop (in a Pascal-like syntax):

s := s0;                      initialize the state
res := Z⊕;                    initialize the result value
for each v in R do
{   x := choose(e);           retrieve one of the possible value-state pairs
    res := res ⊕ x.fst;       update the result
    s := x.snd;               update the state
};
return (res, s);
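A direct rendering of this compilation scheme (a Python sketch; choose defaults to the first alternative, and e is a callable of our own design that returns the possible value-state pairs for the current element and state):

def run_state_comprehension(zero, merge, e, R, s0,
                            choose=lambda alts: alts[0]):
    s = s0                       # initialize the state
    res = zero                   # initialize the result value
    for v in R:
        x = choose(e(v, s))      # retrieve one of the possible value-state pairs
        res = merge(res, x[0])   # update the result
        s = x[1]                 # update the state
    return res, s

# Summing a list while the state counts the iterations:
print(run_state_comprehension(0, lambda a, b: a + b,
                              lambda v, s: [(v, s + 1)],
                              [1, 2, 3], 0))        # (6, 3)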
For example, as we have previously shown, set cardinality is translated into a state monoid comprehension over the max monoid, which is mapped into the following loop:

res := 0;
for each a in R do
{   res := max(res, lookup(L, ref(n)) + 1);
    L := ext(L, ref(n), lookup(L, ref(n)) + 1);
};
return (res, (L, n));
Even though this loop has the right functionality, it is still inefficient since it manipulates the object
store L at every step of the loop.
The resulting programs can be implemented efficiently if the store is a global array and the object
primitives are programs that directly manipulate the global array. Rules (S15) and (S16) are single-threaded
(since no object creation is undone at any point). Rule (S17) can enforce a single array pointer
by fetching the state first (using lookup) and then by returning the pointer to L. Consequently, there
will always be only one pointer to the store, and therefore, this store can be implemented as a global
store and all updates as in-place (destructive) updates.
Any primitive operation on the object store can be done destructively. More specifically, let Store be the global array mentioned above whose domain of elements is of any type (e.g., Store can be declared as an array of void* in C). The object primitives can be implemented as follows:

ref_T(n) = n
ext_T(s, o, v) = (Store[o] := v; s′)
lookup_T(s, o) = Store[o]

where s′ is the implementation of s and (s1; …; sk; e) evaluates the statements in that order and returns e. For example, lookup_T(ext_T(ext_T(s, x, a), y, b), x) evaluates Store[x] := a and Store[y] := b in that order and then returns Store[x]. Under this implementation, the resulting programs after state transformation can be evaluated as efficiently as real object-oriented programs.
For example, the previous loop that corresponds to set cardinality becomes:

res := 0;
for each a in R do
{   res := max(res, Store[n] + 1);
    Store[n] := Store[n] + 1;
};
return (res, n);

if we use the global array implementation of the object primitives.
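The destructive version is easy to check on a concrete run (a Python sketch; Store is modeled as a dictionary so that any OID can be used as an index):

Store = {}

def cardinality(R, n):
    # The counter object lives at OID n; res accumulates via max,
    # mirroring the loop above.
    Store[n] = 0
    res = 0
    for a in R:
        res = max(res, Store[n] + 1)
        Store[n] = Store[n] + 1
    return res, n

print(cardinality({'a', 'b', 'c'}, 0))   # (3, 0)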
7 Implementation
We have already built a prototype implementation of our framework. The translations of the OODB
queries shown in this paper were generated by this program. The source code is available at:
http://www-cse.uta.edu/~fegaras/oid/
The following examples illustrate the translation of five object queries by our system. The notation
used in these examples is a little different from that used in our theoretical framework. Here block(s1, …, sk, v) executes the statements in sequence and returns the value of v, loop(iterate(x, X), s1, …, sk) executes the statements for each value x of X, access(n) returns the value of Store[n], and update(n, v) evaluates Store[n] := v. Every object query of type T is translated into an expression of type T × (void × int). The T value is the returned value, the void value corresponds to the state which is
ignored, and the int value is the new OID counter (we assume that the value of the OID counter before
the execution of the query is equal to n). For example, if an object query e does not contain any object
operation, the translation would be pair(e,pair(null,n)), where pair constructs a pair of two values and
null is of type void.
assign(res,0),
loop(iterate(e,E),
     assign(res,max(res,plus(access(n),e))),
     assign(s,pair(block(update(n,plus(access(n),e)),null),snd(s)))),

assign(e,struct(bind(name,project(deref(e),name)), …

assign(res,0),
loop(iterate(e,Employees),
     assign(res,plus(res,if(gt(project(access(e),salary),50000),1,0))),
     assign(s,if(gt(project(access(e),salary),50000),
                 bind(salary,times(project(access(e),salary),1.08)))),
     snd(s)),
Figure 2: Denotational Semantics of Database Updates
The first query, new(1), assigns 1 to Store[n] (the store for the next available OID), sets n to n + 1, and
returns the old value of n. The second query executes Store[x] := Store[x] while the third query,
which corresponds to +f !x generates a block that contains both Store[n] := 1
(the old value) and Store[n] := 2 (the new value) that are executed in sequence. The fourth query
generates a state monoid comprehension, which, in turn, is translated into a loop. It returns the sum of
all elements in E. The last query gives an 8% raise to all employees earning over $50000 and returns the
number of employees who got this raise (for simplicity, we assume that each employee has a name and a
salary only).
8 Database Updates and View Maintenance
The monoid state transformer can also be used for expressing destructive database updates in terms
of the basic algebra. That way database updates and queries can be optimized in a single framework.
Let db be the current database state and let DB be its type. Typically, db is the aggregation of all
persistent objects in an application. Following the analysis of the previous section, we want to translate
updates into the monoid algebra in such a way that the propagated state (i.e. the entire database) is
single-threaded. Database updates can be captured using the state transformer
which propagates, and occasionally changes, the entire database. That is, updates are captured as
pure functions that map the existing database into a new database. To make this approach practical,
we can define a set of update primitives to express updates, similar to the ones for object updates.
These primitives, even though they have a pure interpretation, have an efficient implementation. This
approach does not require any significant extension to the formal framework, and normalization and
query optimization can be used as is to improve performance.
Database updates can be expressed using the following comprehension qualifiers [Feg]: qualifier path := u destructively replaces the value stored at path by u, qualifier path += u merges the singleton u with path, and qualifier path -= u deletes all elements of path equal to u. The := qualifier is the only fundamental construct since the other two can be defined as follows:

path += u ≡ path := (U⊕(u) ⊕ path)
path -= u ≡ path := ⊕{ a | a ← path, a ≠ u }
For example, the comprehension

∪{ … | c ← db.cities, c.name = "Arlington", c.hotels += …, c.hotel_num += 1 }

inserts a new hotel in Arlington and increases the total number of hotels.
The denotational semantics of an expression e that may contain updates is [[e]] ρ s, where ρ is a binding list that binds range variables and s is the current database state. Rules (S1) through (S13) in Figure 1 need to be slightly modified to include the binding list ρ: every [[e]] s is mapped into [[e]] ρ s. For example, Rule (S13) in Figure 1 is mapped into the Rule (S13) in Figure 2. Rule (S14) in Figure 1, though, is mapped into the Rule (S14) in Figure 2, which changes ρ to include the binding from v (the range variable) to v1 (the generator domain). The binding list ρ is used in Rule (S15), which gives the semantics of an update qualifier.
Expression P[[path]] ρ v s reconstructs the database state s by copying all its components except the one reached by path, which is replaced by v. It is defined by structural recursion on the type of the state, where path is a possibly empty path (a sequence of projections) and [s/db]ρ(v) replaces all occurrences of db in ρ(v) by s. The second rule of this definition applies when the state is a collection type while the third rule applies when the state is a record. The second rule uses a condition on the current collection element to force the comprehension, which reconstructs the collection value, to replace the element v (bound before the update) by the new value e.
For example, at the point of the update c.hotels += … in the previous example, the binding list ρ binds c into db.cities. In that case, the reconstruction rebuilds db.cities element by element, and its predicate guarantees that only the hotels of the city c (i.e., Arlington) are changed.
One implementation of P[[path]] ρ v s is path := v, which destructively modifies the part of the database
reached by path into v. But we can do better than that. As it was explained in the introduction, our
motivation for using denotational semantics is not to simply give a formal meaning to the destructive
constructs but to use the semantics as the actual translation of these constructs and, more importantly,
to use this translation in query optimization. To do so, we need to define the inverse function, I[[s]],
of P[[path]]. The P[[path]] function is needed to translate updates into programs so that they can be
optimized, while the inverse function, I[[s]], is needed to generate destructive updates after optimization.
Given a reconstruction of a state s, say s′, which is a copy of s except for a number of places where new values v_i are used instead, I[[s]] s′ generates a list of destructive updates of the form path_i := v_i such that the composition of all P[[path_i]] ρ v_i constructs the state s′. The function I[[s]] is defined by structural recursion over the state, using ++ (list concatenation) to combine the updates collected from subcomponents. For example, applied to the reconstruction of the previous update it returns [c.hotels := e], which is the original update. Under this
approach, first, the semantics of a program is given in terms of state transformers and the P[[path]] state
reconstructions are expanded; then, normalization and query optimization take place, which eliminate
unnecessary updates; finally, the reconstructed state is transformed into a number of destructive updates
to the database using I[[e]]. We leave it for a future work to demonstrate that this framework is as
effective as the framework for handling object updates.
Our optimization framework for destructive updates can also be used to handle the view maintenance
problem [GM95, CKL97]. In its general form, a view is a function f from the database state db to
some value domain. A materialized view, v ← f(db), stores the view f into the database component, v.
Recognizing cases where v can be used directly instead of computing the possibly expensive view f in
a query becomes easier after the query is normalized and unnested (since unnesting flattens a query).
When the database is updated, the new database becomes u(db), where u is the functional interpretation
of the update. Thus, the view maintenance problem is equivalent to expressing f(u(db)) in terms of
the materialized view v (this is the view recognition problem mentioned above), and transforming all
the update primitives in f(u(db)) to apply over v instead of db (easily attainable by expressing our
state transformations in terms of the update primitives and normalizing the resulting forms). That
way, f(u(db)) will compute the new materialized view in terms of the old one, v. If we apply the same
techniques we used for database updates, the new materialized view f(u(db)) can be generated efficiently
using destructive updates to v. We are planning to show in a future research that this framework not
only requires minimal extensions to our basic framework, but is practical and effective as well.
9 Conclusion
We have presented a formal framework for handling object updates during OODB query optimization.
Even though this framework was applied to the monoid comprehension calculus, it can be adapted to
work with any optimization framework because many types of object manipulation constructs can be
expressed in terms of the basic language constructs by using denotational semantics. Consequently,
query optimization applicable to the basic language constructs can be used with minimal changes to
remove inefficiencies due to the compositional way of translating programs in denotational semantics. If,
in addition, we implement the object store primitives using side effects, the resulting programs can be
evaluated as efficiently as programs written by hand.
Acknowledgements: The author is grateful to David Maier for helpful comments on the paper. This work is supported in part by the National Science Foundation under grants IRI-9509955 and IIS-9811525.
--R
Rationale for the Design of Persistence and Query Processing Facilities in the Database Programming Language O++
Comprehension Syntax.
The Object Database Standard: ODMG-93
A General Framework for the Optimization of Object-Oriented Queries
Efficient Evaluation of Aggregates on Bulk Types.
Nested Queries in Object Bases.
A Recursive Algebra and Query Optimization for Nested Relations.
Supporting Multiple View Maintenance Policies.
Query Optimization in Revelation
A Uniform Calculus for Collection Types.
An Experimental Optimizer for OQL.
Query Unnesting in Object-Oriented Databases
An Algebraic Framework for Physical OODB Design.
Towards an Effective Calculus for Object Query Languages.
Optimizing Object Queries Using an Effective Calculus.
Maintenance of Materialized Views: Problems
Advanced Query Processing in Object Bases Using Access Support Relations.
A Transformation-Based approach to Optimizing Loops in Database Programming Languages
Representing Object Identity in a Pure Functional Language.
A Keying Method for a Nested Relational Database Management System.
ML for the working programmer.
Peyton Jones.
Elements of Functional Programming.
Detecting Global Variables in Denotational Specifications.
Extending Functional Database Languages to Update Completeness.
Comprehending Monads.
The Essence of Functional Programming.
--TR
Detecting global variables in denotational specifications
A recursive algebra and query optimization for nested relations
Comprehending monads
Advanced query processing in object bases using access support relations
Rationale for the design of persistence and query processing facilities in the database programming language O++
Query optimization in revelation, an overview
Representing object identity in a pure functional language
ML for the working programmer
Elements of functional programming
A transformation-based approach to optimizing loops in database programming languages
A general framework for the optimization of object-oriented queries
The essence of functional programming
Comprehension syntax
Towards an effective calculus for object query languages
Supporting multiple view maintenance policies
Query unnesting in object-oriented databases
Optimizing object queries using an effective calculus
A Keying Method for a Nested Relational Database Management System
Extending Functional Database Languages to Update Completeness
An Algebraic Framework for Physical OODB Design
--CTR
Hiroaki Nakamura, Incremental computation of complex object queries, ACM SIGPLAN Notices, v.36 n.11, p.156-165, 11/01/2001
G. M. Bierman, Formal semantics and analysis of object queries, Proceedings of the ACM SIGMOD international conference on Management of data, June 09-12, 2003, San Diego, California
Leonidas Fegaras , David Maier, Optimizing object queries using an effective calculus, ACM Transactions on Database Systems (TODS), v.25 n.4, p.457-516, Dec. 2000 | query optimization;monoid comprehensions;denotational semantics;object identity |
325884 | Using Model Trees for Classification. | Model trees, which are a type of decision tree with linear regression functions at the leaves, form the basis of a recent successful technique for predicting continuous numeric values. They can be applied to classification problems by employing a standard method of transforming a classification problem into a problem of function approximation. Surprisingly, using this simple transformation the model tree inducer M5′, based on Quinlan's M5, generates more accurate classifiers than the state-of-the-art decision tree learner C5.0, particularly when most of the attributes are numeric. | Introduction
Many applications of machine learning in practice involve predicting a "class" that
takes on a continuous numeric value, and the technique of model tree induction has
proved successful in addressing such problems (Quinlan, 1992; Wang and Witten,
1997). Structurally, a model tree takes the form of a decision tree with linear
regression functions instead of terminal class values at its leaves. Numerically-
valued attributes play a natural role in these regression functions, while discrete
attributes can also be handled-though in a less natural way. This is the converse
of the classical decision-tree situation for classification, where discrete attributes
play a natural role. Prompted by the symmetry of this situation, we wondered
whether model trees could be used for classification. We have discovered that they
can be turned into classifiers that are surprisingly accurate.
In order to apply the continuous-prediction technique of model trees to discrete
classification problems, we consider the conditional class probability function and
seek a model-tree approximation to it. During classification, the class whose model
tree generates the greatest approximated probability value is chosen as the predicted
class.
The results presented in this paper show that a model tree inducer can be used
to generate classifiers that are significantly more accurate than the decision trees
produced by C5.0. 1 The next section explains the method we use and reviews the
features that are responsible for its good performance. Experimental results for
thirty-three standard datasets are reported in Section 3. Section 4 briefly reviews
related work. Section 5 summarizes the results.
2. Applying model trees to classification
Model trees are binary decision trees with linear regression functions at the leaf
nodes: thus they can represent any piecewise linear approximation to an unknown
function. A model tree is generated in two stages. The first builds an ordinary
decision tree, using as splitting criterion the maximization of the intra-subset variation
of the target value. The second prunes this tree back by replacing subtrees
with linear regression functions wherever this seems appropriate. Whenever the
model is used for prediction a smoothing process is invoked to compensate for the
sharp discontinuities that will inevitably occur between adjacent linear models at
the leaves of the pruned tree. Although the original formulation of model trees had
linear models at internal nodes that were used during the smoothing process, these
can be incorporated into the leaf models in the manner described below.
In this section we first describe salient aspects of the model tree algorithm. Then
we describe the procedure, new to this paper, by which model trees are used for
classification. Some justification for this procedure is given in the next subsection,
following which we give an example of the inferred class probabilities in an artificial
situation in which the true probabilities are known.
2.1. Model-tree algorithm
The construction and use of model trees is clearly described in Quinlan's (1992)
account of the M5 scheme. An implementation called M5′ is described by Wang and Witten (1997) along with further implementation details. The freely available version² of M5′ we used for this paper differs from that described by Wang and Witten (1997) only in its improved handling of missing values, which we describe in the appendix.³ There were no other changes, and no tuning of parameters.
It is necessary to elaborate briefly on two key aspects of model trees that will
surface during the discussion of experimental results in Section 3. The first, which
is central to the idea of model trees, is the linear regression step that is performed
at the leaves of the pruned tree. The variables involved in the regression are the
attributes that participated in decisions at nodes of the subtree that has been
pruned away. If this step is omitted and the target is taken to be the average target
value of training examples that reach this leaf, then the tree is called a "regression
tree" instead.
The second aspect is the smoothing procedure that, in the original formulation,
occurred whenever the model was used for prediction. The idea is first to use the
leaf model to compute the predicted value, and then to filter that value along the
path back to the root, smoothing it at each node by combining it with the value
predicted by the linear model for that node. Quinlan's (1992) calculation is
USING MODEL TREES FOR CLASSIFICATION 3
is the prediction passed up to the next higher node, p is the prediction
passed to this node from below, q is the value predicted by the model at this node,
n is the number of training instances that reach the node below, and k is a constant.
Quinlan's default value of used in all experiments below.
Our implementation achieves exactly the same effect using a slightly different
representation. As a final stage of model formation we create a new linear model
at each leaf that combines the linear models along the path back to the root, so
that the leaf models automatically create smoothed predictions without any need
for further adjustment when predictions are made. For example, suppose the model
at a leaf involved two attributes x and y, with linear coefficients a and b; and the
model at the parent node involved two attributes y and z:
We combine these two models into a single one using the above formula:
z: (3)
Continuing in this way up to the root gives us a single, smoothed linear model
which we install at the leaf and use for prediction thereafter.
Smoothing substantially enhances the performance of model trees, and it turns
out that this applies equally to their application to classification.
2.2. Procedure
Figure
1 shows in diagrammatic form how a model tree builder is used for classifi-
cation; the data is taken from the well-known Iris dataset. The upper part depicts
the training process and the lower part the testing process.
Training starts by deriving several new data sets from the original dataset, one
for each possible value of the class. In this case there are three derived datasets,
for the Setosa, Virginica and Versicolor varieties of Iris. Each derived dataset
contains the same number of instances as the original, with the class value set to 1
or 0 depending on whether that instance has the appropriate class or not. In the
next step the model tree inducer is employed to generate a model tree for each of
the new datasets. For a specific instance, the output of one of these model trees
constitutes an approximation to the probability that this instance belongs to the
associated class. Since the output values of the model trees are only approximations,
they do not necessarily sum to one.
In the testing process, an instance of unknown class is processed by each of the
model trees, the result of each being an approximation to the probability that it
belongs to that class. The class whose model tree gives the highest value is chosen
as the predicted class.
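The same recipe works with any numeric-prediction scheme standing in for M5′. The sketch below (Python with scikit-learn, assumed to be available; a plain regression tree replaces the model tree, so there are no leaf regressions and no smoothing) builds one tree per class on 0/1 targets and predicts by the largest output:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeRegressor

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# Training: one derived 0/1-target dataset per class, one tree per class.
trees = []
for c in classes:
    t = DecisionTreeRegressor(min_samples_leaf=10)
    t.fit(X, (y == c).astype(float))
    trees.append(t)

# Testing: predict a membership score per class, take the argmax.
scores = np.column_stack([t.predict(X) for t in trees])
predicted = classes[np.argmax(scores, axis=1)]
print((predicted == y).mean())   # resubstitution accuracy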
Figure 1. How M5′ is used for classification. (Training: the Iris data is split into one derived 0/1-target dataset per class, A: Setosa, B: Virginica, C: Versicolor, and a model tree is built for each. Testing: an unseen instance is fed to all three model trees, e.g. f_A(5.7, 3.0, …) = 0.05, f_B(5.7, 3.0, …) = 0.07, f_C(5.7, 3.0, …) = 0.93, and the class with the highest output, here Versicolor, is predicted.)
Figure 2. Example use of model trees for classification: (a) class probabilities for data generation; (b) the training dataset; (c) inferred class probabilities.
2.3. Justification
The learning procedure of M5′ effectively divides the instance space into regions
using a decision tree, and strives to minimize the expected mean squared error between
the model tree's output and the target values of zero and one for the training
instances within each particular region. The training instances that lie in a particular
region can be viewed as samples from an underlying probability distribution
that assigns class values of zero and one to instances within that region. It is standard
procedure in statistics to estimate a probability distribution by minimizing
the mean square error of samples taken from it (Devroye, Gyoerfi and Lugosi, 1996;
Breiman, Friedman, Olshen and Stone, 1984).
2.4. Example
Consider a two-class problem in which the true class probabilities are linear functions
of two attributes x and y, p[class | x, y], as depicted in Figure 2a, summing
to 1 at each point. A dataset with 600 instances is generated randomly according
to these probabilities. To do this, uniformly distributed (x; y) values are chosen
and the probability at that (x; y) value is used to determine whether the instance
should be assigned to the first or the second class. The data generated is depicted
in Figure 2b, where the classes are represented by filled and hollow circles. It is
apparent that the density of filled circles is greatest at the lower left corner and
decreases towards the upper right corner; the converse is true for hollow circles.
When the data of Figure 2b is submitted to M5' it generates two model trees.
In this case the structure of the trees generated is trivial: they each consist of a
single node, the root. Figure 2c shows the linear functions f[class|x, y] represented
by the trees. As the above discussion intimates, they are excellent approximations
to the original class probabilities from which the data was generated.
The class boundary is the point of intersection of the two planes in Figure 2c, and
as this example illustrates, classifiers based on model trees are able to represent
oblique class boundaries. This is one reason why model trees produced by M5'
outperform the univariate decision trees produced by C5.0. Another is that M5'
smooths between regression functions at adjacent leaves of the model tree.
3. Experimental results
Our experiments are designed to explore the application of model trees to classification
by comparing their results with decision tree induction and linear regression,
and determining which of their components are essential for good performance.
Specifically, we address the following questions:
1. How do classifiers based on model trees compare to state-of-the-art decision
trees, and to classifiers based on simple linear regression?
2. How important are (a) the linear regression process at the leaves, and (b) the
smoothing process?
To answer the first question, we compare the accuracy of classifiers based on the
smoothed model trees generated by M5' with the pruned decision trees generated
by C5.0: we will see that M5' often performs better. However, the performance
improvement might conceivably be due to other aspects of the procedure: M5'
converts a nominal attribute with n attribute values into n-1 binary attributes
using the procedure employed by CART (Breiman et al., 1984), and it generates one
model tree for each class. To test this we ran C5.0 using exactly the same encodings,
transforming each nominal attribute into binary ones using the procedure employed
by M5' and generating one dataset for each class, and then building a decision tree
for each dataset and using the class probabilities provided by C5.0 to arbitrate
between the classes. We refer to the resulting algorithm as C5.0'. We also report
results for linear regression (LR) using the same input/output encoding.
To investigate the second question, we first compare the accuracy of classifiers
based on model trees that are generated by M5' with ones based on smoothed
regression trees (SRT). As noted above, regression trees are model trees with constant
functions at the leaf nodes; thus they cannot represent oblique class boundaries.
We apply the same smoothing operation to them as M5' routinely applies to
model trees. Then we compare the accuracy of classifiers based on the (smoothed)
model trees of M5' with those based on unsmoothed model trees (UMT). Because
Table 1. Datasets used for the experiments
Dataset | Instances | Missing values (%) | Numeric attributes | Binary attributes | Nominal attributes | Classes
balance-scale 625
breast-w
glass (G2) 163
glass 214
heart-statlog 270
hepatitis
ionosphere
iris 150
letter 20000
pima-indians 768
segment 2310
sonar 208
vehicle 846
vote 435 5.6 0
waveform-noise 5000
anneal
audiology 226 2.0 0 61 8 24
australian
autos 205 1.1 15 4 6 6
breast-cancer 286 0.3
horse-colic
hypothyroid
german 1000
labor 57 3.9 8 3 5 2
lymphography 148
primary-tumor
sick 3772 5.5 7
soybean 683 9.8 0
vowel
a smoothed regression tree is a special case of a smoothed model tree, and an unsmoothed
tree is a special case of a smoothed tree, only very minor modifications
to the code for M5' are needed to generate SRT and UMT models.
3.1. Experiments
Thirty-three standard datasets from the UCI collection (Merz and Murphy, 1996)
were used in the experiments; they are summarized in Table 1. The first sixteen
involve only numeric and binary attributes; the last seventeen involve non-binary
nominal attributes as well.⁴ Since linear regression functions were designed for
numerically-valued domains, and binary attributes are a special case of numeric
attributes, we expect classifiers based on smoothed model trees to be particularly
appropriate for the first group.
Table 2 summarizes the accuracy of all methods investigated. Results give the percentage
of correct classifications, averaged over ten ten-fold (non-stratified) cross-validation
runs; standard deviations over the ten runs are also shown. The same folds
were used for each scheme. Results for C5.0 are starred if they show significant
improvement over the corresponding result for M5', and vice versa. Throughout,
we speak of results being "significantly different" if the difference is statistically
significant at the 1% level according to a paired two-sided t-test, each pair of data
points consisting of the estimates obtained in one ten-fold cross-validation run for
the two learning schemes being compared.
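A sketch of this significance test, using scipy's paired two-sided t-test on hypothetical per-run accuracies (the numbers below are illustrative, not taken from Table 2):

```python
from scipy import stats

# Accuracies of two schemes over ten 10-fold CV runs on the same folds.
acc_a = [77.6, 77.9, 77.1, 78.0, 77.5, 77.8, 77.2, 77.7, 77.4, 77.8]  # hypothetical
acc_b = [86.4, 86.1, 86.8, 86.3, 86.5, 86.2, 86.7, 86.4, 86.0, 86.6]  # hypothetical

t, p = stats.ttest_rel(acc_a, acc_b)  # paired two-sided t-test
print(f"t = {t:.2f}, p = {p:.4f}, significant at 1% level: {p < 0.01}")
```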
Table 3 shows how the different methods compare with each other. Each entry
indicates the number of datasets for which the method associated with its column
was significantly more accurate than the method associated with its row.
3.2. Discussion of results
To answer the first question above, we observe from Table 2 that M5 0 outperforms
C5.0 in fifteen datasets, whereas C5.0 outperforms M5 0 in five. (These numbers
also appear, in boldface, in Table 3.) Of the sixteen datasets having numeric and
binary attributes, M5 0 is significantly more accurate on nine and significantly less
accurate on none; on the remaining datasets it is significantly more accurate on six
and significantly less accurate on five. These results show that classifiers based on
the smoothed model trees generated by M5 0 are significantly more accurate than the
pruned decision trees generated by C5.0 on the majority of datasets, particularly
those with numeric attributes.
Table 3 shows that C5.0' is significantly less accurate than C5.0 on twelve
datasets (first column, last row) and significantly more accurate on five (first row,
last column). It is significantly less accurate than M5' on seventeen datasets and
significantly more accurate on three. These results show that the superior performance
of M5' is not due to the change in input/output encoding.
We complete our discussion of the first question by comparing simple linear regression
(LR) to M5' and C5.0. Table 3 shows that LR performs significantly worse
than M5' on seventeen datasets and significantly worse than C5.0 on eighteen. LR
outperforms M5' on eleven datasets and C5.0 on fourteen. These results for linear
regression are surprisingly good. However, on some of the datasets the application
of linear regression leads to disastrous results, and so one cannot recommend it
as a general technique.
To answer the second of the above two questions, we begin by comparing the
accuracy of classifiers based on M5' with ones based on smoothed regression trees
(SRT) to assess the importance of the linear regression process at the leaves (which
the former incorporates but the latter does not). Table 3 shows that M5' produces
significantly more accurate classifiers on twenty-three datasets and significantly less
accurate ones on only two. Compared to C5.0's pruned decision trees, classifiers
based on smoothed regression trees are significantly less accurate on fifteen datasets
Table 2. Experimental results: percentage of correct classifications, and standard deviation
Dataset | C5.0 | M5' | LR | SRT | UMT | C5.0'
balance-scale 77.6±1.0 86.4±0.7* 86.7±0.3 75.3±1.1 78.8±0.9 78.9±0.7
breast-w 94.5±0.3 95.3±0.3* 95.8±0.1 94.3±0.5 94.2±0.4 94.5±0.3
glass (G2) 78.7±2.1 81.8±2.2 70.4±0.4 75.5±1.7 79.3±2.3 78.8±2.2
glass 67.5±2.6 70.5±2.8 60.0±1.3 67.6±1.6 67.8±2.7 70.0±2.0
heart-statlog 78.7±1.4 82.2±1.0* 83.7±0.4 79.9±1.8 78.4±1.5 78.6±1.4
hepatitis 79.3±1.2 81.9±2.2* 85.6±1.5 79.6±1.5 78.8±3.0 79.7±1.1
ionosphere 88.9±1.6 89.7±1.2 86.6±0.5 88.2±0.7 87.3±1.0 88.9±1.6
iris 94.5±0.7 94.7±0.7 82.7±0.9 94.0±1.0 93.9±0.8 94.7±0.7
letter 88.0±0.2 90.3±0.1* 55.5±0.1 86.3±0.2 86.7±0.1 87.5±0.1
pima-indians 74.5±1.2 76.2±0.8* 77.2±0.5 75.7±1.0 72.0±0.7 74.5±1.2
segment 96.8±0.2 97.0±0.2 84.5±0.1 96.2±0.2 95.9±0.3 95.7±0.2
sonar 74.7±2.8 78.5±3.4* 75.6±1.8 78.0±2.4 75.8±2.7 74.7±2.8
vehicle 72.9±1.2 76.5±1.3* 75.7±0.5 70.9±1.2 69.3±1.2 72.0±1.0
vote 96.3±0.6 96.2±0.3 95.6±0.0 95.6±0.0 95.9±0.5 96.4±0.5
waveform-noise 75.4±0.5 82.0±0.2* 85.9±0.2 80.3±0.3 72.3±0.4 75.2±0.5
zoo 91.8±1.1 92.1±1.3 94.2±1.8 89.3±1.5 90.5±1.3 89.1±1.4
anneal 98.7±0.3 98.8±0.2 93.1±0.2 97.3±0.1 98.5±0.2 99.0±0.2
audiology 76.5±1.4 76.7±1.0 68.6±1.6 67.9±1.2 76.8±1.8 73.9±0.9
australian 85.3±0.5 85.8±0.9 51.1±3.6 85.7±0.7 82.8±0.9 83.8±1.1
autos 80.0±2.5* 74.4±1.9 59.0±1.5 70.0±2.2 71.7±1.8 75.6±1.7
breast-cancer 73.3±1.6* 69.6±2.3 70.0±1.5 72.9±1.0 67.5±2.4 68.8±1.7
heart-c 76.8±1.4 80.9±1.4* 85.0±0.4 79.7±1.6 76.3±1.3 78.8±1.6
heart-h 79.8±0.9 79.0±0.8 81.9±1.0 79.2±1.1 76.9±1.5 77.5±1.3
horse-colic 85.3±0.6 84.6±0.7 82.7±0.7 84.5±0.9 83.4±1.5 84.5±0.6
hypothyroid 99.5±0.0* 96.6±0.1 90.9±3.1 95.6±0.1 96.2±0.2 99.4±0.1
german 71.2±1.0 72.9±0.7* 75.4±0.6 74.1±0.9 69.9±0.8 71.6±1.4
kr-vs-kp 99.5±0.1* 99.4±0.1 94.0±0.1 98.5±0.1 99.3±0.1 99.4±0.1
labor 78.1±4.8 79.7±4.6 87.4±6.1 71.4±3.6 77.9±3.6 76.8±4.5
lymphography 75.4±2.8 79.8±1.4* 83.6±1.3 76.1±1.6 77.5±2.9 75.9±2.2
primary-tumor 41.8±1.3 45.1±1.6* 47.2±0.9 45.1±1.3 41.4±1.2 40.3±2.1
sick 98.8±0.1* 98.3±0.1 92.3±2.5 98.2±0.0 98.6±0.1 98.9±0.1
soybean 91.3±0.5 92.5±0.5* 87.3±0.6 88.4±0.5 91.3±0.5 92.3±0.5
vowel 79.8±1.3 81.7±1.1* 43.1±1.0 73.9±1.5 78.3±0.8 78.1±1.0
and significantly more accurate on five. These results show that linear regression
functions at leaf nodes are essential for classifiers based on smoothed model trees
to outperform ordinary decision trees.
Finally, to complete the second question, we compare the accuracy of classifiers
based on M5' with classifiers based on unsmoothed model trees (UMT). Table 3
shows that M5' produces significantly more accurate classifiers on twenty-five
datasets and significantly less accurate classifiers on only one. Comparison with
C5.0's pruned decision trees also leads to the conclusion that the smoothing process
is necessary to ensure high accuracy of model-tree based classifiers.
Table 3. Results of paired t-tests (p = 0.01): each entry indicates how often the method in its column significantly outperforms the method in its row.
4. Related work
Neural networks are an obvious alternative to model trees for classification tasks.
When applying neural networks to classification it is standard procedure to approximate
the conditional class probability functions. Each output node of a neural
network approximates the probability function of one class. In contrast to neural
networks where the probability functions for all classes are approximated by a single
network, with model trees it is necessary to build a separate tree for each class.
Model trees offer an advantage over neural networks in that the user does not have
to make guesses about their structure and size to obtain accurate results. They
can be built fully automatically and much more efficiently than neural networks.
Moreover, they offer opportunities for structural analysis of the approximated class
probability functions, whereas neural networks are completely opaque.
The idea of treating a multi-class problem as several two-way classification problems,
one for each possible value of the class, has been applied to standard
decision trees by Dietterich and Bakiri (1995). They used C4.5 (Quinlan, 1993),
the predecessor of C5.0, to generate a two-way classification tree for each class.
However, they found that the accuracy obtained was significantly inferior to the
direct application of C4.5 to the original multi-class problem-although they were
able to obtain better results by using an error-correcting output code instead of the
simple one-per-class code.
Smyth, Gray and Fayyad (1995) retrofitted a decision tree classifier with kernel
density estimators at the leaves in order to obtain better estimates of the class
probability functions. Although this did improve the accuracy of the class probability
estimates on three artificial datasets, the classification accuracies were not
significantly better. Moreover, the resulting structure is opaque because it includes
a kernel function for every training instance. Torgo (1997) also investigated fitting
trees with kernel estimators at the leaves, this time regression trees rather than
classification trees. These could be applied to classification problems in the same
manner as model trees, and have the advantage of being able to represent non-linear
class boundaries rather than the linear, oblique, class boundaries of model trees.
However, they suffer from the incomprehensibility of all models that employ kernel
estimators. An important difference between both Smyth et al. (1995) and Torgo
(1997), and the M5 model tree algorithm, is that M5 smooths between the models
at adjacent leaves of the model tree. This substantially improves the performance
of model trees in classification problems, as we saw.
Also closely related to our method are linear regression and other methods for
finding linear discriminants. On comparing our experimental results with those
obtained by ordinary linear regression, we find that although for many datasets
linear regression performs very well, in several other cases it gives disastrous results
because linear models are simply not appropriate.
5. Conclusions
This work has shown that when classification problems are transformed into problems
of function approximation in a standard way, they can be successfully solved
by constructing model trees to produce an approximation to the conditional class
probability function of each individual class. The classifiers so derived outperform
a state-of-the-art decision tree learner on problems with numeric and binary
attributes, and, more often than not, on problems with multivalued nominal attributes
too.
Although the resulting classifiers are less comprehensible than decision trees, they
are not as opaque as those produced by statistical kernel density approximators.
The expected time taken to build a model tree is log-linear in the number of instances
and cubic in the number of attributes. Thus model trees for each class can
be built efficiently if the dataset has a modest number of attributes.
Acknowledgments
The Waikato Machine Learning group, supported by the New Zealand Foundation
for Research, Science, and Technology, has provided a stimulating environment for
this research. We thank the anonymous referees for their helpful and constructive
comments. M. Zwitter and M. Soklic donated the lymphography and the primary-
tumor dataset.
Appendix
Treatment of missing values
We now explain how instances with missing values are treated in the version of
M5' used for the results in this paper. During testing, whenever the decision tree
calls for a test on an attribute whose value is unknown, the instance is propagated
down both paths and the results are combined linearly in the standard way (as in
Quinlan, 1993). The problem is how to deal with missing values during training.
To tackle this problem, Breiman et al. (1984) describe a "surrogate split" method
in which, whenever a split on value v of attribute s is being considered and a
particular instance has a missing value of s, a different attribute s* is used as a surrogate
to split on instead, at an appropriately chosen value v*; that is, the test s < v is
replaced by s* < v*. The attribute s* and value v* are selected to maximize the
probability that the latter test has the same effect as the former.
For the work described in this paper, we have made two alterations to the procedure.
The first is a simplification. Breiman's original procedure is as follows.
Let S be the set of training instances at the node whose values for the splitting
attribute s are known. Let L be that subset of S which the split s < v assigns to
the left branch, and R be the corresponding subset for the right branch. Define
L* and R* in the same way for the surrogate split s* < v*. Then the number of
instances in S that are correctly assigned to the left subnode by the surrogate split
is |L ∩ L*|, and |R ∩ R*| is the corresponding number for the right
subnode. The probability that s* < v* predicts s < v correctly can be estimated
as (|L ∩ L*| + |R ∩ R*|)/|S|, and v* is chosen so that the surrogate split s* < v* maximizes this
estimate. Whereas Breiman chooses the attribute s* and value v* to maximize this
estimate, our simplification is to always choose the surrogate attribute s* to be the
class (but to continue to select the optimal value v* as described). This stratagem
was reported in Wang and Witten (1997).
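A minimal sketch of this simplified surrogate selection, assuming numeric class values and an exhaustive search over candidate thresholds (the function name and candidate set are illustrative):

```python
import numpy as np

def class_surrogate_threshold(s_values, y, v):
    """Choose v* so that the test (class < v*) best reproduces the real
    split (s < v), measured on instances whose value of s is known.
    Returns (v_star, estimated probability of agreement)."""
    left = s_values < v                        # outcome of the real split
    candidates = np.append(np.unique(y), y.max() + 1.0)
    best_v, best_p = None, -1.0
    for v_star in candidates:                  # thresholds on the class value
        agree = np.mean((y < v_star) == left)  # (|L∩L*| + |R∩R*|) / |S|
        if agree > best_p:
            best_v, best_p = v_star, agree
    return best_v, best_p
```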
The second difference is to blur the sharp distinctions made by Breiman's procedure.
According to the original procedure, a (training) instance whose value for
attribute s is missing is assigned to the left or right subnode according to whether
s* < v* or not. This produces a sharp step-function discontinuity, which is inappropriate
in cases when s* < v* is a poor predictor of s < v. Our modification,
which is employed by the version of M5' used in the present paper, is to soften the
decision by making it stochastic according to the probability curve illustrated in
Figure A.1. The steepness of the transition is determined by the likelihood of the
test s* < v* assigning an instance to the incorrect subnode, and this is assessed by
considering the training instances for which the value of attribute s is known.
First we estimate the probability p_r that s* < v* assigns an instance with a
missing value of s to the right subnode; the probability of it being assigned to
the left node is just 1 - p_r. The probability p_il that an instance is incorrectly
assigned to the left subnode by s* < v*, and the probability p_cr that it is correctly
assigned to the right subnode, are likewise estimated from the training instances
whose value of s is known. Let m_l be the mean class value over the instances
in L*, and m_r the corresponding value for R*. We estimate p_r by a sigmoid
model of the form
p_r(x) = 1/(1 + e^(a+bx)),
where x is the class value, and a, b are chosen to make the curve pass through
the points (m_l, p_il) and (m_r, p_cr), as shown in Figure A.1. This has the desired
effect of approximating a sharp step function if s* < v* is a good predictor of
s < v, which is when p_il ≈ 0 and p_cr ≈ 1, or when the decision is unimportant,
which is when m_l ≈ m_r. However, if the prediction is unreliable (that is, when
p_il is significantly greater than 0 or p_cr is significantly less than 1), the decision is
softened, particularly if it is important, that is, when m_l and m_r differ appreciably.
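Assuming the two-parameter sigmoid form given above (an assumption consistent with the shape shown in Figure A.1), the curve through the two prescribed points can be found in closed form; the following sketch, with hypothetical inputs, inverts the sigmoid at each point and solves for a and b:

```python
import numpy as np

def fit_soft_step(m_l, p_il, m_r, p_cr):
    """Fit p_r(x) = 1 / (1 + exp(a + b*x)) through (m_l, p_il) and
    (m_r, p_cr). Assumes 0 < p < 1 at both points and m_l != m_r."""
    g = lambda p: np.log(1.0 / p - 1.0)        # inverse of the sigmoid
    b = (g(p_il) - g(p_cr)) / (m_l - m_r)
    a = g(p_il) - b * m_l
    return lambda x: 1.0 / (1.0 + np.exp(a + b * x))

p_r = fit_soft_step(m_l=0.2, p_il=0.05, m_r=0.8, p_cr=0.95)  # hypothetical values
print(p_r(0.2), p_r(0.8))   # recovers 0.05 and 0.95
```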
During training, an instance is stochastically assigned to the right subnode with
probability p_r.
[Figure A.1. How the soft step function model is fitted to the training data.]
During testing, surrogate splitting cannot be used because the class
value is, of course, unavailable. Instead an instance is propagated to both left
and right subnodes, and the resulting outcomes are combined linearly using the
weighting scheme described in Quinlan (1993): the left outcome is weighted by the
proportion of training instances assigned to the left subnode, and the right outcome
by the proportion assigned to the right subnode.
Notes
1. C5.0 is the successor of C4.5 (Quinlan, 1993). Although a commercial product, a test version
is available from http://www.rulequest.com.
2. See http://www.cs.waikato.ac.nz/~ml
3. For a realistic evaluation on standard datasets it is imperative that missing values are accommodated.
If we removed instances with missing values, half the datasets in the lower part of
Table 1 would have too few instances to be usable.
4. Following Holte (1993), the G2 variant of the glass dataset has classes 1 and 3 combined and
classes 4 to 7 deleted, and the horse-colic dataset has attributes 3, 25, 26, 27, 28 deleted with
attribute 24 being used as the class. We also deleted all identifier attributes in the datasets.
--R
Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and regression trees. Belmont, CA: Wadsworth.
Devroye, L., Györfi, L., & Lugosi, G. (1996). A probabilistic theory of pattern recognition. New York: Springer.
Dietterich, T. G., & Bakiri, G. (1995). "Solving multiclass learning problems via error-correcting output codes," Journal of Artificial Intelligence Research, 2, 263-286.
Holte, R. C. (1993). "Very simple classification rules perform well on most commonly used datasets," Machine Learning, 11, 63-91.
Merz, C. J., & Murphy, P. M. (1996). UCI repository of machine learning databases [http://www.ics.uci.edu/~mlearn/]. Irvine, CA: University of California.
Quinlan, J. R. (1992). "Learning with continuous classes," Proceedings of the Fifth Australian Joint Conference on Artificial Intelligence.
Smyth, P., Gray, A., & Fayyad, U. M. (1995). "Retrofitting decision tree classifiers using kernel density estimation," Proceedings of the Twelfth International Conference on Machine Learning. San Francisco, CA: Morgan Kaufmann.
Torgo, L. (1997). "Kernel regression trees," Proceedings of the Ninth European Conference on Machine Learning.
Wang, Y., & Witten, I. H. (1997). "Induction of model trees for predicting continuous classes," Proceedings of the Poster Papers of the European Conference on Machine Learning.
--TR
C4.5: programs for machine learning
Very Simple Classification Rules Perform Well on Most Commonly Used Datasets
--CTR
Niels Landwehr , Mark Hall , Eibe Frank, Logistic Model Trees, Machine Learning, v.59 n.1-2, p.161-205, May 2005
Donato Malerba , Floriana Esposito , Michelangelo Ceci , Annalisa Appice, Top-Down Induction of Model Trees with Regression and Splitting Nodes, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.5, p.612-625, May 2004
Rudy Setiono, Feedforward Neural Network Construction Using Cross Validation, Neural Computation, v.13 n.12, p.2865-2877, December 2001
V. Zorkadis , D. A. Karras , M. Panayotou, Efficient information theoretic strategies for classifier combination, feature extraction and performance evaluation in improving false positives and false negatives for spam e-mail filtering, Neural Networks, v.18 n.5-6, p.799-807, June 2005
Ronny Kohavi , J. Ross Quinlan, Data mining tasks and methods: Classification: decision-tree discovery, Handbook of data mining and knowledge discovery, Oxford University Press, Inc., New York, NY, 2002
Duncan Potts, Incremental learning of linear model trees, Proceedings of the twenty-first international conference on Machine learning, p.84, July 04-08, 2004, Banff, Alberta, Canada
Duncan Potts , Claude Sammut, Incremental Learning of Linear Model Trees, Machine Learning, v.61 n.1-3, p.5-48, November 2005
Saso Deroski , Bernard enko, Is Combining Classifiers with Stacking Better than Selecting the Best One?, Machine Learning, v.54 n.3, p.255-273, March 2004
Eibe Frank , Leonard Trigg , Geoffrey Holmes , Ian H. Witten, Technical Note: Naive Bayes for Regression, Machine Learning, v.41 n.1, p.5-25, Oct. 2000
Joo Gama, Functional Trees, Machine Learning, v.55 n.3, p.219-250, June 2004
D. P. Solomatine , M. B. Siek, Modular learning models in forecasting natural phenomena, Neural Networks, v.19 n.2, p.215-224, March 2006
M. T. Musavi , H. Ressom , S. Srirangam , P. Natarajan , R. W. Virnstein , L. J. Morris , W. Tweedale, Neural network-based light attenuation model for monitoring seagrass population in the Indian river lagoon, Journal of Intelligent Information Systems, v.29 n.1, p.63-77, August 2007 | <math coding=latex type=inline>M5;<math coding=latex type=inline>C50;classification algorithms;decision trees;model trees |
325958 | Hybrid Gauss-Trapezoidal Quadrature Rules. | A new class of quadrature rules for the integration of both regular and singular functions is constructed and analyzed. For each rule the quadrature weights are positive and the class includes rules of arbitrarily high-order convergence. The quadratures result from alterations to the trapezoidal rule, in which a small number of nodes and weights at the ends of the integration interval are replaced. The new nodes and weights are determined so that the asymptotic expansion of the resulting rule, provided by a generalization of the Euler--Maclaurin summation formula, has a prescribed number of vanishing terms. The superior performance of the rules is demonstrated with numerical examples and application to several problems is discussed. | r
2.2. Euler-Maclaurin summation formula. For a function $f \in C^p(\mathbb{R})$, $p \geq 1$, the
Euler-Maclaurin summation formula (see, for example, [15, (23.1.30)]) can be
derived by repeated integration by parts. We first consider the interval $[c, c+h]$ and
apply (5) to obtain the formula for a single panel. To derive the Euler-Maclaurin
formula for the interval $[a, b]$, we let $h = (b-a)/n$, sum over the panels, and
rearrange terms to obtain
$$h\Bigl(\tfrac{1}{2}f(a) + f(a+h) + \cdots + f(b-h) + \tfrac{1}{2}f(b)\Bigr) - \int_a^b f(x)\,dx
= \sum_{r=1}^{p-1} \frac{h^{r+1} B_{r+1}}{(r+1)!}\bigl(f^{(r)}(b) - f^{(r)}(a)\bigr) + O(h^p). \quad (8)$$
The expression on the left-hand side of (8) is the well-known trapezoidal rule. Evaluation
of the expression on the right-hand side of (8) is simplified by the fact that
the Bernoulli numbers of odd index, $B_3, B_5, \ldots$, all vanish.
2.3. Generalized Riemann zeta-function. The generalized Riemann zeta-function
is defined by the formula
$$\zeta(s, v) = \sum_{k=0}^{\infty} (v+k)^{-s}, \qquad \operatorname{Re}(s) > 1.$$
This function has a continuation that is analytic in the entire complex $s$-plane, with
the exception of $s = 1$, where it has a simple pole. In what follows, we shall be
concerned primarily with real $s$ and $v$, with $s < 1$ and $v > 0$. We will use the
following representation derived from Plana's summation formula (see, for example,
[16, section 1.10 (7)]):
$$\zeta(s, v) = \frac{v^{1-s}}{s-1} + \frac{v^{-s}}{2}
+ 2\int_0^{\infty} \frac{\sin(s \arctan t)}{(v^2 + t^2)^{s/2}\,(e^{2\pi t} - 1)}\,dt. \quad (9)$$
Equation (9) can be used to derive the asymptotic expansion of $\zeta(s, v)$ as $v \to \infty$. We
treat the integral as a sum of Laplace integrals, each with an asymptotic expansion
given by Watson's lemma (see, for example, [17, p. 263]), and obtain
$$\zeta(s, v) \sim \frac{v^{1-s}}{s-1} + \frac{v^{-s}}{2}
+ \sum_{k=1}^{m} \frac{B_{2k}}{(2k)!}\,\frac{\Gamma(s+2k-1)}{\Gamma(s)}\,v^{-s-2k+1} \quad (10)$$
as $v \to \infty$, with $m$ an arbitrary positive integer. Equation (10) is a slight generalization of
[16, section 1.18 (9)]. There is a direct connection between
the Bernoulli polynomials and $\zeta$,
$$B_n(v) = -n\,\zeta(1-n, v), \qquad n = 1, 2, \ldots,$$
and generalizations of the difference and differentiation formulae hold:
$$\zeta(s, v+1) = \zeta(s, v) - v^{-s}, \quad (11)$$
$$\frac{\partial \zeta(s, v)}{\partial v} = -s\,\zeta(s+1, v). \quad (12)$$
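The formulae (11) and (12) are easy to check numerically; the sketch below uses mpmath, whose zeta(s, v) implements the generalized (Hurwitz) zeta-function:

```python
from mpmath import mp, zeta

mp.dps = 30
s, v = 2.5, 1.75

# Difference formula (11): zeta(s, v+1) = zeta(s, v) - v**(-s)
print(zeta(s, v + 1) - (zeta(s, v) - v**(-s)))            # ~ 0

# Differentiation formula (12), checked by central differences in v:
h = mp.mpf(10) ** (-10)
dzeta_dv = (zeta(s, v + h) - zeta(s, v - h)) / (2 * h)
print(dzeta_dv + s * zeta(s + 1, v))                      # ~ 0
```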
2.4. Orthogonal polynomials and Gaussian quadrature. Suppose that $\omega$
is a positive continuous function on the interval $(a, b)$ and $\omega$ is integrable on $[a, b]$.
We define the inner product with respect to $\omega$ of real-valued functions $f$ and $g$ by the
integral
$$(f, g) = \int_a^b f(x)\,g(x)\,\omega(x)\,dx.$$
There exist polynomials $p_0, p_1, \ldots$, of degree $0, 1, \ldots$, respectively, such that $(p_n, p_m) = 0$
whenever $n \neq m$; they are unique up to the choice of leading coefficients. With leading
coefficients one, they can be obtained recursively by the formulae (see,
for example, [18, p. 143])
$$p_{n+1}(x) = (x - \alpha_n)\,p_n(x) - \beta_n\,p_{n-1}(x), \quad (13)$$
where $p_{-1}(x) = 0$, $p_0(x) = 1$, and $\alpha_n, \beta_n$ are defined by the formulae
$$\alpha_n = \frac{(x\,p_n, p_n)}{(p_n, p_n)}, \qquad \beta_n = \frac{(p_n, p_n)}{(p_{n-1}, p_{n-1})}. \quad (14)$$
The zeros $x_1^n, \ldots, x_n^n$ of $p_n$ are distinct and lie in the interval $(a, b)$. There exist
positive numbers $\omega_1^n, \ldots, \omega_n^n$ such that
$$\int_a^b f(x)\,\omega(x)\,dx = \sum_{i=1}^{n} \omega_i^n\,f(x_i^n) \quad (15)$$
whenever $f$ is a polynomial of degree less than $2n$. These Christoffel numbers are
given by the formula (see, for example, [19, p. 48])
$$\omega_i^n = \int_a^b \left(\frac{p_n(x)}{(x - x_i^n)\,p_n'(x_i^n)}\right)^2 \omega(x)\,dx. \quad (16)$$
Moreover, if $\omega(x) = (b - x)\,\rho(x)$ with $\rho$ integrable on $[a, b]$, then, with the definition
$x_{n+1}^n = b$, there exist positive numbers $\lambda_1^n, \ldots, \lambda_{n+1}^n$ such that
$$\int_a^b f(x)\,\rho(x)\,dx = \sum_{i=1}^{n+1} \lambda_i^n\,f(x_i^n) \quad (17)$$
whenever $f$ is a polynomial of degree less than or equal to $2n$. These modified Christoffel
numbers are given by the formula
$$\lambda_i^n = \frac{\omega_i^n}{b - x_i^n}, \quad i = 1, \ldots, n, \qquad
\lambda_{n+1}^n = \int_a^b \rho(x)\,dx - \sum_{i=1}^{n} \lambda_i^n, \quad (18)$$
where $\omega_i^n$ is given by (16).
The summation in (15) is the $n$-node Gaussian quadrature with respect to $\omega$,
while that in (17) is an $(n+1)$-node Gauss-Radau quadrature with respect to $\rho$.
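The recurrence (13)-(14) translates directly into code. The sketch below builds the monic orthogonal polynomials on a grid, with the inner products evaluated by a dense quadrature rather than exactly; it illustrates the recurrence, not the exact symbolic procedure used later in section 5.1:

```python
import numpy as np

def monic_orthogonal(a, b, w, n, grid_pts=400):
    """Sample the monic orthogonal polynomials p_0, ..., p_n on a grid,
    using p_{k+1}(x) = (x - alpha_k) p_k(x) - beta_k p_{k-1}(x), with
    inner products approximated by trapezoidal integration."""
    x = np.linspace(a, b, grid_pts)
    ip = lambda f, g: np.trapz(f * g * w(x), x)      # (f, g) w.r.t. weight w
    P = [np.zeros_like(x), np.ones_like(x)]          # p_{-1}, p_0 on the grid
    for k in range(n):
        alpha = ip(x * P[-1], P[-1]) / ip(P[-1], P[-1])
        beta = ip(P[-1], P[-1]) / ip(P[-2], P[-2]) if k > 0 else 0.0
        P.append((x - alpha) * P[-1] - beta * P[-2])
    return x, P[1:]                                   # grid and p_0 .. p_n

# Example: Legendre-like polynomials (weight 1 on [-1, 1]).
x, P = monic_orthogonal(-1.0, 1.0, lambda t: np.ones_like(t), 4)
```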
3. Hybrid Gauss-trapezoidal quadrature rules. In this section we introduce
new quadrature rules for regular integrands, singular integrands with a power
or logarithmic singularity, and improper integrals and determine their rate of convergence
as the number of quadrature nodes increases.
For notational convenience we generally consider quadratures on canonical inter-
vals, primarily [0; 1]. It is understood that these are readily transformed to quadratures
on any finite interval [a, b] by the appropriate linear transformation of the nodes
and weights.
3.1. Regular integrands. For $j, n$ positive integers and $a \in \mathbb{R}^+ = \{x \in \mathbb{R} : x > 0\}$,
we define a linear operator $T^a_{n,j}$ on $C([0,1])$, depending on nodes $x_1, \ldots, x_j$ and
weights $w_1, \ldots, w_j$, by the formula
$$T^a_{n,j}(f) = h\sum_{i=1}^{j} w_i\,f(x_i h) + h\sum_{i=0}^{n-1} f(ah + ih) + h\sum_{i=1}^{j} w_i\,f(1 - x_i h), \quad (19)$$
where $h = (n + 2a - 1)^{-1}$ is chosen so that $ah + (n-1)h = 1 - ah$.
Theorem 3.1. Suppose $f \in C^p([0,1])$. The asymptotic expansion of $T^a_{n,j}(f)$ as
$n \to \infty$ is given by the formula
$$T^a_{n,j}(f) = \int_0^1 f(x)\,dx + \sum_{r=1}^{p-1}\frac{h^{r+1}}{r!}
\left(\sum_{i=1}^{j} w_i x_i^r - \frac{B_{r+1}(a)}{r+1}\right)
\bigl(f^{(r)}(0) + (-1)^r f^{(r)}(1)\bigr) + O(h^p). \quad (20)$$
Proof. We apply the Euler-Maclaurin formula (8) on the interval $[ah, 1-ah]$ to
obtain (21). We now combine (19) and (21), the equality
$\int_0^1 = \int_0^{ah} + \int_{ah}^{1-ah} + \int_{1-ah}^1$,
Taylor expansion of all quantities about $0$ and $1$, the Bernoulli polynomial expansion
formula (6), and the difference formula (4) to obtain (20).
Corollary 3.2. Suppose the nodes $x_1, \ldots, x_j$ and weights $w_1, \ldots, w_j$ satisfy
the equations
$$\sum_{i=1}^{j} w_i x_i^r = \frac{B_{r+1}(a)}{r+1}, \qquad r = 0, 1, \ldots, 2j-1. \quad (22)$$
Then $T^a_{n,j}$ is a quadrature rule with convergence of order $2j+1$ for $f \in C^p([0,1])$ with
$p \geq 2j$. Moreover,
$$T^a_{n,j}(f) - \int_0^1 f(x)\,dx = O(h^{2j+1}) \quad (23)$$
as $n \to \infty$.
Corollary 3.3. Suppose that $x_j = a$, and the remaining nodes $x_1, \ldots, x_{j-1}$ and
weights $w_1, \ldots, w_j$ satisfy the equations
$$\sum_{i=1}^{j} w_i x_i^r = \frac{B_{r+1}(a)}{r+1}, \qquad r = 0, 1, \ldots, 2j-2. \quad (24)$$
Then $T^a_{n,j}$ is a quadrature rule with convergence of order $2j$ for $f \in C^p([0,1])$ with
$p \geq 2j-1$. Moreover,
$$T^a_{n,j}(f) - \int_0^1 f(x)\,dx = O(h^{2j}) \quad (25)$$
as $n \to \infty$, provided $f^{(2j-1)}(0) - f^{(2j-1)}(1) \neq 0$.
We shall see below that (22) has a solution with the nodes and weights all positive
if $a$ is sufficiently large and that numerical solution of (22) is equivalent to computing
the roots of a particular polynomial. This statement holds for (24) as well.
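Given nodes and weights satisfying (22), applying the rule is straightforward; the following sketch implements the definition (19), with the nodes and weights supplied as inputs (presumed computed as in section 5):

```python
import numpy as np

def hybrid_trapezoid(f, nodes, weights, a, n):
    """Apply T^a_{n,j} on [0,1]: Gauss-type end corrections (nodes x_i,
    weights w_i, scaled by h) plus an equispaced interior sum, with
    h = 1/(n + 2a - 1) so that the interior runs from a*h to 1 - a*h."""
    h = 1.0 / (n + 2 * a - 1)
    interior = a * h + h * np.arange(n)            # n equispaced points
    s = np.sum(weights * f(nodes * h))             # left end correction
    s += np.sum(f(interior))                       # interior part
    s += np.sum(weights * f(1.0 - nodes * h))      # right end (symmetric)
    return h * s
```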
3.2. Singular integrands. For $j, k, n$ positive integers and $a, b \in \mathbb{R}^+$, we define
a linear operator $S^{a,b}_{n,j,k}$ on $C((0,1])$, depending on nodes $v_1, \ldots, v_j, x_1, \ldots, x_k$ and
weights $u_1, \ldots, u_j, w_1, \ldots, w_k$, by the formula
$$S^{a,b}_{n,j,k}(g) = h\sum_{i=1}^{j} u_i\,g(v_i h) + h\sum_{i=0}^{n-1} g(ah + ih) + h\sum_{i=1}^{k} w_i\,g(1 - x_i h), \quad (26)$$
where $h = (n + a + b - 1)^{-1}$ is chosen so that $ah + (n-1)h = 1 - bh$. The following
theorem, which follows from a generalization of the Euler-Maclaurin formula due to
Navot [8] or a further generalization due to Lyness [20], presents a somewhat different
proof than the earlier ones.
Theorem 3.4. Suppose $g(x) = x^{\gamma} f(x)$, where $\gamma > -1$ and $f \in C^p([0,1])$. The
asymptotic expansion of $S^{a,b}_{n,j,k}(g)$ as $n \to \infty$ is given by the formula
$$S^{a,b}_{n,j,k}(g) = \int_0^1 g(x)\,dx
+ \sum_{r=0}^{p-1}\frac{h^{\gamma+r+1}}{r!}\left(\sum_{i=1}^{j} u_i v_i^{\gamma+r} + \zeta(-\gamma-r, a)\right) f^{(r)}(0)
+ \sum_{r=0}^{p-1}\frac{(-1)^r h^{r+1}}{r!}\left(\sum_{i=1}^{k} w_i x_i^{r} - \frac{B_{r+1}(b)}{r+1}\right) g^{(r)}(1)
+ O(h^{p+\min\{0,\gamma\}}). \quad (27)$$
Proof. For $c \in \mathbb{R}^+$, we define polynomials $p^c_0, p^c_1, \ldots$, in analogy with the Bernoulli
polynomials, by the formula (28). Differentiating, we verify that
$$\frac{d}{dx}\,p^c_n(x) = n\,p^c_{n-1}(x),$$
and, combining the difference formula (11) with (28), we obtain the analogue of
the Bernoulli difference formula for $p^c_n$, $c, n = 0, 1, \ldots$. Additionally, we define
functions $q^c_0, q^c_1, \ldots$ by (29) and observe that the corresponding differentiation
relations hold.
With these definitions, the proof follows that of the Euler-Maclaurin formula.
Taylor expansion of $f^{(r)}(ah)$ about $0$, the definitions (28) and (29) for $p^c_n$ and $q^c_n$,
and the binomial theorem combine to yield (31). Likewise, Taylor expansion of
$f^{(r)}(1-bh)$ about $1$, the definitions for $p^c_n$ and $q^c_n$, the asymptotic expansion (10)
for $\zeta$, the Bernoulli polynomial expansion formula (6), and the binomial theorem
combine to yield (32). We now combine (26) and (30)-(32), the equality
$\int_0^1 = \int_0^{ah} + \int_{ah}^{1-bh} + \int_{1-bh}^1$, Taylor
expansion of the latter two integrals about $0$ and $1$, and the difference formula (4) for
the Bernoulli polynomials and (11) for $\zeta$ to obtain (27).
Corollary 3.5. Suppose the nodes $v_1, \ldots, v_j$ and weights $u_1, \ldots, u_j$ satisfy the
equations
$$\sum_{i=1}^{j} u_i v_i^{\gamma+r} = -\zeta(-\gamma-r, a), \qquad r = 0, 1, \ldots, 2j-1, \quad (33)$$
and the nodes $x_1, \ldots, x_j$ and weights $w_1, \ldots, w_j$ satisfy the equations
$$\sum_{i=1}^{j} w_i x_i^{r} = \frac{B_{r+1}(b)}{r+1}, \qquad r = 0, 1, \ldots, 2j-1. \quad (34)$$
Then $S^{a,b}_{n,j,j}$ is a quadrature rule with convergence of order $2j+1+\min\{0,\gamma\}$ for
$\int_0^1 g(x)\,dx$, where $g(x) = x^{\gamma} f(x)$, with $f \in C^p([0,1])$ and $p \geq 2j$. Moreover,
$$S^{a,b}_{n,j,j}(g) - \int_0^1 g(x)\,dx = O\bigl(h^{2j+1+\min\{0,\gamma\}}\bigr) \quad (35)$$
as $n \to \infty$.
Corollary 3.6. Suppose the nodes $v_1, \ldots, v_j$ and weights $u_1, \ldots, u_j$ satisfy the
equations
$$\sum_{i=1}^{j} u_i v_i^{\gamma+r} = -\zeta(-\gamma-r, a), \qquad r = 0, 1, \ldots, j-1, \quad (36)$$
and the nodes $x_1, \ldots, x_k$ and weights $w_1, \ldots, w_k$ satisfy the equations
$$\sum_{i=1}^{k} w_i x_i^{r} = \frac{B_{r+1}(b)}{r+1}, \qquad r = 0, 1, \ldots, 2k-1. \quad (37)$$
Then $S^{a,b}_{n,j,k}$ is a quadrature rule with convergence of order $\min\{j+1, \gamma+j+1, 2k+1\}$
for $\int_0^1 g(x)\,dx$, where $g(x) = x^{\gamma} f(x)$, with $f \in C^p([0,1])$ and $p \geq \min\{j, 2k\}$.
In Corollaries 3.5 and 3.6, an even number of constraints on the nodes and weights
at both ends of the interval are considered. Clearly, there are analogous quadrature
rules arising from an odd number of constraints at one or both ends; these are similar,
and explicit presentation of them is omitted. We now consider a different type of
singularity.
Theorem 3.7. Suppose $g(x) = f(x) \log x$, where $f \in C^p([0,1])$. The asymptotic
expansion of $S^{a,b}_{n,j,k}(g)$ as $n \to \infty$ is given by the formula (38), in which $\zeta'$
denotes the derivative of $\zeta$ with respect to its first argument.
Proof. This asymptotic expansion is derived from that of Theorem 3.4 by differentiating
(27) with respect to $\gamma$ and evaluating the result at $\gamma = 0$.
Corollary 3.8. Suppose the nodes $v_1, \ldots, v_j$ and weights $u_1, \ldots, u_j$ satisfy the
equations
$$\sum_{i=1}^{j} u_i v_i^{r} = \frac{B_{r+1}(a)}{r+1}, \qquad
\sum_{i=1}^{j} u_i v_i^{r}\log v_i = \zeta'(-r, a), \qquad r = 0, 1, \ldots, j-1, \quad (39)$$
and the nodes $x_1, \ldots, x_k$ and weights $w_1, \ldots, w_k$ satisfy the equations
$$\sum_{i=1}^{k} w_i x_i^{r} = \frac{B_{r+1}(b)}{r+1}, \qquad r = 0, 1, \ldots, 2k-1. \quad (40)$$
Then $S^{a,b}_{n,j,k}$ is a quadrature rule with error of order $O(\min\{h^{j+1}\log h,\ h^{2k+1}\})$ for
$\int_0^1 g(x)\,dx$, where $g(x) = f(x)\log x$, with $f \in C^p([0,1])$ and $p \geq \min\{j, 2k\}$.
3.3. Improper integrals. For $j, n$ positive integers, we define a linear operator
$R^j_n$ on $C([n, \infty))$, depending on nodes $x_1, \ldots, x_j$ and weights $w_1, \ldots, w_j$, by the
formula
$$R^j_n(g) = \sum_{k=1}^{j} w_k\,g(n + x_k). \quad (41)$$
Theorem 3.9. Suppose $g(x) = e^{i\lambda x} f(x)$, where $\lambda \in \mathbb{R}$, $\lambda \neq 0$, and $f \in C^p([1,\infty))$,
and that there exist positive constants $\alpha, \beta_0, \ldots, \beta_p$ such that
$$|f^{(r)}(x)| \leq \frac{\beta_r}{x^{\alpha+r}}, \qquad r = 0, 1, \ldots, p. \quad (42)$$
The asymptotic expansion of $R^j_n(g)$ as $n \to \infty$ is given by the formula (43).
Proof. We integrate by parts repeatedly to obtain (44), and in (44) we compute
the Taylor expansion of $f$ about $n$, where the remainder is evaluated at a point that
lies between $0$ and $x_k$ for $k = 1, \ldots, j$. Now combining (42), (44), and (45),
we obtain (43).
Example. The function $f$ defined by the formula
$$f(x) = \sum_{r=0}^{\infty} \frac{a_r}{x^{\alpha+r}}, \quad (46)$$
with $|a_r| < 1$, satisfies the assumptions of Theorem 3.9 for every positive integer
$p$. We remark that Theorem 3.9 can, in some instances, be generalized to $\lambda = 0$;
the corresponding asymptotic expansion depends on a more detailed knowledge of $f$.
For $f$ given by (46), for example, the quadrature nodes and weights for $\lambda = 0$ depend
on $\alpha$.
Corollary 3.10. Suppose $f \in C^p([1,\infty))$, $f$ satisfies (42) for $x \in \mathbb{R}$, and $f$ is
analytic in the half-plane $\operatorname{Re}(x) > a$ for some $a \in \mathbb{R}$. Suppose further that $v_1, \ldots, v_j$
are the roots of the Laguerre polynomial $L_j$ of degree $j$, that coefficients $u_1, \ldots, u_j$
satisfy the equations
$$\sum_{k=1}^{j} u_k v_k^{r} = r!, \qquad r = 0, 1, \ldots, 2j-1, \quad (47)$$
and that the operator $R^j_n$ is defined with nodes $x_k = (i/\lambda)v_k$ and weights $w_k = (i/\lambda)u_k$
for $k = 1, \ldots, j$. Suppose finally that $\hat T^a_{m,j}$ is defined to be the quadrature rule $T^a_{m,j}$ with
nodes and weights satisfying (22) but translated and scaled to the interval $[1, n]$. Then
for $p \geq 2j$, the expression $\hat T^a_{(n-1),j}(g) + R^j_n(g)$ is an approximation for the integral
$\int_1^{\infty} g(x)\,dx$ with error of order $O(n^{-2j})$ as $n \to \infty$.
Proof. This result is just a combination of the quadrature rule of Corollary 3.2,
for the interval $[1, n]$, with the asymptotic expansion of Theorem 3.9, for the interval
$[n, \infty)$, provided the leading terms of that expansion vanish (48). But (48) follows
from (47), the equations
$$\int_0^{\infty} e^{-t}\,t^{r}\,dt = \Gamma(r+1) = r!$$
(see, for example, [15, (6.1.1) and (22.2.13)]), and the fact that Gaussian quadratures,
which are exact for polynomials of degree less than twice the number of nodes, have
nodes that coincide with the roots of the corresponding orthogonal polynomials (see
section 2.4).
We have completed the definition of the new quadratures, along with the demonstration
of their asymptotic performance. We shall see that the existence of these
rules, which depends on the solvability of the nonlinear systems of equations that
define the nodes and weights, is assured by the theory of Chebyshev systems. The
uniqueness of the rules is similarly assured. These issues of existence and uniqueness
are treated next.
4. Existence and uniqueness.
4.1. Chebyshev systems. Material of this subsection is taken, with minor
alterations, from Karlin and Studden [21]. Suppose $I$ is an interval of $\mathbb{R}$, possibly
infinite. A collection of $n$ real-valued continuous functions $f_1, \ldots, f_n$ defined on $I$ is
a Chebyshev system if any linear combination
$$a_1 f_1(x) + \cdots + a_n f_n(x), \quad (49)$$
with $a_i$ not all zero, has at most $n-1$ zeros on $I$. This condition is equivalent to the
statement that for distinct $x_1, \ldots, x_n$ in $I$,
$$\det\begin{pmatrix} f_1(x_1) & \cdots & f_n(x_1) \\ \vdots & & \vdots \\ f_1(x_n) & \cdots & f_n(x_n) \end{pmatrix} \neq 0. \quad (50)$$
The Chebyshev property is a characteristic of the space, rather than the basis: if
$f_1, \ldots, f_n$ is a Chebyshev system, then so is any other basis of $\operatorname{span}\{f_1, \ldots, f_n\}$. If
$u$ is a continuous, positive function on $I$, then scaling by $u$ preserves a Chebyshev
system. Finally, if $u$ is strictly increasing and continuous on interval $J$ with range $I$,
then $f_1 \circ u, \ldots, f_n \circ u$ is a Chebyshev system on $J$ if and only if $f_1, \ldots, f_n$ is on $I$.
(Here $f_i \circ u$ denotes the composition $u$ followed by $f_i$.)
The best-known example of a Chebyshev system is the set of polynomials
$1, x, \ldots, x^{n-1}$ on any interval $I \subseteq \mathbb{R}$. We shall be concerned also with the
Chebyshev systems
$$\log x,\ x,\ x\log x,\ \ldots,\ x^{(n-1)/2},\ x^{(n-1)/2}\log x$$
on $I = (0, a]$, where $\gamma \in \mathbb{R}\setminus\mathbb{Z}$ and $a > 0$. These systems are special cases of the system
of Müntz functions (see, for example, [22, p. 133])
$$M = \bigl\{x^{\lambda_i}\log^k x : i = 1, \ldots, j,\ k = 0, \ldots, n_i - 1\bigr\}$$
on $I = (0, \infty)$, where $\lambda_1, \ldots, \lambda_j$ are distinct real numbers and $n_1, \ldots, n_j$ are positive
integers with $\sum_i n_i = n$. To see this is a Chebyshev system, suppose $f \in \operatorname{span} M$ and
use induction in $n$ on $(d/d\log x)[f(x)\,x^{-\lambda_j}]$, in combination with Rolle's theorem.
Another Chebyshev system that will arise is the system
$$L = \bigl\{(x + \lambda_i)^{-k} : i = 1, \ldots, j,\ k = 1, \ldots, n_i\bigr\} \quad (51)$$
on $I = [0, \infty)$, where $\lambda_1, \ldots, \lambda_j$ are distinct positive real numbers and $n_1, \ldots, n_j$ are
positive integers with $\sum_i n_i = n$. This is indeed a Chebyshev system, for if $f \in \operatorname{span} L$,
then the function $f(x)\prod_{i=1}^{j}(x + \lambda_i)^{n_i}$ is a polynomial in $x$ of degree $n-1$.
Suppose $f_1, \ldots, f_n$ is a Chebyshev system on the interval $I$. The moment space
$\mathcal{M}_n$ with respect to $f_1, \ldots, f_n$ is the set
$$\mathcal{M}_n = \left\{\left(\int_I f_1\,d\sigma, \ldots, \int_I f_n\,d\sigma\right)\right\}, \quad (52)$$
where the measure $\sigma$ ranges over the set of nondecreasing right-continuous functions
of bounded variation on $I$. It can be shown that $\mathcal{M}_n$ is the convex cone associated
with points in the curve $C$, where
$$C = \bigl\{(f_1(x), \ldots, f_n(x)) : x \in I\bigr\}.$$
In other words, $\mathcal{M}_n$ can be represented as the set of nonnegative combinations of
points of $C$. The index $I(c)$ of a point $c$ of $\mathcal{M}_n$ is the minimum number of points of $C$ that can be
used in the representation of $c$, under the convention that a point $(f_1(x), \ldots, f_n(x))$
is counted as a half point if $x$ is from the boundary of $I$ and receives a full count
otherwise. The index of a quadrature involving $x_1, \ldots, x_p$ is determined by counting
likewise.
Proofs of the next three theorems are somewhat elaborate and are omitted here;
they can be found in Karlin and Studden [21].
Theorem 4.1. (See [21, p. 42].) Suppose $I = [a, b]$ is a closed interval. A point
$c \in \mathcal{M}_n$ is a boundary point of $\mathcal{M}_n$ if and only if $I(c) < n/2$. Moreover, if $\sigma$
is a measure corresponding to a boundary point $c \in \mathcal{M}_n$, then there is a unique
quadrature
$$\int_I f_i\,d\sigma = \sum_{k=1}^{p} w_k\,f_i(x_k), \qquad i = 1, \ldots, n.$$
Theorem 4.2. (See [21, p. 47].) Suppose $I = [a, b]$ is a closed interval. Any
point $c$ in the interior of $\mathcal{M}_n$ satisfies $I(c) = n/2$. Moreover, if $\sigma$ is a measure
corresponding to $c$, then there are exactly two quadratures
$$\int_I f_i\,d\sigma = \sum_{k=1}^{p} w_k\,f_i(x_k), \qquad i = 1, \ldots, n,$$
of index $n/2$, where $w_k > 0$, $k = 1, \ldots, p$. In particular, if $n = 2m$, then one of the
two quadratures involves interior points only, while the other includes both endpoints.
Theorem 4.3. (See [21, p. 65].) Let $\mathcal{P}_n$ denote the nonnegative linear combinations
of functions $f_1, \ldots, f_n$,
$$\mathcal{P}_n = \left\{\sum_{i=1}^{n} a_i f_i : \sum_{i=1}^{n} a_i f_i(x) \geq 0 \text{ for } x \in I\right\}.$$
The point $c = (c_1, \ldots, c_n)$ is an element of $\mathcal{M}_n$ if and only if
$$\sum_{i=1}^{n} a_i f_i \in \mathcal{P}_n \quad \text{implies} \quad \sum_{i=1}^{n} a_i c_i \geq 0.$$
Moreover, $c$ is in the interior of $\mathcal{M}_n$ if and only if
$$\sum_{i=1}^{n} a_i f_i \in \mathcal{P}_n,\ \sum_{i=1}^{n} a_i f_i \not\equiv 0 \quad \text{imply} \quad \sum_{i=1}^{n} a_i c_i > 0.$$
Theorem 4.4. (See [21, p. 106].) Suppose $f_i(x) = x^{i-1}$ for $i = 1, \ldots, n$, and
$I = [a, b]$. If $n = 2m$, then $c = (c_1, \ldots, c_n)$ is an element of $\mathcal{M}_n$ if and only if the
two quadratic forms
$$\sum_{i,k=1}^{m} (c_{i+k} - a\,c_{i+k-1})\,\xi_i\xi_k, \qquad
\sum_{i,k=1}^{m} (b\,c_{i+k-1} - c_{i+k})\,\xi_i\xi_k$$
are nonnegative definite. If $n = 2m+1$, then $c \in \mathcal{M}_n$ if and only if the two quadratic
forms
$$\sum_{i,k=1}^{m+1} c_{i+k-1}\,\xi_i\xi_k, \qquad
\sum_{i,k=1}^{m} \bigl((a+b)\,c_{i+k} - ab\,c_{i+k-1} - c_{i+k+1}\bigr)\,\xi_i\xi_k$$
are nonnegative definite. Moreover, for either parity of $n$, $c$ is in the interior of $\mathcal{M}_n$
if and only if the corresponding quadratic forms are both positive definite.
Proof. A theorem of Lukács (see, for example, [19, p. 4]) states that a polynomial
$f$ of degree $n-1$ that is nonnegative on $[a, b]$ can be represented in the form
$$f(x) = \begin{cases} p(x)^2 + (x-a)(b-x)\,q(x)^2, & n-1 \text{ even}, \\
(x-a)\,p(x)^2 + (b-x)\,q(x)^2, & n-1 \text{ odd}, \end{cases} \quad (64)$$
where $p$ and $q$ are polynomials such that the degree of each term in (64) does not
exceed $n-1$. The combination of (64) and Theorem 4.3 proves the theorem.
4.2. Müntz system quadratures. The systems of (22), (24), (36), and (39)
that define the quadrature rules of section 3 are special cases of the system of equations
$$\sum_{i=1}^{\lceil n/2\rceil} w_i\,x_i^{\mu_\ell}\,\log^k x_i = (-1)^{k+1}\,\zeta^{(k)}(-\mu_\ell, a),
\qquad k = 0, \ldots, n_\ell - 1,\ \ell = 1, \ldots, j, \quad (65)$$
for distinct real numbers $\mu_1, \ldots, \mu_j$ and positive integers $n_1, \ldots, n_j$ with $\sum_\ell n_\ell = n$.
Here $\zeta^{(k)}$ denotes the $k$th derivative of $\zeta$ with respect to its first argument. The existence
and uniqueness of the solution of (65) follow from the existence and uniqueness
of quadratures for Chebyshev systems, once it is established that there is a measure
$\sigma_a$ with
$$\int_0^a x^{\mu_\ell}\log^k x\,d\sigma_a(x) = (-1)^{k+1}\,\zeta^{(k)}(-\mu_\ell, a),
\qquad k = 0, \ldots, n_\ell - 1,\ \ell = 1, \ldots, j; \quad (66)$$
in other words, that the moment space $\mathcal{M}_n$ of the Chebyshev system of Müntz functions
$$M = \bigl\{x^{\mu_\ell}\log^k x : \ell = 1, \ldots, j,\ k = 0, \ldots, n_\ell - 1\bigr\} \quad (67)$$
on $(0, a]$ contains the point
$$c = \bigl((-1)^{k+1}\,\zeta^{(k)}(-\mu_\ell, a)\bigr)_{\ell,k}. \quad (68)$$
We will show that this condition is satisfied provided that $a$ is sufficiently large. It
would be convenient to have tight bounds for $a$, in particular for systems (22), (24),
(36), and (39), but it appears that such bounds are difficult to obtain. Even for the
regular cases (22) and (24), where by Theorem 4.4 the existence of $\sigma_a$ is equivalent
to the positive definiteness of two matrices, precise bounds for arbitrary $j$ appear
difficult. (Numerical examples below provide evidence that $a/j$ may be chosen as
small as 5/6.)
Theorem 4.5. Suppose $\mu_1, \ldots, \mu_j$ are distinct real numbers, each greater than
$-1$, and $n_1, \ldots, n_j$ are positive integers with $\sum_\ell n_\ell = n$. Then for sufficiently large
$a$, there exists a measure $\sigma_a$ such that the system of (66) is satisfied and $c$ defined by
(68) is in the interior of the moment space $\mathcal{M}_n$.
Proof. We construct a continuous weight function $\psi_a$ satisfying (66) and show
that for sufficiently large $a$, $\psi_a(x)$ is positive for $x \in [0, a]$.
We linearly combine the equations of (66) to obtain an equivalent system in which
the moments are taken against the functions $x^{\mu_\ell}\log^k(x/a)$; here we have used the
binomial theorem to expand $\log^k(x/a) = (\log x - \log a)^k$. We define the weight
$\psi_a$ as a combination of these same functions with coefficients $\eta_{0,a}, \ldots, \eta_{n-1,a}$, and
combine the equivalent system with the equalities
$$\int_0^1 x^{\mu}\log^k x\,dx = \frac{(-1)^k\,k!}{(\mu+1)^{k+1}}, \qquad \mu > -1,\ k = 0, 1, \ldots,$$
to obtain the $n$ equations in $\eta_{0,a}, \ldots, \eta_{n-1,a}$ displayed in (71). This $n$-dimensional
linear system is nonsingular, since the relevant set of functions forms a Chebyshev
system on $[0, \infty)$, as established at (51). Thus (71) possesses a unique solution
$\eta_{0,a}, \ldots, \eta_{n-1,a}$.
We now determine the behavior of the solution as $a \to \infty$. The asymptotic
expansion of $\zeta^{(r)}(s, a)$ as $a \to \infty$ can be derived by differentiating (10); the first
several terms are given in (73). Combining (71) and (73), changing the order of
summation, and twice applying the product differentiation rule
$$\frac{d^r}{ds^r}(fg) = \sum_{s=0}^{r}\binom{r}{s} f^{(s)} g^{(r-s)},$$
we obtain an expression that immediately reduces to (74). The combination of (70),
(72), and (74) gives
$$\lim_{a\to\infty} \psi_a(ax) = 1, \qquad x \in [0, 1], \quad (75)$$
which implies that for $a$ sufficiently large, $\psi_a(x) > 0$ for $x \in [0, a]$. The point $c$
defined by (68) is in the interior of $\mathcal{M}_n$, since small perturbations of $c$ will preserve
the positivity of $\psi_a$.
Theorem 4.2 ensures the existence of Gaussian quadratures for a Chebyshev system
$f_1, \ldots, f_n$ defined on an interval $I$, under the assumption that $I$ is closed, whereas
the system $M$ of (67) is Chebyshev on $I = (0, a]$. As a consequence, we require the
following result.
Theorem 4.6. Suppose the collection of functions $f_1, \ldots, f_n$ forms a Chebyshev
system on $I = (a, b]$ and each is integrable on $[a, b]$ with respect to a measure $\sigma$
corresponding to a point $c$ in the interior of $\mathcal{M}_n$. Then there exists a unique quadrature
$$\int_a^b f_i\,d\sigma = \sum_{k=1}^{p} w_k\,f_i(x_k), \qquad i = 1, \ldots, n, \quad (76)$$
of index $n/2$, where $w_k > 0$ and $x_k \in I$, $k = 1, \ldots, p$. In particular, if $n = 2m$, then
$p = m$. (77)
Proof. The Chebyshev property implies that there exists $\xi$ with $a < \xi < b$ such
that $f_i$ is nonzero on $(a, \xi]$, $i = 1, \ldots, n$. We define a function $u$ on $I$ by the
formula (78), and observe that $u$ is continuous and positive on $I$ and integrable on
$[a, b]$ with respect to $\sigma$. Now we define functions $g_1, \ldots, g_n$ on $[a, b]$ by the formula
$$g_i(x) = \begin{cases} f_i(x)/u(x), & x \in (a, b], \\ \lim_{t\to a} f_i(t)/u(t), & x = a. \end{cases}$$
The system $g_1, \ldots, g_n$ is a Chebyshev system on $[a, b]$ and is integrable with respect
to the measure $\int_a^x u(t)\,d\sigma(t)$. Theorem 4.2 therefore is applicable and ensures the
existence of exactly two quadratures of index $n/2$ for the interval $[a, b]$, one of which
includes the point $x = a$. Our assumption $x_k \in (a, b]$ excludes this case and we are
left with the single quadrature presented in (76) and (77).
The next theorem, which is the principal analytical result of this section, follows
directly from Theorems 4.5 and 4.6. The existence and uniqueness of the quadratures
defined in section 3 follow from it. It also hints at the existence of somewhat more
general quadratures, for singularities of the form $x^{\gamma}\log^k x$, but we do not evaluate
these here.
Theorem 4.7. Suppose $\mu_1, \ldots, \mu_j$ are distinct real numbers, each greater than
$-1$, and $n_1, \ldots, n_j$ are positive integers with $\sum_\ell n_\ell = n$. For sufficiently large $a$, the
system of equations
$$\sum_{i=1}^{\lceil n/2\rceil} w_i\,x_i^{\mu_\ell}\,\log^k x_i = (-1)^{k+1}\,\zeta^{(k)}(-\mu_\ell, a),
\qquad k = 0, \ldots, n_\ell - 1,\ \ell = 1, \ldots, j,$$
has a unique solution $w_1, \ldots, w_{\lceil n/2\rceil}, x_1, \ldots, x_{\lceil n/2\rceil}$ satisfying $w_i > 0$ for
$i = 1, \ldots, \lceil n/2\rceil$ and $0 < x_1 < \cdots < x_{\lceil n/2\rceil} \leq a$, with $x_{\lceil n/2\rceil} = a$ if $n$ is odd.
5. Computation of the nodes and weights. The nodes and weights of the
quadratures defined in section 3 are computed by numerically solving the nonlinear
systems (22), (24), (36), and (39). Conventional techniques for this problem either are
overly cumbersome or converge too slowly to be practical. Recently, Ma, Rokhlin, and
Wandzura [14] addressed this need by developing a practical numerical algorithm that
is effective in a fairly general setting. They construct a simplified Newton's method
and combine it with a continuation (homotopy) method. We present their method in
an abbreviated form in section 5.2; the reader is referred to [14] for more detail.
The systems for regular integrands, however, can be solved even more simply, as
we see next.
5.1. Regular integrands. The classical theory of Gaussian quadratures for
polynomials, summarized in section 2.4, can be exploited to solve (22) and (24). In
particular, suppose that $p_0, \ldots, p_j$ are the orthogonal polynomials on $[0, a]$, given by
the recurrence (13)-(14), under the assumption that the weight $\omega$ has the moments
$$\int_0^a x^r\,\omega(x)\,dx = \frac{B_{r+1}(a)}{r+1}, \qquad r = 0, 1, \ldots, 2j-1.$$
Then the roots $x_1, \ldots, x_j$ of $p_j$ and the corresponding Christoffel numbers $w_1, \ldots, w_j$
satisfy (22). The polynomials $p_1, \ldots, p_j$ can be calculated in symbolic form; their
coefficients are rational if $a$ is rational. The roots of $p_j$ can be computed by Newton
iteration and the Christoffel numbers can be obtained using (16). Similar treatment
can be applied to the system (24), which contains an odd number of equations and
the fixed node $x_j = a$, under the assumption
$$\int_0^a x^r\,\omega(x)\,dx = \frac{B_{r+1}(a)}{r+1}, \qquad r = 0, 1, \ldots, 2j-2.$$
The Gauss-Radau quadrature is computed using the formula (18) for the modified
Christoffel numbers. For these tasks it is convenient to use a software system that
can manipulate polynomials with full-precision rational coefficients. The author implemented
code for these computations in Pari/GP [23].
It should be noted that the proposed procedure is suitable for relatively small
values of $j$ (less than, say, 20). It is neither very efficient nor very stable, but it
was quite adequate for our purposes. (Unlike the situation for standard Gaussian
quadratures, where the number of nodes depends on the size of the problem, here $j$
depends only on the desired order of convergence.) If it is required to compute
the nodes and weights of (22) or (24) for large $j$, the reader may consider numerical
schemes for Gaussian quadrature proposed by other authors, for example, that of
Gautschi [24] or Golub and Welsch [25].
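For the root-finding step, a simple Newton refinement suffices once rough estimates are available (for instance from numpy's companion-matrix eigenvalue solver); the sketch below is an illustration, not the Pari/GP code used by the author:

```python
import numpy as np

def roots_by_newton(poly_coeffs, iters=50):
    """Refine the roots of a polynomial (coefficients given highest
    degree first) by Newton iteration, starting from the eigenvalue
    estimates returned by numpy.roots."""
    p = np.poly1d(poly_coeffs)
    dp = p.deriv()
    x = np.sort(np.roots(poly_coeffs).real)   # initial estimates
    for _ in range(iters):
        x = x - p(x) / dp(x)                  # Newton step on each root
    return x
```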
5.2. Singular integrands. The systems (36) and (39) for singular integrands
cannot be solved using methods for standard Gaussian quadrature, since the nodes
to be computed do not coincide with the roots of any closely related orthogonal
polynomials. We employ instead the algorithm for such systems developed by Ma,
Rokhlin, and Wandzura [14], which we now describe.
A collection of $2n$ real-valued continuously differentiable functions $f_1, \ldots, f_{2n}$
defined on an interval $I = [a, b]$ is a Hermite system if the $2n \times 2n$ matrix with rows
$\bigl(f_1(x_k), \ldots, f_{2n}(x_k)\bigr)$ and $\bigl(f_1'(x_k), \ldots, f_{2n}'(x_k)\bigr)$, $k = 1, \ldots, n$, is nonsingular (79)
for any choice of distinct $x_1, \ldots, x_n$ on $I$. A Hermite system that is also a Chebyshev
system is an extended Hermite system. The following theorem is a direct consequence
of the definition; the proofs of the subsequent two theorems are contained in [14].
Theorem 5.1. Suppose that the functions $f_1, \ldots, f_{2n}$ constitute a Hermite system
on the interval $[a, b]$ and $x_1, \ldots, x_n$ are $n$ distinct points on $[a, b]$. Then there
exist unique coefficients $\alpha_{ij}, \beta_{ij}$, $i = 1, \ldots, n$, $j = 1, \ldots, 2n$, such that
$$\sigma_i(x_k) = \delta_{ik}, \quad \sigma_i'(x_k) = 0, \quad \tau_i(x_k) = 0, \quad \tau_i'(x_k) = \delta_{ik} \quad (80, 81)$$
for $i, k = 1, \ldots, n$, where the functions $\sigma_i, \tau_i$ are defined by the formulae
$$\sigma_i = \sum_{j=1}^{2n} \alpha_{ij} f_j, \qquad \tau_i = \sum_{j=1}^{2n} \beta_{ij} f_j \quad (82, 83)$$
for $i = 1, \ldots, n$.
Theorem 5.2. (See [14, p. 979].) Suppose that the functions $f_1, \ldots, f_{2n}$ constitute
a Hermite system on $[a, b]$. Suppose further that $S \subseteq [a, b]^n$ is the set of points
$x = (x_1, \ldots, x_n)$ with distinct coordinates $x_1, \ldots, x_n$. Suppose finally that the mapping
$F : S \to \mathbb{R}^n$ is defined by the formula
$$F(x) = \left(\int_a^b \tau_1(t)\,\omega(t)\,dt, \ldots, \int_a^b \tau_n(t)\,\omega(t)\,dt\right), \quad (84)$$
with the functions $\tau_1, \ldots, \tau_n$ defined by (80)-(83). Then $x_1, \ldots, x_n$ are the Gaussian
nodes for the system of functions $f_1, \ldots, f_{2n}$ with respect to the weight $\omega$ if and only
if $F(x_1, \ldots, x_n) = 0$.
Theorem 5.3. (See [14, p. 983].) Suppose that the functions $f_1, \ldots, f_{2n}$ are
an extended Hermite system on $[a, b]$, $S \subseteq [a, b]^n$ is the set of points with distinct
coordinates, and the mapping $G$ is defined by the formula
$$G(x) = x - \left(\frac{\int_a^b \tau_1(t)\,\omega(t)\,dt}{\int_a^b \sigma_1(t)\,\omega(t)\,dt}, \ldots,
\frac{\int_a^b \tau_n(t)\,\omega(t)\,dt}{\int_a^b \sigma_n(t)\,\omega(t)\,dt}\right), \quad (85)$$
with the functions $\sigma_1, \ldots, \sigma_n$ and $\tau_1, \ldots, \tau_n$ defined by (80)-(83). Suppose further
that $f_i \in C^3((a,b))$ for $i = 1, \ldots, 2n$ and the function $F$ is defined by (84). Suppose
finally that $x^*$ is the unique zero of $F$, that $x^0$ is an arbitrary point of $S$, and that the
sequence $x^1, x^2, \ldots$ is defined by the formula
$$x^{k+1} = G(x^k). \quad (86)$$
Then there exist $\varepsilon > 0$ and $\gamma > 0$ such that the sequence $x^1, x^2, \ldots$ generated by (86)
converges to $x^*$ and
$$\|x^{k+1} - x^*\| \leq \gamma\,\|x^k - x^*\|^2 \quad (87)$$
for any initial point $x^0$ such that $\|x^0 - x^*\| < \varepsilon$.
The key feature of this theorem is the quadratic convergence indicated by (87).
The solution $x^*$ is obtained by an iterative procedure; each step consists of computing
the coefficients that determine $\sigma_1, \ldots, \sigma_n$ and $\tau_1, \ldots, \tau_n$ by inverting the matrix
of (79), then computing the integrals that define $G$ by taking linear combinations,
using these coefficients, of integrals of $f_1, \ldots, f_{2n}$. Theorem 5.3 ensures that with
appropriate choice of starting value $x^0$, convergence is rapid and certain.
For quadrature nodes $x = (x_1, \ldots, x_n)$, the quadrature weights are given by the
integrals of $\sigma_i$, namely,
$$w_i = \int_a^b \sigma_i(t)\,\omega(t)\,dt, \qquad i = 1, \ldots, n. \quad (88)$$
We note that Theorems 5.1-5.3 concern Gaussian quadratures with $n$ nodes and
weights to integrate $2n$ functions on the interval $[a, b]$ exactly. For Gauss-Radau
quadratures, in which node $x_n = b$ is fixed and $2n-1$ functions are integrated exactly,
only a slight change is required. In particular, functions $\sigma_1, \ldots, \sigma_{n-1}$ (without $\sigma_n$)
and $\sigma_1, \ldots, \tau_n$ are defined as before by (80)-(83), except that the summations in (82)
and (83) exclude $f_{2n}$. Their coefficients $\alpha_{ij}$, $i = 1, \ldots, n-1$, $j = 1, \ldots, 2n-1$, and
$\beta_{ij}$, $i = 1, \ldots, n$, $j = 1, \ldots, 2n-1$, are obtained by inverting the matrix which results
from removing the last row and column from the matrix of (79). The revised mapping
$G$ has components defined as the first $n-1$ components in (85).
Finally, as before, the quadrature weights are given by (88).
In order to obtain a sufficiently good starting estimate for the solution of $F(x) = 0$,
a continuation procedure can be used, as outlined in the following theorem.
Theorem 5.4. (See, for example, [14, p. 975].) Suppose that $F : [0, 1] \times \mathbb{R}^n \to
\mathbb{R}^n$ is a function with a unique solution $x_t$ to the equation $F(t, x_t) = 0$
for all $t \in [0, 1]$; suppose that $x_t$ is a continuous function of $t$; and suppose that
$x_0$ is given. Finally, suppose that for some $\delta > 0$ there is a procedure
$P$ to compute $x_t$ for $t \in [0, 1]$, given an estimate $\tilde x_t$ with $|\tilde x_t - x_t| < \delta$. Then there
exists a positive integer $m$ such that the following procedure can be used to compute
the solution of $F(1, x) = 0$:
For $i = 1, \ldots, m$, use $P$ to compute $x_{i/m}$, given the estimate $x_{(i-1)/m}$.
The required solution of $F(1, x) = 0$ is $x_1$.
More typically, of course, $\delta$ and any bound on $|x_{t+\Delta} - x_t|/\Delta$ depend on $t$, and in a
practical implementation the step size is chosen adaptively.
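The continuation logic of Theorem 5.4 is independent of the particular Newton iteration; a generic sketch (with a fixed step size, where an adaptive version would shrink the step on failure) looks as follows:

```python
def continuation_solve(solve_at, x0, t0=0.0, t1=1.0, dt=0.1):
    """Generic continuation: march the parameter t from t0 to t1,
    using the solution at the previous t as the starting estimate.
    solve_at(t, x_est) runs the (simplified) Newton iteration at t
    and is assumed to converge from a nearby estimate."""
    t, x = t0, x0
    while t < t1:
        t = min(t + dt, t1)
        x = solve_at(t, x)
    return x
```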
To compute the solutions of (36) and (39), it is effective to use a continuation
procedure with respect to both $j$ and $a$. Solutions for the first few values of $j$ are
readily obtained without requiring good initial estimates. Given a solution of (36) for
the interval $[0, a]$ with nodes $v_1, \ldots, v_j$ and weights $u_1, \ldots, u_j$, we choose an initial
estimate $\tilde v_1, \ldots, \tilde v_{j+1}$, $\tilde u_1, \ldots, \tilde u_{j+1}$ for $j+1$ and the interval $[0, a+1]$ defined by
the formulae
$$\tilde v_i = v_i, \quad \tilde u_i = u_i, \quad i = 1, \ldots, j, \qquad \tilde v_{j+1} = a, \quad \tilde u_{j+1} = 1.$$
This choice exactly satisfies the first equations of the system for the interval $[0, a+1]$,
as follows immediately from the difference formula (4) for $B_n$ and (11) for $\zeta$, but the
corresponding equations at the next order are not satisfied. Those equations are approximately
satisfied, however, and we can start with the actual values of the sums
as the required values. These are then varied continuously, obtaining the corresponding
solutions, until they coincide with the intended values $\zeta'(-j, a+1)$
and $B_{j+1}(a+1)/(j+1)$. This procedure can be used without alteration for (39).
Once the solution for $j+1$ and the interval $[0, a+1]$ is obtained, $a$ can be
continuously varied, in a continuation procedure, to obtain solutions for different
intervals.
Table 1. The minimum value of a, as a function of j, such that the point (B_1(a)/1, ..., B_{2j}(a)/(2j)) is in the moment space M_{2j} of the polynomials 1, x, ..., x^{2j-1} on the interval [0, a]. The moment space is defined at (52).
j   min a
6   4.77448
9   7.21081
14  11.29815
6. Numerical examples. The procedures described in section 5 were implemented
in Pari/GP [23] for both the regular cases and the singular cases. The matrix
in (79), which must be inverted, is very poorly conditioned for many choices of $n$,
$x_1, \ldots, x_n$, and $f_1, \ldots, f_{2n}$. This difficulty was met by using the extended precision
capability of Pari/GP.
6.1. Nodes and weights. The nodes and weights of (22), (24), (36), and (39)
that determine the quadratures of section 3 were computed for a range of values of the
parameter $j$. For each choice of $j$, $a$ was chosen, by experiment, to be the smallest integer
leading to positive nodes and weights (see Theorem 4.7). For the regular case (22),
the characterization expressed in Theorem 4.4 was used to determine the minimum
value of $a \in \mathbb{R}$, for $j = 1, \ldots, 16$, that places the required moment point in the moment space.
In particular, we obtained the minimum value of $a$ such that the quadratic forms of
Theorem 4.4 are nonnegative definite. This determination was made by calculating the determinant
of each corresponding matrix symbolically and solving for the largest root
of the resulting polynomial in $a$. These values are given in Table 1. This evidence
suggests that $\lim_{j\to\infty} j^{-1}\min a$ exists and is roughly 5/6, meaning that the number
of trapezoidal nodes displaced is less than the number of Gaussian nodes replacing
them, for quadrature rules of all orders. This relationship also appears to hold, to an
even greater extent, for the singular cases.
The values of selected nodes and weights, for the regular cases and for singularities
$x^{-1/2}$ and $\log x$, are presented in an appendix. Of particular simplicity is the first
rule for regular integrands, with the single node and weight
$$x_1 = \tfrac{1}{6}, \qquad w_1 = \tfrac{1}{2} \qquad (a = 1),$$
and the second rule, whose nodes and weights are rational with denominator 48.
These rules are of third- and fourth-order
convergence, respectively. The first is noteworthy for having the same weights as, but
higher order than, the trapezoidal rule; the second has asymptotic error 1/4 that of
Simpson's rule with the same number of nodes.
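Convergence orders such as these are easy to verify empirically by fitting the slope of log-error against log h; a sketch of such a check, where the rule, the integrand, and the exact value are supplied by the caller:

```python
import numpy as np

def observed_order(rule, f, exact, ns):
    """Estimate the convergence order of a quadrature rule empirically:
    rule(f, n) returns the approximation with n interior points; the
    order is the slope of log|error| versus log h, with h ~ 1/n."""
    errs = np.array([abs(rule(f, n) - exact) for n in ns], dtype=float)
    hs = 1.0 / np.asarray(ns, dtype=float)
    return np.polyfit(np.log(hs), np.log(errs), 1)[0]
```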
The lowest-order rule presented for logarithmic singularities approximates
$\int_0^1 g(x)\,dx$ with error of order $O(h^2 \log h)$ for $g(x) = \varphi(x)\log x + \psi(x)$, where
$\varphi$ and $\psi$ are regular functions on $[0, 1]$. The corresponding rule for the
singularity $x^{-1/2}$ is not quite as simple, for there the corresponding quantity in (93)
involves $\zeta(1/2)$.
Table 2. Relative errors in the computation of the integral in (94), for the regular case s(x) = 0. Quadrature rules with convergence of order 2, 4, 8, 16, and 32 were used with various numbers of nodes. Here φ is the oversampling factor.
nodes  φ
90     1.41
100    1.57
200    3.14
260    4.08
Table 3. Relative errors for the singular case s(x) = x^{-1/2}, for various numbers of nodes and orders of convergence.
nodes  φ
90     1.41
100    1.57
200    3.14
260    4.08
6.2. Quadrature performance. To demonstrate the performance of the quadrature
rules, they were used in a Fortran routine (with real*8 arithmetic) to numerically
compute the integrals (94) for the functions $s(x) = 0$, $s(x) = x^{-1/2}$, and $s(x) = \log x$.
These integrals were also
obtained analytically and the relative error of the quadratures was computed. The
numerical integrations were computed for various orders of quadrature and various
numbers of nodes. Minimum sampling was taken to be two points per period of the
cosine (i.e., $200/\pi \approx 63.7$ quadrature nodes). The accuracies were then compared for
various degrees of oversampling. The quadrature errors are listed in Tables 2-4 and
plotted, as a function of oversampling factor, in Figure 1. We note that the graphs are
Table 4. Relative errors for the singular case s(x) = log x. Here the error is of order O(h^l log h), where l is shown.
nodes  φ
90     1.41
100    1.57
200    3.14
260    4.08
[Figure 1. The relative errors of the quadratures, shown in Tables 2-4, are plotted using logarithmic scaling on both axes.]
nearly straight lines (until the limit of machine precision is reached), as predicted from
the theoretical convergence rates. We remark also that excellent accuracy is attained
for even quite modest oversampling when quadratures with high-order convergence
are employed. For problems where the number of quadrature nodes is the major cost,
therefore, one may benefit by using the high-order quadratures even for modest
accuracy requirements.
HYBRID GAUSS-TRAPEZOIDAL QUADRATURE RULES 1575
Table
Relative errors in the computation of the integral in (95), for =1. Quadrature rules dened
in Corollary 3:10 with j =1; 2; 4; 8, and 16 were used with various numbers m of nodes.
We test the quadratures for improper integrals by numerically computing for
1 the integral
where H is the Heaviside step function. The integrand is oscillatory and decays
like x1 as x !1. The quadratures dened in Corollary 3.10 are employed, for
which the integral is split into a regular integral on a nite interval, chosen here
to be [5 m=4; 5 m=4], where m is the total number of quadrature nodes, and
two improper integrals in the imaginary direction, using Laguerre quadratures. The
quadrature errors are shown in Table 5.
7. Applications and summary. The chief motivation for the hybrid Gauss-
trapezoidal quadrature rules is the accurate computation of integral operators. We
dene an integral operator A by the formula
Z
where is a regular, simple closed curve in the complex plane, the function f is
regular on , and the kernel regular function of its arguments,
except where they coincide; we assume
with and ^ regular on and s regular on (0; 1), with an integrable singularity
at 0. A large variety of problems of classical physics can be formulated as integral
equations that involve such operators. When the operator occurs in an integral equa-
tion
some choice of discretization must be used to reduce the problem to a nite-dimensional
one for numerical solution. In the Nystrom method the integrals are replaced by
quadratures to yield the nite system of equations
Xm
1576 BRADLEY K. ALPERT
This linear system can be solved for f(x1);::: ;f(xm) by a variety of techniques. The
particular choice of xi and wij for ;mdetermines the order of convergence
(and therefore eciency) of the method.
For a curve parametrization :[0; , such as scaled arc length, the operator
A becomes
is convenient to use a uniform discretization 1=m; 2=m;::: ;1int and
How then is wij determined? We assume for the moment
that f is available at locations other than x1;::: ;xm. Continuing periodically with
period 1, and using the Gauss-trapezoidal quadratures, we obtain
Z 1+i=m
for dened by the formula
are determined for the singularity s of
K. Provided that the periodic continuation of is suciently regular, the quadrature
will converge to the integral with order greater than j as m !1, for i =1;::: ;m.
We relax the restriction that f be available outside x1;::: ;xm by using local Lagrange
interpolation of order equispaced nodes,
(j 1)=2c and
r s
s=0;s=rNow wij for determined by combining (97){(101). The computation
of all m2 coecients requires m(m +2j 2a evaluations of the kernel K and
therefore order O(m2) operations. This cost can often be substantially reduced using
techniques that exploit kernel smoothness (see, for example, [6], [5]).
A slightly dierent application of the quadratures is the computation of Fourier
transforms of functions that fail to satisfy the assumptions usually made when using
the discrete Fourier transform. In particular, if a function decays slowly for large
argument or is compactly supported and singular at the ends of the support interval,
these quadratures can be used to compute its Fourier transform. One example of such
a function is that in (95). Since most of the nodes in these quadratures are equispaced,
with function values given equal weight, the fast Fourier transform can be used to
do the bulk of the computations; the overall complexity is O(n log n), where n is the
number of Fourier coecients to be computed. Other applications may include the
HYBRID GAUSS-TRAPEZOIDAL QUADRATURE RULES 1577
representation of functions for solving ordinary or partial dierential equations, when
high-order methods are required. In addition, an extension of these quadratures to
integrals on surfaces is under study.
In summary, the characteristics of the hybrid Gauss-trapezoidal quadrature rules
include
arbitrary order convergence for regular functions or functions with known
singularities of power or logarithmic type,
positive weights,
most nodes equispaced and most weights constant, and
invariant nodes and weights (aside from scaling) as the problem size increases.
The primary disadvantage of the quadrature rules, shared with other Gaussian quadratures
but exacerbated here by poor conditioning, is that the computation of the nodes
and weights is not trivial. Nevertheless, tabulation of nodes and weights for a given
order of convergence allows this issue to be avoided in the construction of high-order,
general-purpose quadrature routines.
Appendix
. Tables of quadrature nodes and weights.
Quadrature nodes and weights may also be obtained electronically from the author
Table
ThePnodes and weights for the quadrature rule
several choices of j and corresponding
minimum integer a.Forf a regular function, converges to f(x) dx as n !1withconvergence of order O.
Oa xi wi
HYBRID GAUSS-TRAPEZOIDAL QUADRATURE RULES 1579
Table
(Continued)
Oa xi wi
3.14968 50162 29433D01
1.39668 57813 42510D+00
2.17519 59032 06602D+00
3.06232 05758 80355D+00
Table
PThe nodes v1;:P:: ;vj and weights Pu1;::: ;uj for the quadrature rule Sn (g)=
g(x)=x1=2(x)+^(x), with and ^ regular functions. The nodes x1;::: ;xk and weights
w1;::: ;wk are found in Table 6.
Oa vi ui
1.00000 00000 00000D00 1.08019 20374 73384D+00
2.5 2 6.02387 37964 08450D02 2.85843 99904 20468D01
2.24632 55125 21893D01 4.87348 40566 46474D01
1.00000 00000 00000D+00 9.73575 20666 00344D01
2.69428 63467 92474D01 5.07743 45780 43636D01
2.57822 04347 38662D01 4.95598 17403 06228D01
2.00000 00000 00000D+00 1.00241 65465 50407D+00
8.28301 97052 96352D02 1.75524 44045 44475D01
4.13609 49257 26231D01 5.03935 05038 58001D01
1.08874 43736 88402D+00 8.26624 13396 80867D01
2.00648 21018 52379D+00 9.77306 58489 81277D01
3.00000 00000 00000D+00 9.99791 98099 47032D01
3.22395 27000 27058D02
1.79093 53836 49920D01
5.43766 38052 44631D01
1.17611 66283 96759D+00
2.03184 82107 16014D+00
4.00000 00000 00000D+00
6.19984 48842 97793D03
7.10628 67917 20044D02
2.40893 01044 10471D01
7.59244 65404 41226D01
9.32244 63996 14420D01
9.92817 14381 60095D01
9.99944 91256 89846D01
10.
HYBRID GAUSS-TRAPEZOIDAL QUADRATURE RULES 1581
Table
(Continued)
Oa vi ui
1.54042 43511 15548D02
8.83424 84071 96555D02
2.82446 20545 09770D01
6.57486 98923 05580D01
1.24654 10609 77993D+00
2.03921 84951 30811D+00
2.97933 34870 49800D+00
4.99724 08043 11428D+00
5.99986 87939 51190D+00
7.00000 00000 00000D+00
2.92101 89269 12141D03
3.43113 06112 56885D02
1.22466 94956 38615D01
2.76110 82420 22520D01
6.96655 56772 71379D01
8.79007 79419 72658D01
9.86862 24492 94327D01
1.00000 23977 96838D+00
14.0 9 3.41982 14602 49725D04
9.29659 34301 87960D03
4.21848 66056 53738D01
2.16099 75052 38153D+00
4.99973 27079 05968D+00
5.99987 51919 71098D+00
8.00000 00000 00000D+00
1.75095 72432 02047D03
2.08072 65842 87380D02
7.58683 06164 33430D02
3.20662 43620 72232D01
8.24495 90253 66557D01
9.84576 84431 63154D01
9.99285 27691 54770D01
1.00000 00814 05180D+00
5.89843 27437 09196D03
1.14558 64950 70213D01
2.79034 42188 56415D01
5.60011 37986 53321D01
9.81409 12428 83119D01
2.27017 91140 36658D+00
3.10823 46017 15371D+00
4.03293 08939 96553D+00
Table
PThe nodes v1;:P:: ;vj and weights Pu1;::: ;uj for the quadrature rule Sn (g)=
functions. The error is of order O(hl log h). The
nodes x1;::: ;xk and weights w1;::: ;wk are found in Table 6.
la vi ui
HYBRID GAUSS-TRAPEZOIDAL QUADRATURE RULES 1583
Table
(Continued)
la vi ui
2.44111 00950 09738D02
1.15385 12974 29517D01
7.32974 05318 07683D01
2.11435 87523 25948D+00
3.02608 45496 55318D+00
5.00014 11700 55870D+00
6.00000 10024 41859D+00
6.36419 07807 20557D03
4.72396 41432 87529D02
1584 BRADLEY K. ALPERT
Acknowledgments
. The author thanks Gregory Beylkin, Vladimir Rokhlin,
and Ronald Wittmann for helpful discussions and encouragement.
--R
On the Numerical Solution of One-Dimensional Integral and Dierential Equa- tions
A Nystro
Generalized Gaussian quadrature rules for systems of arbitrary functions
Handbook of Mathematical Functions (10th print- ing)
Higher Transcendental Functions
Advanced Mathematical Methods for Scientists and Engi- neers
Introduction to Numerical Analysis
Tchebyche Systems with Applications in Analysis and Statistics
Polynomials and Polynomial Inequalities
User's Guide to PARI-GP
On the construction of Gaussian quadrature rules from modied moments
Calculation of Gaussian quadrature rules
--TR
--CTR
Roberto Camassa , Jingfang Huang , Long Lee, Integral and integrable algorithms for a nonlinear shallow-water wave equation, Journal of Computational Physics, v.216 August 2006
Thomas Hagstrom , George Hagstrom, Grid stabilization of high-order one-sided differencing I: First-order hyperbolic systems, Journal of Computational Physics, v.223 n.1, p.316-340, April, 2007
Donna Calhoun, A Cartesian grid method for solving the two-dimensional streamfunction-vorticity equations in irregular regions, Journal of Computational Physics, v.176 n.2, p.231-275, March 1, 2002
Sheon-Young Kang , Israel Koltracht , George Rawitscher, Nystrm-Clenshaw-Curtis quadrature for integral equations with discontinuous kernels, Mathematics of Computation, v.72 n.242, p.729-756, 1 April
Bradley Alpert , Leslie Greengard , Thomas Hagstrom, Nonreflecting boundary conditions for the time-dependent wave equation, Journal of Computational Physics, v.180 n.1, p.270-296, July 20, 2002 | high-order convergence;numerical integration;positive weights;singularity;gaussian quadrature;euler-maclaurin formula |
325962 | Simulations of Acoustic Wave Phenomena Using High-Order Finite Difference Approximations. | Numerical studies of hyperbolic initial boundary value problems (IBVP) in several space dimensions have been performed using high-order finite difference approximations. It is shown that for wave propagation problems, where the wavelengths are small compared to the domain and long time integrations are needed, high-order schemes are superior to low-order ones. In fact, in two dimensions an acoustic lens is simulated, leading to large scale computations where high-order methods and powerful parallel computers are necessary if an accurate solution is to be obtained. | Introduction
In this paper we present results of numerical simulations using high-order finite difference
methods. In particular we are interested in hyperbolic wave propagation
problems where accurate long time integrations are needed. For such problems
high accuracy in both time and space are needed if an accurate solution is to be
obtained. The efficiency of high-order methods has been studied by Kreiss and
Oliger in [4] and by Swartz and Wendroff in [12]. These studies are done on a
scalar hyperbolic model problem with periodic boundary conditions. In [7] and
numerical experiments on hyperbolic IBVP show the efficiency of high-order
methods compared with low-order ones. In this paper we will use two different
types of high-order finite difference approximations for hyperbolic IBVP. The first
method uses difference operators that satisfy a summation by parts rule. The numerical
boundary conditions are built in into the operator. To treat the analytic
boundary conditions a projection operator is used [8], [9]. For the second method
numerical boundary conditions are obtained by extrapolation of the outgoing characteristic
variables and by using the analytic boundary conditions and the PDE
for the ingoing characteristic variables [3], [11].
In section 2 and 3 these difference methods are used to solve hyperbolic IBVP.
In section 2 the efficiency of high-order approximations is compared to second-order
approximations on a scalar model equation. The number of points per
wavelength and points per period needed to obtain a certain accuracy for long
Department of Scientific Computing, Uppsala University, Uppsala, Sweden
time integrations is studied. These results are then compared with corresponding
results by Kreiss and Oliger [4] and Swartz and Wendroff [12] where periodic
boundary conditions are used.
In section 3 we simulate plane waves that are refracted when going through a
convex lens. The refracted waves then intersects at a point called the focus. This
can be simulated by solving the two-dimensional wave equation in a channel with
a plane wave entering the domain. The convex lens is modeled with a variable
velocity of sound in the channel. To be able to distinguish the focus it is important
to have high accuracy in both space and time to make errors from dispersion small.
1.1 Summation by Parts and Projections
We discretize by dividing the x-axis into intervals of length h, and use for
the notation
d is a grid vector function. The basic idea is to approximate @=@x
with a discrete operator Q, satisfying a summation by parts rule
where
N ); are grid vectors and
In reality Q is a banded matrix with centered difference stencils in the interior and
one-sided difference stencils near the boundaries. The scalar product and norm is
defined as
where h is the grid spacing and H is the norm matrix having the form
I
The existence of such operators has been proved by Kreiss and Scherer [6], [5]. In
[10] Strand has used their theory to construct such operators. These operators
are distinguished with respect to the form of the norms for the summation by
parts rule. Here we will use two different kinds of norms, diagonal norms and
restricted full norms. The corresponding difference operators are called diagonal
norm difference operators and restricted full norm difference operators. For the
diagonal norm the matrices H (1) ; H (2) are diagonal and for the restricted full norm
they have the form
are scalars.
To define difference operators in two space dimensions we denote by
functions and by Q 1 and Q 2 the difference operators
approximating @=@x 1 and @=@x 2 . The difference operators are written as
are the grid spacings in the x 1 - and x 2 -direction, and the q's are
defined such that summation by parts and accuracy conditions holds. In [8] it is
showed that the difference operators defined by (2) satisfy a summation by parts
rule in two dimensions if the summation by parts norm is a diagonal norm.
In [8], [9] Olsson has proved stability for high-order approximations of hyperbolic
and parabolic systems by using such difference operators. To treat the
analytic boundary conditions in a correct way they are represented as an orthogonal
projection. In several space dimensions stability can only be proved if the
difference operators satisfy a summation by parts rule in a diagonal norm. For
the restricted full norms such results can not be proved. However, in section 3
numerical computations of the two-dimensional wave equation show that for this
problem the restricted full norm difference operator results in a stable scheme.
In the following we will refer to this type of method as the summation by parts
method.
1.2 Strongly Stable Approximations
For the second type of method used here, the extra numerical boundary conditions
needed to close the high-order finite difference approximations near the
boundaries, are obtained by extrapolation of the outgoing characteristic variables,
and by differentiating the analytic boundary conditions and using the PDE for the
ingoing characteristic variables. This technique was first proposed by Gustafsson,
Kreiss and Oliger [3] who proved strong stability for a fourth-order approximation
of systems of hyperbolic PDEs in one space dimension. This scheme was later
generalized to general order of accuracy 2r by Strand [11]. Strong stability means
that an estimate of the solution at any given time is obtained in terms of the
forcing function, initial data and boundary data. In several space dimensions this
technique is applicable although no stability results have been proved. We will
refer to this type of method as the strongly stable method.
2 Efficiency of High-Order Methods
In this section we will compare the efficiency of high-order finite difference approximations
with second-order ones when long time integrations are needed. This is
done by computing the errors of the numerical approximations using the methods
mentioned in section 1 when applied to a scalar hyperbolic IBVP. It is then shown
that with non-periodic boundary conditions, there is no restriction on the number
of points per period and points per wavelength needed to obtain a certain error,
compared with the case of periodic boundary conditions. In the periodic case we
refer to classical results by Kreiss and Oliger [4], for semi-discrete approximations,
and Swartz and Wendroff [12] for fully discrete approximations.
2.1 Periodic Case
Here we present some of the results in [4] and [12] regarding the periodic problem
We approximately solve the problem by
dt
where
r
is a centered difference operator. The solution to this system is
!h
and has the phase error
Let P denote the number of periods we want to compute in time , the
wavelength and -
h the number of points per wavelength. Then
c and we
define
c
as the phase error per period.
For the fully discrete approximation we proceed as in [12] except that we here
use the standard fourth-order Runge-Kutta method in time. The system (6) can
be written as
dt
j be the approximate solution of the differential equation (3-5) obtained by
using the fourth-order Runge-Kutta method on (8). That is,
where \Deltat is the time step. Then the solution at
where
Denote by
!c\Deltat the number of time intervals per period. With
the exact solution to (3), we have
where we used the stability of the Runge-Kutta method, jRj - 1, in the third step.
By using
Thus from (10) we conclude
As in [12] an error function, valid for small errors, is now defined as
Table
1-3 show the error e defined by (12) for second-, fourth-, and sixth-order
approximation in space combined with the fourth-order Runge-Kutta method in
time. The error is shown for various number of points per wavelength, -, and CFL
h . The number of points per period can then be expressed as
-=CFL. Furthermore we note that the approximation (9) is stable for CFL -
2.85, 2.06, 1.79 for second-, fourth-, and sixth-order approximation in space.
Here we are interested in very long time integrations and table 1-3 show the
theoretical estimates on the error for up to 900 periods. These estimates will be
used later to compare with numerical results when non-periodic boundary conditions
are used.
Table
1: e, for second-order centered difference approximation of u t +cu
Table
2: e, for fourth-order centered difference approximation of u t +cu
Table
3: e, for sixth-order centered difference approximation of u t +cu
2.2 Non-Periodic Boundary Conditions
Consider the problem
where
is the wavelength and 10-. The solution, given
by u(x; is a wave packet, containing 10 wavelengths, traveling with
speed one along the x-axis.
We now want to solve (13) numerically by using the methods presented in
section 1. We will here use the method of lines approach by first defining semi-discrete
approximations and then using the standard fourth-order Runge-Kutta
method for integration in time. We discretize as in the periodic case and turn to
the semi-discrete approximations of (13).
2.2.1 Summation by Parts Method
Denote by vector and use the following discrete form of
the analytic boundary condition
To treat the analytic boundary condition in a correct way we will use a method
by Olsson [8], [9] where the boundary condition is represented as an orthogonal
projection with respect to the scalar product
(\Delta; \Delta) h . In [8] it is shown that the projection becomes
where I is the identity matrix and H the summation by parts norm. A semi-discrete
approximation is then defined as
~
where P is the projection operator and Q a difference operator that satisfy a
summation by parts rule in the scalar product (\Delta; \Delta) h . In [8] it is proved that if
the initial condition satisfies the boundary condition this approximation is stable.
This is obviously true for our problem and we have a stable scheme.
2.2.2 A Strongly Stable Method
For the second method mentioned in section 1 we add extra grid points, x
r. The differential equation (13)
is approximated for by a centered finite difference scheme of order
dt
where Q is the difference operator defined by (7). In [11] it is proved that with
the numerical boundary conditions
at the inflow boundary and
at the outflow boundary, the approximation (19)-(21) is strongly stable and the
error of the solution is of order h 2r . By using these numerical boundary conditions
to modify the difference operator close to the boundary we can write the
approximation (19)-(21) as an ODE system
where ~
Q is the (N \Theta N)-matrix obtained from the difference
operator Q and the boundary conditions (20), (21), and F is a vector containing
g(t) and its derivatives.
2.2.3 Numerical Results
Here we have used the fourth-order Runge-Kutta method to integrate the semi-discrete
systems (18) and (22) in time for 450 and 900 periods. This is done for
different number of points per wavelength, -, and different CFL numbers. We
assume that is the number of grid points in the interval
is the wavelength. For the summation by parts method we have
used a diagonal norm difference operator of order three at the boundary and six
in the interior, and a restricted full norm operator of order three at the boundary
and four in the interior. We will refer to these operators as the fourth-order
diagonal norm operator (D4) and the fourth-order restricted full norm operator
(RF4). Here the order of the difference operator refers to the order of the global
accuracy that the theory by Gustafsson [1], [2] predicts. There it is proved that
boundary conditions of at least order must be imposed to retain pth-order
global accuracy. For the strongly stable method we have used a second-, fourth-,
and sixth-order scheme in space.
Let u denote the exact solution and u h the approximate solution and define by
the error at a given point. Thus, e 1 corresponds to the theoretical
estimate of the error in the periodic case, e.
Table
4-6 displays the error e 1 when 450 and 900 periods in time have been
computed and figure 1-3 shows approximate and exact solution for the strongly
stable schemes. Comparing with the periodic case, table 1-3, we see that e in
general is larger then the computed error e 1 . An explanation for this is that we
maybe made a too rough estimate in the third step in (10) where we replaced
jR(i\Deltatfi)j with one. For large \Deltatfi we will have jR(i\Deltatfi)j ! 1 and the error will
be less than what is predicted by the estimate. Furthermore, the error function
(12) is only valid for small phase errors e f . When we increase the number of grid
points per wavelength the agreement is better and (12) can be used to predict the
number of grid points per wavelength needed, even when the boundary conditions
are non-periodic.
To compare the efficiency of the schemes we define as in [12] the work per
period per wavelength to obtain a certain error by
where d is the number of operations per mesh point per time step. d, of course
depends on the spatial accuracy. In [12] the optimal values of - and M were
obtained by minimizing w for a fixed error. Here we will use table 4-6 to obtain an
optimal combination (-; CFL;w) for a given error. These tables do not in any way
cover all combinations (-; CFL) and we will not claim that we found the optimal
combination over all (-; CFL).
As table 4 shows the errors for the second-order scheme are large even for
140 points per wavelength. If we compute for 450 periods with the strongly stable
schemes and want to have e 1 - 0:85, optimal schemes are for second-order
(140,2,245000), for fourth-order (20,1,13200), and for sixth-order (10,0.5,8200).
Thus, the fourth-order scheme are about times more efficient than the second-order
0one, and the sixth-order scheme about times more efficient. It should
be pointed out that this is a very large error and yet the high-order schemes are
much more efficient than the second-order one. If we compute for 900 periods
the errors for the second-order scheme are larger than one even for 140 points per
wavelength. If we compare the high-order schemes for an error e 1 - 0:65, optimal
schemes are for fourth-order (25,1,20625) and for sixth-order (20,1,16400), i.e. the
sixth-order scheme is 1.2 times more efficient than the fourth-order scheme. Table
5 clearly shows that the efficiency of the sixth-order scheme compared with the
fourth-order scheme increases with decreasing error. Here we have only considered
the strongly stable schemes but the schemes satisfying a summation by parts rule
gives the same errors for the same number of points per wavelength and points
per period.
Table
4: for the second-order strongly stable scheme for CFL=1.0 and
CFL=2.0.
Table
5: strongly stable schemes for CFL=0.5 and
CFL=1.0.
fourth-order sixth-order
Table
summation by parts schemes for CFL=0.5 and
CFL=1.0.
fourth-order fourth-order
diagonal restricted full
-o- approx
exact
-o- approx
exact
-o- approx
exact
-o- approx
exact
Figure
1: Approximate solution after 450 periods obtained with the second-order
-o- approx
exact
-o- approx
exact
-o- approx
exact
-o- approx
exact
Figure
2: Approximate solution after 450 periods obtained with the fourth-order
strongly stable scheme.
-o- approx
exact
-o- approx
exact
-o- approx
exact
-o- approx
exact
Figure
3: Approximate solution after 450 periods obtained with the sixth-order
3 The Acoustic Lens Problem
As an application where high-order accurate approximations are needed we will
simulate plane acoustic waves that are refracted to a focus when they are traveling
through a convex lens. This can be simulated by solving the two-dimensional wave
equation in a channel
where have the components pressure and velocity in the x 1 - and
-direction and
The velocity of sound c(x 1 defined by
simulates a convex lens. At x we have the boundary
cu) is the local inflow component and
ae sin 2 ( -
a
\Gamma3 is the wavelength and a = 10P , where
the period. This boundary condition simulates a incoming plane wave traveling to
the right. At x set the inflow component to zero, i.e.2 (p \Gamma cu)(1; x
At
The problem is symmetric with respect to the x 1 -axis and for the numerical
computations we will use this symmetry and only compute the solution in the
domain 1=15. The symmetry conditions are given for
3.1 Numerical Methods
To solve the problem (23) numerically we first define semi-discrete approximations
by using the summation by parts method and the strongly stable method.
To integrate the semi-discrete approximations in time the standard fourth-order
Runge-Kutta method is used.
We discretize in space and leave time continuous by dividing the domain in
intervals of length h 1 and h 2 in the x 1 - and x 2 -directions. For
we use the notation
is a vector grid function.
3.1.1 Summation by Parts Method
Here we will use the difference operators defined in section 1.1. However, we modify
the difference operator approximating @=@x 2 , at grid points near x
we let the interior scheme and the symmetry conditions (28) define the difference
operator instead of the one sided difference stencils. To define a semi-discrete
approximation of (23) we proceed as in [8]. We express the analytic boundary
conditions as
@\Omega and
ae
and
Define a grid vector by u
The boundary conditions are discretized and written as
ij
with the non-zero element being the ith entry and ~
1. The boundary conditions can then be written as
~
~
In [8] it is shown that when the summation by parts norm is a diagonal norm the
projection operator (17) is independent of the norm, i.e.
A semi-discrete approximation is then defined as in [8]
where A 1 and A 2 here denotes the grid matrix representations of A fl
are the difference operators approximating @=@x 1 and
with
In [8] stability is proved for systems of the type (32) when the difference operator
satisfy a summation by parts rule with respect to a diagonal norm. Here we
will also use restricted full difference operators. In this case the operator does not
satisfy a summation by parts rule in two dimensions and a projection operator can
not be defined. However, in one dimension the projection operator is the same
for diagonal and restricted full norm difference operators and we therefore use
the projection operator defined by (31) also for the restricted full norm difference
operators.
3.1.2 A Strongly Stable Method
Here we need extra grid points outside the domain 0 - x 1 -
we will apply centered difference stencils of order 2r at the first interior points.
Therefore we add grid points, x 1 i
in the x 1 -direction and x 2 j
in the x 2 -direction.
We approximate (23) by a centered difference approximation of order 2 r
ij and
r
To derive extra boundary conditions for the inflow components we regard the
velocity locally constant at the boundaries x
since the derivatives of c are small. At x need 2r conditions on p; u
and at x need 2r conditions on p; v. At x we note that
the boundary conditions (24), (26) do not depend on x 2 , therefore v(j; x
By differentiating the boundary
conditions with respect to time and using the differential equation we have for
@x k1
where g (k) (t) is the k-th derivative of g(t). At x the boundary condition
gives us v The differential equation,
Furthermore, by p x2
But, us
Integration of (36) with respect to t and the initial condition implies
Thus we have arrived at the condition
Repeating this procedure we get at x
@ 2k
We approximate (34) and (35) for even k's, thus for
where D+ fl
are the usual forward and backward differences in the the x
direction. At x
(D+2
For the locally outgoing characteristic variables we use extrapolation to derive
extra conditions. Thus for
Finally, at x we use the symmetry conditions (28), i.e. for
Summing up, to integrate the ODE system (33) in time we proceed as follows.
Given the solution, use (38),
(40) to compute p; u at the boundary and outside the domain at x
(39), (40) to compute p; v at the boundary and outside the domain at x
At the symmetry conditions (41) is used to compute p; v outside the
domain. This will give us p at all points except at the two corner points
, u at all points except at the boundary x
and v at all points except at the boundaries x 1. Thus, all points that need
to be specified for the integration of (33) are well defined.
3.2 Simulations on a Parallel Computer
The solution to our problem is a plane wave packet traveling essentially in the
-direction. The focus will occur at the end of the channel and the waves have to
be propagated through the whole channel. The sharpness of the focus will depend
on the number of wavelengths that can be contained in the x 2 -direction. When
the wavelength goes to zero the focus will be a point with infinite amplitude.
By numerical experiments we have found that at least 30 wavelengths in the x 2 -
direction are needed. In our case we have which means that the size
of the domain is 500 wavelengths in the x 1 -direction, and 33.3 wavelengths in the
-direction. Thus, to propagate the wave to the focus we have to compute up to
500 periods in time.
Since the wavelength is very small compared to the size of the domain and
the number of periods we have to compute is large, we need a powerful computer
with large memory to be able to solve this problem. For example, from the one-dimensional
problem we needed at least 10 points per wavelength for a fourth- or
sixth-order scheme when we computed 450 periods. In the x 2 -direction we do not
need that many points, since in that direction the solution is smoother. Thus,
if we chose 10 points per wavelength in the x 1 -direction and 3 points in the x 2 -
direction as a lower limit on the number of points needed, we will have 1:5
unknowns. Since h 1 - h 2 we define the CFL number as the quotient between
the time step and the space step in the x 1 -direction. Thus, for CFL= 0:5 we
have to do 10000 iterations in time. However, since the solution is a wave packet
traveling to the right we only compute those points in the x 1 -direction where the
solution is non-zero. Numerical results show that about 50-100 wavelengths in the
-direction have to be computed, giving about 0:15 unknowns
in each iteration. This is a lower limit on the number of unknowns and it grows
rapidly if we increase the number of points per wavelength.
The implementation was made on a 96 processors SP2 distributed memory
computer from IBM, at the Center for Parallel Computers at KTH, equipped with
66.7 MHz RS/6000 processors, giving a theoretical peak performance per processor
of 266 MFlop/s and 26 GFlop/s for the whole machine. The program was written
in Fortran 90 with MPI (Message Passing Interface) for communication between
the processors.
The computations are made in a moving window, figure 4, that follows the
wave through the domain. The size of the window is 100-200 wavelengths in the
-direction and the whole domain in the x 2 -direction. In the window only those
points in the x 1 -direction where the absolute value of the solution is larger than
are computed. As shown in figure 4 the window is divided into p strips
oriented in the x 1 -direction and processor q in the parallel computer is assigned
to
subdomain\Omega q . Since we use explicit difference operators in space and explicit
time integration on a uniform grid, the program is trivially parallel. The main
communication between the processors is to exchange p and v, needed to compute
-derivatives, at the interior boundary between two sub-domains.
In figure 5 contour plots of p 2 at different times are displayed, obtained by
integrating (32) in time with the standard fourth-order Runge-Kutta method and
the fourth-order diagonal norm difference operator in space, with
denotes the number of points per wavelength in the x fl -direction.
The wave packet is refracted and the focus occur after about 450 periods, i.e after
0.9 sec. To find the exact location of the focus this scheme and the sixth-order
strongly stable scheme were run with and the location of the
focus was defined as the time, t, where This happened at the
same time, schemes and the solutions, p 2 , is shown in figure 6.
Figure
7-9 show the focus, p 2 at different number of points per
wavelength, obtained with the summation by parts method and the strongly stable
method, combined with the fourth-order Runge-Kutta method in time. All
schemes work very well although stability results in two dimensions exists only
for the summation by parts method combined with a diagonal norm difference
operator. To compare these schemes an error, e, is defined as the number of
wavelengths that the center of the focus for the approximative solutions is behind
the center of the focus for the "exact" solutions obtained above. The center
of the focus is defined as follows. Let - x denote the point on the x 1 -axis where
occured. The center of the focus is then defined as the
on the x 1 -axis, where - x
x is defined as the largest
point where
x is defined as the
smallest point where
In table 7 the error, e,
and the consumed CPU time is compared for the high-order schemes. All computations
in table 7 were made on 10 processors. First of all one can note that
the consumed CPU time for the strongly stable schemes are lower than for the
summation by parts schemes. This is because of different implementations. A
notable observation is that for the same number of points per wavelength, the
sixth-order schemes often need less CPU time to reach the focus compared with
the fourth-order schemes. The explanation for this is that with few grid points per
wavelength the dispersion for sixth-order schemes are lower than for fourth-order
schemes and we do not need to compute as many wavelengths in the x 1 -direction
with the sixth-order schemes as with the fourth-order schemes. By comparing
the CPU time needed to obtain a certain error we see that for this problem the
sixth-order schemes are much more efficient than the fourth-order schemes. In this
application the boundaries have a relatively small influence on the solution, and
the scheme (D4) behaves as a sixth-order scheme, which table 7 shows. This is
because it is the sixth-order stencils in the interior that are significant. In figure
7 the focus is computed with a second-order scheme with
25. The CPU time needed to compute these solutions was in the
former case 5.1 hours on 20 processors, and in the latter case 10.8 hours on processors
.The corresponding errors of the focus was 2.36 and 0.91 wavelengths. By
comparing the latter solution with the strongly stable schemes, with
we see that although they have a smaller error and only 10 processors were
used, they require about 11 and 23 times less CPU time than the second-order
scheme needs on processors. For a smaller error the efficiency of the high-order
schemes compared to the second-order scheme should be much more pronounced.
In fact, for small errors of order 0.1, say, it is very doubtful if one can afford to
use second-order methods even on powerful parallel computers.
Conclusions
Numerical studies on hyperbolic IBVP in one and two space dimensions have been
performed using two different high-order finite difference methods. For the summation
by parts method a fourth-order diagonal norm difference operator, (D4),
and a fourth-order restricted full norm difference operator, (RF4), was used. For
the strongly stable method fourth- and sixth-order difference operators was used.
In one space dimension the number of points per wavelength and points per period
needed to obtain a certain accuracy was studied on a scalar hyperbolic IBVP. It
was shown that with non-periodic boundary conditions there is no restriction on
the number of points per wavelength and points per period needed to obtain a
certain error compared with the periodic case. In two space dimensions we simulated
plane waves that was refracted to a focus when going through a convex lens.
The problem was solved by an implementation, using Fortran 90 and MPI, on a 96
processors SP2 distributed memory computer. All schemes worked well although
stability results only exists for the summation by parts scheme combined with a
diagonal norm difference operator. The efficiency of high-order schemes compared
with second-order schemes was demonstrated by comparing the CPU time needed
to compute the focus. In this application the sixth-order schemes turned out to
be the most efficient ones. The reason for this is that for few grid points per
wavelengths it has lower dispersion than fourth-order schemes which means that
fewer wavelengths in the x 1 -direction need to be computed.
Table
7: The error, e, and consumed CPU hours, T cpu , for high-order schemes and
different computations were made on 10 processors.
D4 RF4 fourth-order sixth-order
strongly stable strongly stable
Acknowledgments
I would like to thank my adviser Professor Bertil Gustafsson for helpful support.
Also, I would like to thank Docent Leif Abrahamsson for many stimulating and
helpful discussions.
computational domain are between
broken lines
propagating
wave
computation in
moving window
-x 0x 20
e
computed points are between
s and x 0
e
Figure
4: The computations are done in a moving window. The window is divided
into p strips oriented in the x 1 -direction and the processor q in the parallel
computer is assigned the
subdomain\Omega q .
x
y
x
y
100 periods0.70.28 0.300.07
x
y
x
y
200 periods1.00.48 0.500.07
x
y
x
y
300 periods2.30.68 0.700.07
x
y
x
y
x
y
periods
Figure
5: Contour plots of p 2 at different times for the fourth-order diagonal norm
scheme with
6th strongly stable
y
D4 summation by parts
Figure
Focus obtained by solving the wave equation with the fourth-order
diagonal norm scheme and the sixth-order strongly stable scheme with
y
100x25 points/wavelength
y
Figure
7: Focus obtained by solving the wave equation with the second-order
diagonal norm scheme with
y
RF4, 10x3 points/wavelength
y
RF4, 20x5 points/wavelength
y
y
D4, 20x5 points/wavelength
Figure
8: Focus obtained by solving the wave equation with high-order summation
by parts schemes for different number of points per wavelength and
4th, 10x3 points/wavelength
y
4th, 20x5 points/wavelength
y
6th, 10x3 points/wavelength
y
6th, 20x5 points/wavelength
Figure
9: Focus obtained by solving the wave equation with high-order strongly
stable schemes for different number of points per wavelength and
--R
The convergence rate for difference approximations to mixed initial boundary value problems.
The convergence rate for difference approximations to general mixed initial boundary value problems.
Time Dependent Problems and Difference Methods.
Comparison of accurate methods for the integration of hyperbolic equations.
Finite element and finite difference methods for hyperbolic partial differential equations.
On the existence of energy estimates for difference approximations for hyperbolic systems.
Summation by parts
Summation by parts
Summation by parts for finite difference approximations for d/dx.
The relative efficiency of finite difference methods.
--TR
--CTR
Bernhard Mller , H. C. Yee, Entropy Splitting for High Order Numerical Simulation of Vortex Sound at Low Mach Numbers, Journal of Scientific Computing, v.17 n.1-4, p.181-190, December 2002 | hyperbolic initial boundary value problems;wave propagation;high-order finite difference;acoustics |
325976 | Orderings for Incomplete Factorization Preconditioning of Nonsymmetric Problems. | Numerical experiments are presented whereby the effect of reorderings on the convergence of preconditioned Krylov subspace methods for the solution of nonsymmetric linear systems is shown. The preconditioners used in this study are different variants of incomplete factorizations. It is shown that certain reorderings for direct methods, such as reverse Cuthill--McKee, can be very beneficial. The benefit can be seen in the reduction of the number of iterations and also in measuring the deviation of the preconditioned operator from the identity. | Introduction
. In this paper, we study experimentally how di#erent reorderings
a#ect the convergence of Krylov subspace methods for nonsymmetric systems of
linear equations when incomplete LU factorizations are used as preconditioners. In
other words, given a sparse linear system of equations are n-dimensional
vectors, we consider symmetric permutations of the matrix A, i.e., of the
then solve the equivalent system P T
way of some preconditioned iterative method. Our focus is on linear systems arising
from the discretization of second order partial di#erential equations, which often are
structurally symmetric (or very nearly so) and have a zero-free diagonal. For these ma-
trices, it is usually possible to carry out an incomplete factorization without pivoting
for stability (that is, choosing the pivots from the main diagonal). Such properties are
preserved under symmetric permutations of A, but not necessarily under nonsymmetric
ones. Hence, we restrict our attention to symmetric permutations only. We stress
the fact that very di#erent conclusions may hold for matrices which are structurally
far from being symmetric, although we have little experience with such problems.
If A is structurally symmetric, the reorderings are based on the (undirected) graph
associated with the structure of A; otherwise, the structure of A A T is used. We
consider several iterative methods for nonsymmetric systems, including GMRES [43],
Bi-CGSTAB [48], and transpose-free QMR (TFQMR) [28]; for a description of these,
as well as a description of incomplete factorizations, see, e.g., [3], [42].
In this paper, we mainly concentrate on orderings originally devised for matrix
those used to reduce fill-in in the factors; see, e.g., [18] or [29]. We
want to call attention to the fact that a permutation of the variables (and equations)
# Received by the editors September 4, 1997; accepted for publication (in revised form) March 31,
1998; published electronically May 6, 1999.
http://www.siam.org/journals/sisc/20-5/32684.html
Computing Group (CIC-19), MS B256, Los Alamos National Laboratory, Los Alamos,
NM 87545 (benzi@lanl.gov). This work was supported in part by the Department of Energy through
grant W-7405-ENG-36 with Los Alamos National Laboratory.
# Department of Mathematics, Temple University, Philadelphia, PA 19122-6094 (szyld@math.
temple.edu). The work of this author was supported by National Science Foundation grant DMS-
9625865.
- Department of Computer Science, Leiden University, 2300 RA Leiden, The Netherlands (arno@
cs.leidenuniv.nl).
using reordering methods designed for direct solvers can have an important positive
e#ect on the robustness and performance of preconditioned Krylov subspace methods
when applied to nonsymmetric linear systems. This is especially the case if the matrices
are far from symmetric. This observation, although not completely new, is not
fully appreciated, to the point that some authors have concluded that direct solver
reorderings should not be used with preconditioned iterative methods; see section 2.
It is hoped that the present study will contribute to a reassessment of direct solver
reorderings in the context of incomplete factorization preconditioning. Furthermore,
we hope that the evidence of our experiments can set the stage for more widespread
use of these reorderings.
Many papers have been written on the e#ect of permutations on the convergence
of preconditioned Krylov subspace methods. The main contributions are surveyed,
together with some of our observations, in section 2. In section 3 we present our
numerical experiments and comment on those results. Finally, in section 4 we present
our conclusions.
2. Overview of the literature. The influence of reorderings on the convergence
of preconditioned iterative methods has been considered by a number of authors.
Several of these papers are concerned with symmetric problems only [8], [12], [14], [21],
[24], [27], [36], [37], [38], [45], [49]. In this context, Du# and Meurant [21] have performed
a very detailed study of the e#ects of reorderings for preconditioned conjugate
gradients, i.e., in the symmetric positive definite case. Based on their extensive exper-
iments, they concluded that the number of iterations required for convergence with
direct solver reorderings is usually about the same as, and sometimes considerably
higher than, with the natural (lexicographic) ordering. An important observation in
[21] is that the number of conjugate gradient iterations is not related to the number
of fill-ins discarded in the incomplete factorization (as conjectured by Simon [44]) but
is almost directly related to the norm of the residual matrix
is an incomplete Cholesky factor of A; see [1] for a rigorous derivation of this result
under appropriate conditions. Throughout the paper we abuse the notation and use
A to denote both the original matrix and the permuted one. Similarly, -
L and -
U refer
to the incomplete factors of A or those of P T AP , depending on the context.
As it can be seen in the experiments in section 3, for the test matrices that are
nearly symmetric, our observations are in agreement with those of Du# and Meurant:
the reorderings have no positive e#ect on the convergence of the preconditioned Krylov
methods. On the other hand, for the highly nonsymmetric test matrices, i.e., when
the nonsymmetric part is large, we conclude that reorderings can indeed make a big
di#erence. Permutations that appear to be ine#ective for the (nearly) symmetric case
turn out to be very beneficial, often improving the robustness and performance of
the preconditioned iteration dramatically. (It is worth emphasizing that in [21], only
symmetric problems are considered.)
In addition, we will see that for problems that are strongly nonsymmetric and/or
are far from being diagonally dominant, the norm of the residual matrix R alone is
usually not a reliable indicator of the quality of the corresponding preconditioner. It
has been pointed out, e.g., in [11], that a more revealing measure of the quality of the
preconditioner can be obtained by considering the Frobenius norm of the deviation
of the preconditioned matrix from the identity, i.e., #I - A( -
. Note that
this quantity is equal to #R( -
Even if R is small in norm, it could happen
that ( -
U) -1 has very large entries, resulting in a large deviation of the preconditioned
matrix from the identity. As a result, the preconditioned iteration fails to converge. A
1654 MICHELE BENZI, DANIEL B. SZYLD, AND ARNO VAN DUIN
notable example of this phenomenon occurs when convection-dominated convection-
di#usion equations are discretized with centered finite di#erences. When the natural
(lexicographic) ordering is used, the incomplete triangular factors resulting from a
no-fill ILU factorization tend to be very ill conditioned, even if the coe#cient matrix
itself is well conditioned. Allowing more fill-in in the factors, e.g., using ILU(1) or
ILUT instead of ILU(0), will solve the problem in some cases but not always. This
kind of instability of ILU factorizations was first noticed by van der Vorst [47] and
analyzed in detail by Elman [25].
We will see that in some cases, reordering the coe#cient matrix before performing
the incomplete factorization can have the e#ect of producing stable triangular factors,
and hence more e#ective preconditioners. Generally speaking, #R#F and #R( -
do not contain enough information to describe in a quantitative fashion the behavior
of incomplete factorizations. In particular, it is not possible in general to establish
comparisons between incomplete factorizations based on these quantities alone. This
is not surprising, considering that for general nonsymmetric problems it is not known
how to predict the rate of convergence of iterative solvers. In practice, however,
one can expect that very ill-conditioned incomplete -
L and -
U factors will result in a
poor preconditioner. As suggested in [11], an inexpensive way of detecting this ill-conditioning
is by computing #( -
denotes a vector of all ones. This
is only a lower bound for #( -
but is quite useful in practice.
The e#ects of permutations on preconditioned Krylov subspace methods for non-symmetric
problems have been considered in [9], [13], [15], [16], [17], [22], [33], [44],
[46]. Some authors have concluded that the reorderings designed for sparse direct
solvers are not recommended for use with preconditioned iterative methods; see, e.g.,
[14], [33], [44]. Simon [44] used quotient minimum degree and nested dissection [29] in
conjunction with an ILU preconditioner for some oil simulation problems and found
essentially no improvement over the original ordering. He wondered if this strategy
would be advantageous for other kinds of problems. Similar conclusions were reached
by Langtangen [33], who applied a minimum degree reordering with ILU(0) preconditioning
of matrices arising from a Petrov-Galerkin formulation for convection-di#usion
equations. It should be mentioned that neither the problems considered by Simon nor
those considered by Langtangen exhibited any kind of instability of the incomplete
triangular factors and that most of those problems can be regarded as fairly easy
to solve, at least by today's standards. Dutto [22], in the context of a specific application
(solving the compressible Navier-Stokes equations with finite elements on
unstructured grids), was possibly the first to observe that minimum degree and other
direct solver reorderings can have a positive e#ect on the convergence of GMRES with
preconditioning. This is mostly consistent with some of our own experiments
reported here.
In the context of oil reservoir simulations, reverse Cuthill-McKee and some variants
of it were found to perform satisfactorily for symmetric, strongly anisotropic
problems due to the fact that these orderings are relatively insensitive to anisotropies
[4], [12], [49]. Numerical experiments indicating that the D2 diagonal ordering (which
is a special case of Cuthill-McKee) for ILU(k) preconditioning of certain nonsymmetric
problems defined on rectangular grids can be superior to the natural ordering
were reported in [5], but this observation did not receive the attention it deserved.
This may be due in part to the fact that the authors use a terminology that is peculiar
to the field of reservoir simulation. A short paragraph mentioning that level
set reorderings (like reverse Cuthill-McKee) can be useful for preconditioned iterative
methods can be found in [42], and similar remarks are no doubt found elsewhere in
the literature. In spite of this, reorderings for direct methods are still widely regarded
as ine#ective (or even bad) for preconditioned iterative methods.
We also mention the minimum discarded fill (MDF) algorithm (see [13], [14]),
which takes into account the numerical values of the entries of A. This method can
be very e#ective, but it is often too expensive to be practical, except for rather simple
problems.
In addition to reorderings which were originally designed for direct methods, one
can use the permutation produced by the algorithm TPABLO [10], which also uses the
magnitude of the entries of the matrix. This algorithm produces a permuted matrix
with dense diagonal blocks, while the entries outside the blocks on the diagonal have
magnitude below a prescribed threshold. Hence, like MDF, the TPABLO reordering is
based both on graph information and on the numerical values. The original motivation
for the TPABLO algorithm was to produce good block diagonal preconditioners, and
also blocks for the treatment of certain Markov chain problems [10]; see also [23]. It
turns out that this reordering is also useful for point incomplete factorizations, where
it is often better than the natural ordering [6] and some of the reorderings considered
in this paper [7]. However, for most cases treated in this paper, the performance of
TPABLO is inferior to that of some of the reorderings designed for direct methods,
and thus we do not report results with it here. TPABLO might prove useful in the
context of block incomplete factorizations, but this topic is outside the scope of the
present paper.
Finally, a number of papers have considered other kinds of reorderings, such as
those motivated by parallel computing (e.g., multicoloring; see [8], [15], [27], [37]) and
reorderings based on the physics underlying the discrete problem being solved [9].
Such reorderings can be very useful in practice, but they are strongly architecture
and problem specific.
3. Numerical experiments. In this section we show, by means of numerical
experiments, that direct solver reorderings can be very beneficial when solving difficult
nonsymmetric linear systems (obviously, at best, little gains can be expected
from reordering problems which are easily solved with the original ordering). The results
reported here are a representative selection from a large number of experiments
with nonsymmetric matrices arising from the numerical solution of partial di#er-
ential equations. In the first subsection we focus on an important class of problems
(convection-di#usion equations discretized with finite di#erences), while in the second
we present a selection of results for matrices from a variety of applications. In the last
subsection we investigate reasons why reordering improves the performance of the pre-
conditioners. All the experiments were performed on a Sun Ultra SPARC workstation
using double precision arithmetic. Codes were written in standard Fortran-77.
3.1. Convection-di#usion equations. A source of linear systems which can
be challenging for iterative methods is the following partial di#erential equation in
the open unit
#x
#y
(1)
with homogeneous Dirichlet boundary conditions. Equation (1) has been repeatedly
used as a model problem in the literature; see, e.g., [39]. The problem is discretized using centered differences for both the second order and first order derivatives on a 32 × 32 interior grid (h = 1/33), leading to a block tridiagonal linear system of order 1024 with 4992 nonzero coefficients. While this is a small problem size, it exhibits the features we wish to address here. Numerical experiments were also performed with finer grids and with a three-dimensional analogue of the same PDE, and the results obtained were similar to those reported in Tables 1-4 below. For the right-hand side we used a random vector. Results similar to those in Tables 1-4 were obtained with other choices of b. In all our experiments we used v_0 = 0 as the initial guess, and we stopped the iterations when the 2-norm of the (unpreconditioned) residual b - Av_k had been reduced to less than 10^-4 or when a maximum number of iterations was reached. The parameter ε > 0 controls the difficulty of the problem: the smaller ε is, the harder it is to solve the discrete problem by iterative methods. For our experiments, we generated linear systems of increasing difficulty, corresponding to ε^-1 = 100, 200, . . . , 1000. The coefficient matrix A becomes more nonsymmetric (and less diagonally dominant) as ε decreases. If we denote by S and T the symmetric and the skew-symmetric part of A, respectively, then as ε gets smaller the norm of T remains unchanged, whereas the norm of S decreases. We point out incidentally that these quantities are invariant under symmetric permutations, so the departure from symmetry cannot be altered simply by reordering the matrix; however, we will see that from the point of view of incomplete factorization preconditioning, some orderings are less sensitive to the departure from symmetry than others.
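To make this setup concrete, the following Python/SciPy sketch assembles the centered-difference matrix and the symmetric/skew-symmetric split described above. It is only an illustration, not the codes used for the experiments (which were Fortran-77); in particular, the convective coefficients e^{xy} and e^{-xy} follow the form of (1) as reconstructed here and should be treated as an assumption.

```python
# Hedged sketch: centered-difference discretization of (1) on an m x m
# interior grid with homogeneous Dirichlet conditions, plus the S/T split.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def convection_diffusion_matrix(m=32, eps=1.0):
    h = 1.0 / (m + 1)
    x = np.arange(1, m + 1) * h
    X, Y = np.meshgrid(x, x, indexing="ij")
    p = np.exp(X * Y)    # coefficient of the d/dx term (assumed form)
    q = np.exp(-X * Y)   # coefficient of the d/dy term (assumed form)
    idx = lambda i, j: i * m + j
    rows, cols, vals = [], [], []
    for i in range(m):
        for j in range(m):
            k = idx(i, j)
            rows.append(k); cols.append(k); vals.append(4.0 * eps / h**2)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < m and 0 <= jj < m:
                    a = -eps / h**2                        # 5-point Laplacian
                    if di: a += di * p[ii, jj] / (2 * h)   # centered d(pu)/dx
                    if dj: a += dj * q[ii, jj] / (2 * h)   # centered d(qu)/dy
                    rows.append(k); cols.append(idx(ii, jj)); vals.append(a)
    return sp.csr_matrix((vals, (rows, cols)), shape=(m * m, m * m))

A = convection_diffusion_matrix(eps=1.0 / 1000)
S, T = 0.5 * (A + A.T), 0.5 * (A - A.T)
print(spla.norm(S), spla.norm(T))  # ||S||_F shrinks with eps; ||T||_F does not
```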
There are many possible ways of implementing the (reverse) Cuthill-McKee and minimum degree reorderings. We used Liu's multiple minimum degree algorithm [34], and for the Cuthill-McKee reorderings, we used an implementation which chooses a pseudoperipheral node as starting node and sorts nodes in the same level set by increasing degree [29]. Other strategies are possible as well, and different choices may lead to somewhat different results. We note here that we also experimented with one-way and nested dissection reorderings, and for discrete convection-diffusion problems the results found were comparable to those obtained with the multiple minimum degree reordering, both from the point of view of fill-in in the incomplete factors and from the point of view of convergence rates. For this reason we do not show these results in the tables.
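For readers who want to experiment, here is a minimal Cuthill-McKee sketch (a breadth-first traversal with level-set nodes sorted by increasing degree). It is an illustration only: the implementation of [29] additionally selects a pseudoperipheral starting node, which the sketch leaves as a user-supplied argument. Setting sort_by_degree=False and start=0 gives the simplified variant used later in Section 3.2 for the plasma physics matrices.

```python
# Minimal (reverse) Cuthill-McKee sketch; pseudoperipheral-node search omitted.
from collections import deque
import numpy as np
import scipy.sparse as sp

def cuthill_mckee(A, start=0, sort_by_degree=True):
    G = (abs(A) + abs(A.T)).tocsr()        # symmetrize the sparsity pattern
    n = G.shape[0]
    deg = np.diff(G.indptr)                # node degrees
    order, seen = [], np.zeros(n, dtype=bool)
    for seed in [start] + list(range(n)):  # fall-through handles disconnected graphs
        if seen[seed]:
            continue
        seen[seed] = True
        q = deque([seed])
        while q:
            u = q.popleft()
            order.append(u)
            nbrs = [v for v in G.indices[G.indptr[u]:G.indptr[u + 1]] if not seen[v]]
            if sort_by_degree:
                nbrs.sort(key=lambda v: deg[v])  # level-set nodes by increasing degree
            for v in nbrs:
                seen[v] = True
                q.append(v)
    return np.array(order)

perm = cuthill_mckee(A)       # A from the previous sketch
rcm = perm[::-1]              # reverse Cuthill-McKee
B = A[rcm][:, rcm]            # symmetrically permuted matrix
```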
We report the results of experiments with the following accelerators: Bi-CGSTAB, TFQMR, and GMRES with restart parameter 20. The preconditioners used were standard incomplete factorizations based on levels of fill (ILU(0) and ILU(1); see [35]) and Saad's dual threshold ILUT; see [41], [42]. For ILUT, we used two different sets of parameters, (10^-2, 5) and (10^-3, 10). The latter results in a very powerful but expensive preconditioner, containing up to five times the number of nonzeros in A. Right preconditioning was used in all cases.
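A rough SciPy stand-in for one such run is sketched below. SuperLU's spilu plays the role of ILUT here (its dropping rule differs in detail from Saad's ILUT, and SciPy applies the preconditioner on the left rather than on the right as in the paper), so iteration counts will not reproduce the tables; the sketch only shows the structure of an experiment.

```python
# Hedged stand-in for one experiment: reorder, factor incompletely, iterate.
import numpy as np
import scipy.sparse.linalg as spla

def one_run(A, b, perm, drop_tol=1e-2, fill_factor=5, restart=20, maxiter=500):
    Ap = A[perm][:, perm].tocsc()             # symmetric permutation of A
    bp = b[perm]
    ilu = spla.spilu(Ap, drop_tol=drop_tol, fill_factor=fill_factor)  # ILUT-like
    M = spla.LinearOperator(Ap.shape, ilu.solve)                      # M ~ Ap^{-1}
    # 'rtol' in SciPy >= 1.12; older versions call this keyword 'tol'.
    x, info = spla.gmres(Ap, bp, M=M, restart=restart,
                         maxiter=maxiter, rtol=1e-4)
    return x, info                            # info == 0: tolerance was met

n = A.shape[0]
b = np.random.rand(n)                         # random right-hand side, as above
x, info = one_run(A, b, np.arange(n))         # natural ordering
x, info = one_run(A, b, rcm)                  # reverse Cuthill-McKee
```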
In Tables 1-4 we present the number of iterations for the different orderings and the three Krylov subspace methods used in this study. In these tables, and in the ones that follow, n/o stands for natural (or original) ordering, CM for Cuthill-McKee, RC for reverse Cuthill-McKee, and MD for multiple minimum degree. The symbol † indicates that convergence was not achieved in 250 iterations for Bi-CGSTAB and TFQMR, which require two matrix-vector products and applications of the preconditioner per iteration, and in 500 iterations for GMRES(20), which requires only one matrix-vector product and preconditioner application per iteration. In each case the number in bold indicates the run which required the least amount of work. This is not always the same as the run which required the least number of iterations because,
Table 1
Number of iterations for different orderings, preconditioner ILU(0).

Table 2
Number of iterations for different orderings, preconditioner ILU(1).
in general, different reorderings result in preconditioners with a different number of nonzeros (except, of course, for ILU(0), where the number of nonzeros is always equal to the number of nonzero entries in A). For this particular class of matrices, the amount of fill-in in the incomplete factors is often highest for the natural ordering. Cuthill-McKee and reverse Cuthill-McKee result in comparable fill-in (slightly less, on average, than with the natural ordering), while minimum degree produces the least amount of fill-in. For example, in the case ε^-1 = 100, the ILUT(10^-2, 5) factors contain 12907 nonzeros with the natural ordering, 10271 nonzeros with Cuthill-McKee, 10143 nonzeros with reverse Cuthill-McKee, and 9038 nonzeros for multiple minimum degree. As ε gets smaller, these values slowly increase (more or less uniformly for all orderings). For the smallest ε we have 14695 nonzeros for the natural ordering, 15023 for Cuthill-McKee, 14259 for reverse Cuthill-McKee, and 9348 for multiple minimum degree.
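This fill-in comparison is easy to reproduce with the earlier stand-ins; nonzero counts from SuperLU's spilu will differ from the Fortran ILUT used for the tables, so treat the numbers as qualitative only.

```python
# Compare fill-in of the ILUT-like factors under different orderings (sketch).
import numpy as np
import scipy.sparse.linalg as spla

for name, p in [("n/o", np.arange(A.shape[0])), ("CM", perm), ("RC", rcm)]:
    Ap = A[p][:, p].tocsc()
    ilu = spla.spilu(Ap, drop_tol=1e-2, fill_factor=5)
    # L stores a unit diagonal, so subtract n to avoid double-counting it.
    print(name, ilu.L.nnz + ilu.U.nnz - Ap.shape[0])
```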
We now comment on the numerical results. We notice that for the moderately nonsymmetric problems (smaller values of ε^-1), the alternative permutations offer little or no advantage over the natural ordering. In particular, the Cuthill-McKee and reverse Cuthill-McKee reorderings produce nearly the same results as the natural ordering. It is known (see, e.g., [50, sections 5.5 and 5.6]) that for five-point stencils and no-fill factorizations, the Cuthill-McKee reorderings are equivalent to the natural ordering in the sense that the incomplete factors of the permuted matrix are just the permuted incomplete factors of the original matrix. Hence the ILU(0) preconditioners with the natural ordering and Cuthill-McKee reorderings are mathematically equivalent. This is true, however, only if the starting node is the same for both types of
Table 3
Number of iterations for different orderings, preconditioner ILUT(10^-2, 5).

Table 4
Number of iterations for different orderings, preconditioner ILUT(10^-3, 10).
orderings. For our implementation of (reverse) Cuthill-McKee, this is not the case in general. Hence, there are some differences in the number of iterations obtained. We note that these discrepancies are more pronounced for larger values of ε^-1, suggesting that the sensitivity of ILU(0) preconditioning to the choice of the starting node becomes stronger as the matrix becomes increasingly nonsymmetric and farther from being diagonally dominant.
For larger values of ε, minimum degree causes a serious degradation of the convergence rate, especially with ILU(0). Notice that the differences in behavior are not as pronounced with the ILUT preconditioners, which use a drop tolerance.
As the coefficient matrix becomes increasingly nonsymmetric, however, things change. While the number of iterations increases for all reorderings, the rate of increase is not the same for all reorderings, suggesting that some orderings are less sensitive than others to the degree of nonsymmetry. For ILU(0), the natural ordering and the Cuthill-McKee reorderings exhibit the worst degradation as ε decreases. The best performance is achieved with minimum degree, which is the only reordering which caused all three iterative solvers to converge on all problems. The situation is quite different with ILU(1) and the ILUT preconditioners. With ILU(1), the natural ordering performs poorly, minimum degree is only slightly better, but the Cuthill-McKee reorderings are both quite good. With ILUT(10^-2, 5) minimum degree is very bad and the Cuthill-McKee orderings are both excellent. With ILUT(10^-3, 10), which gives very good (but expensive) approximations of A, all orderings produce effective preconditioners, but the performance is particularly good with the Cuthill-McKee reorderings. Notice that reverse Cuthill-McKee is only slightly better than Cuthill-McKee. It is worth mentioning that for the natural, Cuthill-McKee, and reverse Cuthill-McKee orderings the number of nonzeros in the ILUT(10^-3, 10) factors is approximately 35% of the number of nonzeros in the complete LU factors computed with the same ordering; for multiple minimum degree, this proportion goes up to about 58%. For the more powerful preconditioners (ILU(1) and ILUT) the Cuthill-McKee reorderings give the best results for all values of ε. As far as the relative performance of the three Krylov subspace solvers is concerned, we observe that they are more or less equivalent for this particular class of problems, with GMRES(20) requiring fewer matrix-vector products and preconditioner applications than the other solvers in many cases.
This set of problems was generated using second order, centered difference approximations for both the second and first partial derivatives in (1). It is well known that for large values of h/ε, this discretization can become unstable. Alternative discretizations, such as those which use upwinding for the first order terms, do not suffer from this problem and give rise to matrices with very nice properties from the point of view of iterative solutions, such as diagonal dominance. However, this may not be true for nonuniform grids (see [30]), and, moreover, such approximations are only first order accurate and in many cases are unable to resolve fine features of the solution, such as boundary layers. In this case, as suggested in [31], a uniform coarse grid could be used to determine the region where the boundary layer is located (this corresponds to "wiggles" in an otherwise smooth solution). This would require solving linear systems such as those considered in the previous set of experiments. Subsequently, a local mesh refinement can be performed in the region containing the boundary layer. This solves the instability problem, and the approximation is still second order accurate (except at a few points on the interface between the coarse and the fine grids), but the resulting linear system, like the one corresponding to the uniform grid, can be quite challenging for iterative methods if convection is strong. Again, a simple reordering of the coefficient matrix can improve the situation dramatically. To illustrate this, we
take the following example from Elman [26]. Consider the partial differential equation

    -Δu + 2P ∂u/∂x + 2P ∂u/∂y = g    in Ω = (0,1) × (0,1),    (2)

where P > 0 governs the strength of the convection, and the right-hand side g and the boundary conditions are determined by the solution

    u(x, y) = ((e^{2P(1-x)} - 1) / (e^{2P} - 1)) ((e^{2Py} - 1) / (e^{2P} - 1)).

This function is nearly identically zero in Ω except for boundary layers of width O(1/P) near x = 0 and y = 1. A uniform coarse grid was used in the region where the solution is smooth, and a uniform fine grid was superimposed on the regions containing the boundary layers, so as to produce a stable and accurate discretization; see [26] for details.
We performed experiments with values of P considerably larger than those used in [26]. The resulting matrices are of order 5041 and 7921, with 24921 and 39249 nonzeros, respectively. The convergence criterion used was a reduction of the residual norm to less than 10^-6; the initial guess, right-hand side, and maximum number of iterations allowed were the same as for the previous set of experiments. When ILU(0) preconditioning was used, no iterative solver
Table 5
Number of iterations for different orderings and preconditioners.
Preconditioner  P  |  Bi-CGSTAB: n/o CM RC MD  |  GMRES: n/o CM RC MD  |  TFQMR: n/o CM RC MD

Table 6
Test problem information.

Matrix     N     NZ     Application                Source
watt2      -     -      Oil reservoir engineering  Harwell-Boeing
ale1590    1590  45090  Metal forming simulation   S. Barnard
utm1700b   1700  21509  Plasma physics             SPARSKIT
fidap007   1633  54487  Incompressible flow        SPARSKIT
converged within the maximum allowed number of iterations, independent of the reordering used. However, with minimum degree the three solvers appeared to be slowly converging, whereas with the other reorderings the iteration either diverged or stagnated. The results for ILU(1) and ILUT preconditioning and various orderings are reported in Table 5. We note that the natural ordering and the Cuthill-McKee reorderings produced the same or comparable amount of fill-in for all preconditioners, whereas multiple minimum degree resulted in higher fill-in with ILU(1) and considerably less fill-in with the ILUT preconditioners with respect to the other reorderings. From these results, we observe that reorderings do not have a great impact on the performance of ILU(1) for these problems. In contrast, reorderings make a difference when used with the ILUT preconditioners, with Cuthill-McKee producing the best results. Reverse Cuthill-McKee is much better than the natural ordering but is not quite as good as Cuthill-McKee. Multiple minimum degree is bad with ILUT(10^-2, 5) but it performs well with ILUT(10^-3, 10), although it is not as effective as Cuthill-McKee. Notice that for the larger problem the preconditioners fail when the natural ordering is used. For this particular example Cuthill-McKee dramatically improves the performance of ILUT preconditioners. Finally, we mention that similar results were obtained with recirculating flow problems in which the coefficients of the first order terms in the convection-diffusion equation have variable sign.
3.2. Miscellaneous problems. The results in the previous subsection are relative to an important, but nevertheless rather special, class of problems. It is not clear to what extent, if any, those observations can be applied to other problems. For this reason, we discuss additional experiments performed on a selection of nonsymmetric matrices from various sources, including the Harwell-Boeing collection [19] and Saad's SPARSKIT [40]. These matrices arise from different application areas: oil reservoir modeling, plasma physics, neutron diffusion, metal forming simulation, etc. Some of these matrices arise from finite element modeling, and they have a much more complicated structure than those of the previous subsection. Also, they tend to be more ill conditioned. Some information about the matrices is provided in Table 6, where
Table 7
Number of iterations for different orderings, preconditioner ILU(0).
Matrix  |  Bi-CGSTAB: n/o CM RC MD  |  GMRES: n/o CM RC MD  |  TFQMR: n/o CM RC MD

Table 8
Number of iterations for different orderings, preconditioner ILU(1).
Matrix  |  Bi-CGSTAB: n/o CM RC MD  |  GMRES: n/o CM RC MD  |  TFQMR: n/o CM RC MD

N is the order of the matrix and NZ is the number of nonzeros.
The degree of difficulty of these problems varies from moderate (watt2) to extreme (utm5940 and fidap007). Concerning problem ale1590, which was provided by Barnard [2], the original ordering caused the coefficient matrix to have some zero entries on the main diagonal. This may cause trouble for the construction of ILU preconditioners. Therefore, the matrix was first reordered into a form with a zero-free diagonal using a nonsymmetric permutation described in [20]. As for problem kershaw60x60, this matrix was extracted from the AUGUSTUS unstructured mesh diffusion package developed by Michael Hall at Los Alamos National Laboratory; see [32]. It should be mentioned that analogous results to those reported here for kershaw60x60 were obtained with different matrices extracted from this package. The convergence criterion used here is a residual norm reduction to less than 10^-9, due to the greater difficulty of these problems. All the remaining parameters are the same as those used in the previous subsection.
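The nonsymmetric permutation of [20] also takes entry magnitudes into account; a purely structural stand-in, which only guarantees a zero-free diagonal, can be sketched with SciPy's maximum bipartite matching. The function and its orientation below follow SciPy's documentation, not [20], and magnitudes are ignored.

```python
# Structural zero-free diagonal via bipartite matching (stand-in for [20];
# unlike [20], the magnitudes of the entries are ignored).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import maximum_bipartite_matching

def zero_free_diagonal(A):
    pattern = sp.csr_matrix(A, dtype=bool)
    col = maximum_bipartite_matching(pattern, perm_type='column')
    if (col == -1).any():
        raise ValueError("matrix is structurally singular")
    return A[:, col]    # col[i] is the column matched to row i

B = zero_free_diagonal(A)
print((B.diagonal() != 0).all())  # True if the matching did its job
```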
The ILUT parameters were (10^-2, 5) and (10^-3, 10) for the first four matrices (as in the experiments in Tables 3 and 4), whereas different parameters had to be used for the last three problems, due to their difficulty. For the matrix utm3060 the parameters used were (10^-3, 10) and (10^-5, 20); for utm5940, (10^-3, 30) and (10^-4, 40); and for fidap007, (10^-5, 50) and (10^-7, 70). In Tables 9 and 10 we refer to ILUT with these two sets of parameters as ILUT1 and ILUT2, respectively.
The minimum degree reordering always produced the least amount of fill-in in the preconditioner, while the original ordering and the Cuthill-McKee reorderings usually gave comparable fill-in. A notable exception is fidap007: for this matrix, the Cuthill-McKee reordering resulted in considerably more fill-in in the incomplete factors than the original ordering or reverse Cuthill-McKee. In Tables 9 and 10 the symbol † indicates that the maximum storage allowed for the preconditioner has been exceeded before the preconditioner construction was completed. This corresponds to approximately 300,000 nonzeros in the incomplete factors. In the tables, n/o refers to
Table 9
Number of iterations for different orderings, preconditioner ILUT1.
Matrix  |  Bi-CGSTAB: n/o CM RC MD  |  GMRES: n/o CM RC MD  |  TFQMR: n/o CM RC MD

Table 10
Number of iterations for different orderings, preconditioner ILUT2.
Matrix  |  Bi-CGSTAB: n/o CM RC MD  |  GMRES: n/o CM RC MD  |  TFQMR: n/o CM RC MD
the original ordering of the matrices. With GMRES, the restart parameter used was 20 in all cases except for kershaw60x60, for which a smaller restart value was used due to the amount of storage required.
It should be mentioned that for the matrices from plasma physics, the standard implementation of the Cuthill-McKee reorderings caused a breakdown (zero pivot) of the ILU(1) and ILUT factorizations. Hence, it is possible for these reorderings to produce a poor pivot sequence. This difficulty was circumvented by applying to these matrices a slightly different version of (reverse) Cuthill-McKee, in which the first node is chosen as the initial node (that is, there is no search of a pseudoperipheral node), and no attempt is made to order nodes within a level set by increasing degree. Instead, the order of the nodes is determined by the order in which they are traversed; see [18]. With this implementation, no zero or very small pivots were encountered.
The results with these matrices are reported in Tables 7-10. They are somewhat less clear-cut than those for the convection-diffusion problems. Nevertheless, it can be seen that reorderings helped in a large majority of cases. While Cuthill-McKee and minimum degree did not perform well with ILU(0) and ILU(1), reverse Cuthill-McKee did, with few exceptions. Reverse Cuthill-McKee was also useful with the ILUT preconditioners. For the more difficult problems, which could be solved only allowing high amounts of fill-in in the factors, minimum degree proved useful.
We add that another version of ILUT, the ILUTP preconditioner (see [42]), was also tried. This is ILUT combined with a column pivoting strategy, and it is known to be sometimes better than ILUT, especially for problems leading to small pivots. For this set of experiments, however, ILUTP was found to be no better than ILUT.
3.3. Further analysis of the results. The results of the experiments presented
show that a simple reordering of the coe#cient matrix can bring about a dramatic
improvement in the quality of incomplete factorization preconditioners. In particular,
we saw problems where all preconditioned iterative solvers failed with the natural
Table 11
Norms of A - LU (N1) and I - A(LU)^-1 (N2) for different orderings, preconditioner ILU(0).

          n/o                  CM                   RC                   MD
 ε^-1   N1        N2         N1        N2         N1        N2         N1        N2
  200  2.70e-01  1.09e+04   2.71e-01  9.74e+03   2.70e-01  1.10e+04   1.36e+00  4.89e+01
  300  3.23e-01  8.89e+04   3.26e-01  8.15e+04   3.23e-01  9.08e+04   1.88e+00  7.42e+01
  400  3.53e-01  2.68e+05   3.57e-01  2.73e+05   3.53e-01  2.74e+05   2.41e+00  9.97e+01
  500  3.72e-01  5.28e+05   3.78e-01  1.01e+06   3.72e-01  5.40e+05   2.94e+00  1.25e+02
  700  3.97e-01  1.34e+06   4.04e-01  8.60e+07   3.97e-01  1.30e+06   4.01e+00  1.75e+02
  800  4.06e-01  3.26e+06   4.13e-01  4.86e+08   4.06e-01  2.79e+06   4.53e+00  2.00e+02
  900  4.14e-01  9.93e+06   4.20e-01  1.98e+09   4.14e-01  8.25e+06   5.05e+00  2.25e+02
 1000  4.22e-01  2.69e+07   4.26e-01  6.25e+09   4.22e-01  2.26e+07   5.57e+00  2.50e+02
Table 12
Norms of A - LU (N1) and I - A(LU)^-1 (N2) for different orderings, preconditioner ILU(1).

          n/o                  CM                   RC                   MD
 ε^-1   N1        N2         N1        N2         N1        N2         N1        N2
  100  4.60e-02  1.70e+00   4.90e-02  2.13e+00   4.59e-02  2.09e+00   4.21e-01  1.30e+01
  200  1.31e-01  6.56e+00   8.57e-02  4.68e+00   7.94e-02  5.21e+00   5.49e-01  1.89e+01
  300  1.79e-01  1.08e+01   1.09e-01  6.47e+00   9.56e-02  7.51e+00   6.95e-01  2.53e+01
  400  2.09e-01  1.44e+01   1.30e-01  8.12e+00   1.06e-01  9.42e+00   8.49e-01  3.23e+01
  500  2.29e-01  1.77e+01   1.50e-01  9.77e+00   1.12e-01  1.11e+01   1.01e+00  3.96e+01
  700  2.55e-01  2.36e+01   1.86e-01  1.32e+01   1.22e-01  1.41e+01   1.33e+00  5.46e+01
  800  2.64e-01  2.63e+01   2.02e-01  1.49e+01   1.26e-01  1.56e+01   1.49e+00  6.21e+01
  900  2.71e-01  2.89e+01   2.17e-01  1.67e+01   1.29e-01  1.70e+01   1.66e+00  6.96e+01
 1000  2.77e-01  3.14e+01   2.30e-01  1.85e+01   1.33e-01  1.83e+01   1.82e+00  7.71e+01
ordering and all converged rapidly after a symmetric permutation of the coefficient matrix. In this subsection, we investigate the reasons behind these observations.
An incomplete factorization preconditioner can fail or behave poorly for several reasons. A common cause of failure is instability of the incomplete factorization, which is caused by numerically zero pivots or exceedingly small ones. The result of this type of instability is that the incomplete factorization is very inaccurate, that is, the norm of the residual matrix R = A - LU is large (here L and U denote the computed incomplete factors). This is a very real possibility for matrices that do not have some form of diagonal dominance and for highly unstructured problems. Of course, an inaccurate factorization can also occur in the absence of small pivots, when many large fill-ins are dropped from the incomplete factors. Another kind of instability, which can take place whether or not small pivots occur, is severe ill-conditioning of the triangular factors, which reflects the instability of the long recurrences involved in the forward and backward solves when the preconditioning is applied [25], [47]. In this situation, ||R||_F need not be very large, but ||I - A(LU)^-1||_F can be. Again, this is a common situation when the coefficient matrix is far from being diagonally dominant. Of course, both types of instabilities can simultaneously occur for a given problem; see [11] for an extensive experimental study of the causes of failure of incomplete factorizations.
In order to gain some insight about the effect of reorderings, we computed the Frobenius norms of R and R(LU)^-1 for each test matrix, reordering, and preconditioner. Those for the matrices arising from the discretization of problem (1) are reported in Tables 11-14, where N1 = ||A - LU||_F and N2 = ||I - A(LU)^-1||_F. Loosely speaking, N1 measures the accuracy of the incomplete factorization, whereas
Table 13
Norms of A - LU (N1) and I - A(LU)^-1 (N2) for different orderings, preconditioner ILUT(10^-2, 5).

          n/o                  CM                   RC                   MD
 ε^-1   N1        N2         N1        N2         N1        N2         N1        N2
  100  9.20e-03  5.65e-01   4.96e-03  2.06e-01   2.57e-03  1.07e-01   1.39e-01  4.64e+00
  200  3.09e-02  2.64e+00   5.22e-03  2.82e-01   4.71e-03  2.48e-01   3.13e-01  1.01e+01
  300  5.64e-02  6.21e+00   1.13e-02  7.42e-01   8.58e-03  4.93e-01   5.08e-01  1.83e+01
  500  1.00e-01  1.21e+01   2.81e-02  1.98e+00   1.68e-02  1.21e+00   9.98e-01  1.62e+02
  700  1.43e-01  1.84e+01   5.20e-02  3.96e+00   2.43e-02  2.07e+00   2.03e+00  1.69e+03
  800  1.78e-01  2.79e+01   6.21e-02  4.89e+00   2.83e-02  2.54e+00   7.39e+01  5.81e+06
  900  6.57e+20  1.69e+33   9.88e-02  8.86e+00   3.22e-02  3.08e+00   9.37e+00  1.12e+05
 1000  6.47e+28  3.98e+43   9.82e-02  8.33e+00   3.54e-02  3.52e+00   2.28e+01  3.71e+06
Table 14
Norms of A - LU (N1) and I - A(LU)^-1 (N2) for different orderings, preconditioner ILUT(10^-3, 10).

          n/o                  CM                   RC                   MD
 ε^-1   N1        N2         N1        N2         N1        N2         N1        N2
  100  1.48e-03  8.56e-02   5.09e-04  2.05e-02   3.77e-04  1.63e-02   4.76e-02  1.62e+00
  200  7.18e-03  4.99e-01   6.05e-04  2.93e-02   4.97e-04  2.68e-02   1.13e-01  3.85e+00
  300  1.82e-02  1.36e+00   1.51e-03  9.48e-02   1.11e-03  6.76e-02   1.79e-01  6.24e+00
  400  3.14e-02  2.49e+00   3.41e-03  2.36e-01   2.06e-03  1.39e-01   2.55e-01  9.00e+00
  500  4.61e-02  3.78e+00   6.48e-03  4.78e-01   3.08e-03  2.28e-01   3.39e-01  1.21e+01
  700  8.37e-02  8.56e+00   1.45e-02  1.15e+00   5.87e-03  5.01e-01   5.48e-01  1.92e+01
  800  9.85e-02  1.08e+01   1.90e-02  1.55e+00   7.88e-03  6.98e-01   6.45e-01  2.29e+01
  900  1.19e-01  1.43e+01   2.45e-02  2.08e+00   1.00e-02  9.34e-01   7.42e-01  2.68e+01
 1000  1.48e-01  1.98e+01   2.96e-02  2.54e+00   1.18e-02  1.13e+00   8.39e-01  3.08e+01
N2 measures its stability (in the sense of [25]). We also monitored the size of the
pivots in the course of the incomplete factorizations, and we did not find any very
small pivots. Hence, failure or poor behavior of an incomplete factorization could be
due to significantly large fill-ins having been dropped, to unstable triangular solves,
or both.
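The two indicators are straightforward to compute for small problems. Below is a hedged sketch based on the spilu stand-in used earlier; SuperLU permutes rows and columns internally, so the residual is formed in the permuted space, with Pr and Pc built exactly as in SciPy's splu documentation. Pass the same (already reordered) matrix that was handed to spilu.

```python
# Accuracy (N1) and stability (N2) indicators for an incomplete factorization.
import numpy as np
import scipy.sparse as sp

def indicators(A, ilu):
    n = A.shape[0]
    # SuperLU computes Pr * A * Pc ~= L * U; build Pr, Pc as in the SciPy docs.
    Pr = sp.csc_matrix((np.ones(n), (ilu.perm_r, np.arange(n))), shape=(n, n))
    Pc = sp.csc_matrix((np.ones(n), (np.arange(n), ilu.perm_c)), shape=(n, n))
    R = (Pr @ A @ Pc - ilu.L @ ilu.U).toarray()
    N1 = np.linalg.norm(R, 'fro')            # accuracy of the factorization
    I = np.eye(n)
    # Columns of I - A (LU)^{-1}: ilu.solve applies the preconditioner.
    AM = np.column_stack([A @ ilu.solve(I[:, j]) for j in range(n)])
    N2 = np.linalg.norm(I - AM, 'fro')       # stability of the triangular solves
    return N1, N2
```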
The results in Table 11 give a clear explanation of the convergence behavior of iterative methods with ILU(0) preconditioning reported in Table 1. For the natural, Cuthill-McKee, and reverse Cuthill-McKee orderings the degradation and eventual failure in the convergence as ε^-1 increases is not due to inaccuracy of the incomplete factorizations, but to instability of the triangular solves. As ε^-1 increases, the condition number of the no-fill incomplete factors grows rapidly, the preconditioned matrix becomes more and more ill conditioned, and the number of iterations increases. Furthermore, for large enough ε^-1, Elman [25] observed that the symmetric part of A(LU)^-1 becomes indefinite, and this in turn can cause failure of the Krylov subspace accelerators. Inspection of the last column in Table 11 reveals that minimum degree has the effect of stabilizing the ILU(0) triangular factors. The preconditioner remains well conditioned even for large values of ε^-1, and all three Krylov subspace methods converge. The fact that the number of iterations still increases with increasing ε^-1 appears to be due to the fact that the ILU(0) factorization becomes less accurate, as measured by N1. We observe that for the moderately nonsymmetric problems the number of iterations is almost directly related to N1; for the strongly nonsymmetric ones, the N1 norm alone is not a good indicator of the effectiveness of the preconditioner, and N2 becomes a more reliable indicator. Notice that minimum degree always results in less accurate ILU(0) factors (larger N1) than the other orderings.
The results in Table 2 show that ILU(1) preconditioning is more robust and effective than ILU(0) for this class of problems, and the Frobenius norms presented in Table 12 show that this is due to the fact that ILU(1) does not suffer from the kind of instability that plagues ILU(0). Because of this, N1 = ||A - LU||_F is now a fairly accurate indicator of the performance of the preconditioner for all values of ε^-1. The values reported for N1 indicate that the Cuthill-McKee reorderings outperform the natural ordering because they result in more accurate ILU(1) factors and that minimum degree is inferior to the other orderings because the incomplete factorization is less accurate.
We mention that some ad hoc stabilization techniques to be used with ILU(0) for convection-diffusion problems have been proposed in [26] and [47]. Our results indicate that using ILU(1) with a level set reordering of the matrix offers a simple solution to the instability problem.
Similar results apply to the ILUT preconditioners (Tables 13 and 14). However, there are two phenomena that occur with ILUT(10^-2, 5) and not with ILUT(10^-3, 10) that deserve to be mentioned. For ε^-1 ≥ 900, the ILUT(10^-2, 5) preconditioner with natural ordering fails rather dramatically. An inspection of the corresponding entries of Table 13 shows that this is due to the simultaneous occurrence of inaccuracy in the factorization (large ||R||_F: too many large fill-ins have been dropped) and instability in the triangular factors (as revealed by a much larger value of ||R(LU)^-1||_F). This is an illustration of the fact that for strongly nonsymmetric problems, increasing fill-in in the incomplete factors does not necessarily result in an improved preconditioner, unless the factorization approaches an exact one; see also [11]. We mention that the number of nonzeros in the factors is considerably higher for ILUT(10^-2, 5) than for ILU(1). The other interesting phenomenon is that with ILUT(10^-2, 5), the incomplete factors obtained with minimum degree are not only less accurate than those for the other reorderings but also unstable when ε^-1 becomes large. This is the opposite of what happens for ILU(0) and confirms that the relative performance of a given reordering is different for different preconditioning strategies, as already observed in [21]. When ILUT(10^-3, 10) is used, the accuracy of the incomplete factorization approaches that of a direct solve and none of the orderings suffers from instability (note that the original coefficient matrix A is fairly well conditioned for all values of ε^-1). The norm of A - LU again becomes a very reliable indicator of the performance of the preconditioners corresponding to the different permutations. The Cuthill-McKee reorderings give better results than the other orderings because they make the incomplete factorization more accurate, as is indicated by the values for N1 reported in Table 14. For fixed values of the ILUT parameters, the amount of fill-in in the incomplete factors is only slightly less with the Cuthill-McKee reorderings than with the natural ordering, whereas the number of nonzeros in the complete LU factors is much less. Hence, when compared to the natural ordering, the Cuthill-McKee reorderings allow one to compute a more accurate incomplete factorization for roughly the same arithmetic and storage costs for this class of problems.
The Frobenius norms ||R||_F and ||R(LU)^-1||_F were also computed for the two matrices arising from problem (2). For ILU(0), the failures with the natural ordering and the Cuthill-McKee reorderings were due to the concurrent effect of inaccuracy and instability of the triangular solves (again, no small pivots arise for these problems). The instability was especially severe for the larger of the two problems; for instance, with the Cuthill-McKee ordering we found extremely large values of N2. On the other hand, no instabilities occurred with minimum degree, and the failures with this reordering were due to inaccurate factorizations (large N1). With minimum degree, the computed residual was reduced to about 10^-5 by all three iterative methods preconditioned with ILU(0) when the maximum number of iterations was reached. With the natural ordering and Cuthill-McKee reorderings, on the other hand, there was divergence or stagnation at much higher values of the residual. Thus, it appears that instability in the preconditioner has a more devastating effect than low accuracy of the factorization.
With ILU(1), no instabilities occurred. The failures with the natural ordering and with multiple minimum degree were due to low accuracy of the factorization (large N1). With the ILUT preconditioners, the failures with the natural ordering are due to the simultaneous occurrence of inaccuracy in the factorization and unstable triangular solves, very much like the ε^-1 ≥ 900 cases in Table 13. All the other orderings produced stable incomplete factorizations. The failures with the minimum degree ordering and ILUT(10^-2, 5) preconditioning were due to inaccuracy of the factorization.
Again, the best results are obtained with the Cuthill-McKee reorderings and ILUT preconditioning, which yield accurate and stable factorizations. We note that for these problems, Cuthill-McKee is somewhat better than reverse Cuthill-McKee.
The Frobenius norms of R and R(LU)^-1 were also computed for the problems of section 3.2. In most cases, when considered together they were found to give a qualitative explanation of the observed convergence behavior. Whenever a preconditioner failed, it was usually due to inaccuracy rather than instability, with the exception of fidap007 with ILUT preconditioning and Cuthill-McKee reordering, for which the factorization was both inaccurate and severely unstable.
4. Conclusions. We have provided evidence that reorderings originally designed for use with sparse direct solvers can significantly improve the performance of iterative methods preconditioned with incomplete LU factorizations. While this observation is not entirely new, an examination of the literature reveals that it is not widely known.
The benefit of reordering the coefficient matrix depends in part on how far the matrix is from being symmetric and diagonally dominant, as well as on the type of incomplete factorization preconditioner used. In our experiments with regular grid problems, we found that when the coefficient matrix is nearly symmetric, very little is gained from reordering it. On the other hand, if the matrix is strongly nonsymmetric, large reductions of the number of iterations can be obtained by (symmetrically) reordering the matrix.
A somewhat surprising result of our experiments is that the "natural" or "original" ordering of the test matrices used in this study is almost never the best from the point of view of incomplete factorization preconditioning and is very often the worst. More specifically, the original ordering was found to give the best results in only 13 cases out of the 228 comparisons reported in this paper. Reverse Cuthill-McKee gave the best results in 132 cases, Cuthill-McKee in 57 cases, and multiple minimum degree in the remaining 26 cases. The original ordering was found to be worse than the other orderings also from the point of view of robustness: there were 61 failures with the original ordering, 54 with multiple minimum degree, 48 with Cuthill-McKee, and only 37 with reverse Cuthill-McKee. There were 26 cases where the original ordering led to a failure and reverse Cuthill-McKee succeeded, but only two cases where reverse Cuthill-McKee failed and the original ordering succeeded (matrix utm5940 with ILU(1) preconditioning; see Table 10).
It should be stressed that in most cases for which reverse Cuthill-McKee was not best, it still gave good results (that is, it was not found to be much worse than the best ordering). Hence, overall, reverse Cuthill-McKee appears to be superior to the other orderings in the context of incomplete factorization preconditioning. As revealed by a direct inspection of the residual matrices A - LU, in most cases this was simply due to the fact that this reordering produced more accurate (as measured by ||A - LU||_F) incomplete factorizations than those obtained with the natural ordering, with a comparable amount of fill-in in the factors. In some cases the improvement was due to the fact that the reordering resulted in a stabilization of the incomplete triangular factors (as measured by ||I - A(LU)^-1||_F). However, none of the orderings considered in this paper were found to be completely immune from potential instabilities in the corresponding triangular solves. In most cases where instabilities occurred, the problem disappeared by allowing more fill-in in the incomplete factors, but not always. Indeed, there were a few cases where increasing the amount of fill-in made things worse in the sense that the instability of the factors increased; see also [11].
In general, Cuthill-McKee cannot be recommended. While it performed well on many problems, its behavior is rather erratic. Reverse Cuthill-McKee should be preferred.
The minimum degree reordering was found to be inferior to the level set reorderings in general, but often better than the original ordering. It is well known that for the purpose of complete sparse factorization, minimum degree is usually much more effective at preserving sparsity than level set reorderings. Thus, this reordering could be useful when the LU factorization with the original ordering suffers from extremely high fill-in, and a sparse preconditioner is sought. For the same choice of the ILUT parameters, this reordering always resulted in incomplete factors which were considerably more sparse than those obtained with the other reorderings. While it is true that minimum degree is more expensive to compute than the level set reorderings, this cost is usually of the order of only a few iterations.
Of course, reverse Cuthill-McKee is not trouble-free. The quality of the corresponding preconditioner will be affected, in general, by the choice of the initial node and by the ordering of nodes within level sets. In particular, it is not clear how different tie-breaking strategies will affect the incomplete factorization. It is also possible that some orderings within level sets will produce a poor pivot sequence and a breakdown of the incomplete factorization process. Moreover, it is easy to contrive examples where reverse Cuthill-McKee will be a poor ordering, for example, by constructing a convection-diffusion problem for which the reverse Cuthill-McKee ordering of the grid points goes against the flow direction.
Nevertheless, based on the results of our experiments, we conclude that much can be gained from reordering strongly nonsymmetric matrices before performing an incomplete factorization and not much should be lost, particularly when the reverse Cuthill-McKee reordering is used. For convection-diffusion problems on rectangular grids, ILU(1) or ILUT preconditioning combined with reverse Cuthill-McKee is recommended, whereas the lexicographic ordering behaves rather poorly and should be avoided. For matrices that do not have a "natural" ordering, such as those arising from unstructured meshes, we recommend reverse Cuthill-McKee as the original ordering. A similar conclusion was reached in [21] for symmetric matrices arising from the finite element method.
Concerning possible developments of this study, an interesting possibility would be to consider a red-black approach, where the reduced system is reordered with reverse Cuthill-McKee. Some promising results with this approach were reported in [5]. It would also be interesting to study the effects of combining nonsymmetric permutations designed for moving large entries to the diagonal (see [20]) with the symmetric permutations considered in this paper.
Finally, there are some open questions which warrant further investigation. As already mentioned, some understanding of the effect of the choice of the initial node and of the ordering within level sets on the performance of (reverse) Cuthill-McKee would be welcome. Also, with reference to the linear systems arising from the discretization of model problem (1) or similar ones, it would be desirable to understand why the ILU(0) factors computed with the minimum degree ordering do not suffer from the instability that occurs when the natural ordering (or the equivalent level set orderings) are used. Likewise, it would be instructive to understand why the ILU(1) factors were found to be stable regardless of the ordering used to compute them; see Table 2. At present, we are unable to see how Elman's analysis [25] for the ILU(0) preconditioner with the natural and equivalent orderings could be applied to more complicated preconditioners and to other orderings.
Acknowledgments. We have benefited from the advice and assistance of several colleagues during the writing of this paper. Howard Elman read and commented on drafts of the paper during its early stages and offered many good suggestions. Discussions with Wayne Joubert and Mike DeLong were also helpful. Hwajeong Choi and Jacko Koster provided some of the codes which were used for the numerical experiments. Special thanks go to Miroslav Tůma, who not only provided us with some of his software but also was very generous in sharing his insight on sparse matrix reorderings at various stages of this project. We are indebted to Gérard Meurant, whose questions and detailed reading helped us turn report [7] into this paper. Part of this research took place while the first author was with CERFACS and the second author was a visitor there. CERFACS's support and warm hospitality are greatly appreciated.
References
Vectorizable preconditioners for elliptic difference equations
A portable MPI implementation of the SPAI preconditioner in ISIS
Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods
Threshold ordering for preconditioning nonsymmetric problems
Orderings for Incomplete Factorization Preconditionings of Nonsymmetric Problems
Parallel elliptic preconditioners: Fourier analysis and performance on the connection machine
On preconditioned Krylov subspace methods for discrete convection-diffusion problems
Application of threshold partitioning of sparse matrices to Markov chains
Experimental study of ILU preconditioners for indefinite matrices
Weighted graph based ordering techniques for preconditioned conjugate gradient methods
SOR as a preconditioner
On parallelism and convergence of incomplete LU factorizations
A Graph-Theory Approach for Analyzing the Effects of Ordering on ILU Preconditioning
Direct Methods for Sparse Matrices
Sparse matrix test problems
The design and use of algorithms for permuting large entries to the diagonal of sparse matrices
Parallelizable block diagonal preconditioners for the compressible Navier-Stokes equations
Analysis of parallel incomplete point factorizations
A stability analysis of incomplete LU factorizations
Relaxed and stabilized incomplete factorizations for non-self-adjoint linear systems
A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems
Computer Solution of Large Sparse Positive Definite Systems
Diagonal dominance and positive definiteness of upwind approximations for advection diffusion problems
Don't suppress the wiggles
A Second-Order Cell-Centered Diffusion Difference Scheme for Unstructured Hexahedral Lagrangian Meshes
Conjugate gradient methods and ILU preconditioning of non-symmetric matrix systems with arbitrary sparsity patterns
Modification of the minimum degree algorithm by multiple elimination
An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix
Ordering Methods for Approximate Factorization Preconditioning
Orderings for conjugate gradient preconditionings
Multicolor ICCG methods for vector computers
Preconditioning techniques for nonsymmetric and indefinite linear systems
SPARSKIT: A Basic Tool Kit for Sparse Matrix Computations
ILUT: A dual threshold incomplete LU factorization
Iterative Methods for Sparse Linear Systems
GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems
Incomplete LU preconditioners for conjugate-gradient-type iterative methods
Orderings for parallel conjugate gradient preconditioners
Iterative solution methods for certain sparse linear systems with a non-symmetric matrix arising from PDE-problems
Iterative Solution of Large Linear Systems
326447 | The impact on retrieval effectiveness of skewed frequency distributions. | We present an analysis of word senses that provides a fresh insight into the impact of word ambiguity on retrieval effectiveness with potential broader implications for other processes of information retrieval. Using a methodology of forming artifically ambiguous words, known as pseudowords, and through reference to other researchers' work, the analysis illustrates that the distribution of the frequency of occurrance of the senses of a word plays a strong role in ambiguity's impact of effectiveness. Further investigation shows that this analysis may also be applicable to other processes of retrieval, such as Cross Language Information Retrieval, query expansion, retrieval of OCR'ed texts, and stemming. The analysis appears to provide a means of explaining, at least in part, reasons for the processes' impact (or lack of it) on effectiveness. | Introduction
A great many words in natural languages are ambiguous. The resolution
of ambiguity is a task that has concerned a great many
researchers in the field of Computational Linguistics. Over the
years, many programs have been built that, given a word appearing
in a certain context (with a definition of the word's possible
senses), attempt to identify the word sense in the context. Such
systems are known as word sense disambiguators. Early disam-
biguators were based on hand-built rule sets and only worked over
a small number of words and senses [Weiss 73], [Small 82]. This
changed, however, with the availability of dictionaries and thesauri
online. Using these reference works as a source of word sense definition
and information, many disambiguation systems were built
with the hope that they could be scaled up to work over a much
wider vocabulary[Lesk 86], [Wilks 90], [Sussna 93].
Such a possibility was of interest to researchers in the field of text
based Information Retrieval (IR) systems where it was thought that
word sense ambiguity was a cause of poor performance in the sys-
2tems. It was believed that if the words in a document collection
were correctly disambiguated, IR effectiveness would improve.
However, work where such disambiguation was performed failed
to show any improvement [Voorhees 93], [Wallis 93]. From these
results it becomes clear that research was needed to investigate the
relationship between sense ambiguity, disambiguation, and IR.
Investigations with similar aims but using different methods were
conducted by Krovetz & Croft [Krovetz 92] and by one of the
authors [Sanderson 94].
1.1 Krovetz & Croft
As part of a wide-ranging paper on disambiguation and IR,
Krovetz and Croft conducted a large-scale study on the relationship
of relevance to sense matches and mismatches between query
words and document words. Using the CACM and TIME test collections
([Salton 83], [Sparck Jones 76]), Krovetz & Croft performed
a retrieval for each of the queries in these collections. For
each retrieval, they examined the match between the intended
sense of each query word and that word's sense as it occurred in the
ten highest ranked documents. They counted the number of sense
mismatches between query and document words and examined this
figure in relation to the relevance or non-relevance of the docu-
ment. They found that when the document was not relevant to the
query, a sense mismatch was more likely to occur. From this anal-
ysis, it could be inferred that the level of sense match in the top
ranked relevant documents was high. Krovetz and Croft speculated
that this was due to the so-called query word collocation
effect, which can be explained through an example.
If one were to enter the single word query 'bank' into an IR sys-
tem, it is just as likely to retrieve economic documents as it is geographic
ones. If, however, one entered a query containing many
words, for example 'bank economic financial monetary fiscal'
for top ranked documents, it is likely that many of the query
words will collocate in those documents. It would be unlikely that
an occurrence of 'bank' in such a document would refer to the margin
of a river. Therefore, collocation can cause disambiguation to
be unnecessary. Krovetz and Croft also described a second cause
of the high degree of sense match in the top ranked documents,
which is explained in Section 4.1.
Krovetz and Croft's study did not predict any significant improvements
in retrieval effectiveness from the resolution of ambiguity in
document collections. Instead, they described a number of situations
where disambiguation may prove useful: where the effects of
collocation were less prevalent such as high recall searches; and
where query words were used in a less frequent sense.
1.2 Sanderson
Sanderson measured the effectiveness of an IR system retrieving
from the Reuters 22,173 test collection [Lewis 91] and then measured
it again after additional ambiguity was introduced into the
collection using artificial ambiguous words known as pseudo-words
. The drop in effectiveness resulting from their introduction
was a measure of the impact of that ambiguity. The results of the
experiments showed that the introduced ambiguity did not reduce
effectiveness as much as might have been expected. The published
analysis of the results [Sanderson 94] concentrated on the length
of queries showing that the effectiveness of retrievals based on a
query of one or two words was greatly affected by the introduction
2. Simulated words that have multiple senses. The manner of
their creation is explained in Section 2.
of ambiguity but much less so for longer queries. A confirmation
of the co-location effect shown by Krovetz and Croft.
Although it was not stated in the paper, query term co-location
within a document is also dependent on its length. If documents
being retrieved were particularly short (e.g. just document titles)
then co-location of query terms, regardless of query size, is likely
to be low. Therefore, in a situation of retrieving from short docu-
ments, one would expect to see the same impact of ambiguity on
retrieval effectiveness as was observed with short queries.
Sanderson also used pseudo-words to study the impact of automatic
disambiguation on retrieval effectiveness, concentrating particularly
on the impact of disambiguation errors. The number of
mistakes made by disambiguators appears to vary depending on the
subtlety of word sense to be discriminated between. A reasonable
sense selection error for a disambiguator performing the discrimination
task Sanderson assumed is around 20%-30% (taken from
[Ng 96] 3 ). He found that this level of error could cause effectiveness
to be as bad or even lower than when ambiguity was left unre-
solved. Disambiguation error was thought to be a likely cause of
the failures reported by Voorhees and by Wallis. To this end, Sanderson
concluded that a disambiguator was only of use in a retrieval
context if it disambiguated at a very high level of accuracy or if
queries (or documents) were very short.
3. Ng & Lee stated that the error rate of their disambiguator was
30%, however, it was trained and tested on manually
disambiguated corpora which themselves contained errors. This
will have impacted on the disambiguator accuracy, therefore, a
more generous error rate is suggested in this paper.
With reference to Sanderson's work, Sch-tze & Pedersen [Sch-tze
95] questioned the methodology of using pseudo-words to study
ambiguity after examining the pseudo-senses that make up a
pseudo-word. Looking at the distribution of these senses' frequency
of occurrence within a collection, they found that one
pseudo-sense of a pseudo-word typically accounts for the majority
of occurrences of that pseudo-word. They suggested, though did
not test, that this skewed frequency distribution of the pseudo-
senses within a pseudo-word was an additional cause of the low
impact of ambiguity on retrieval effectiveness. They further questioned
whether this type of distribution correctly reflected that
found in the senses of real ambiguous words. It is the two issues
they raised that are addressed in this paper.
First a description of pseudo-words is presented followed by examples
of their use in retrieval experiments. An analysis of the
skewed distribution of the frequencies of occurrence of a pseudo-
word's pseudo-senses is described next, along with an explanation
of how this type of distribution impacts on retrieval effectiveness.
Measurements are presented confirming that the senses of actual
ambiguous words have the same skewed distribution as pseudo-
senses and, therefore, pseudo-words are concluded to model well
this aspect of ambiguous words. Experimental results showing the
impact of pseudo-words on retrieval effectiveness are thus used to
describe the impact of actual ambiguous words on effectiveness.
Before concluding, the paper briefly examines additional applications
of the analysis presented which indicate that other processes
of retrieval may be better understood with such an analysis.
methodology using pseudo-words
The aim of Sanderson's experiments was to gain a greater understanding
of the relationship between word sense ambiguity, disambiguation
accuracy, and IR effectiveness. In order to achieve this,
it was necessary to be able to measure the amount of ambiguity in a
test collection. This was achieved by using a technique of adding
into the collection artificial ambiguous words called pseudo-words
4 .
A pseudo-word is formed by concatenating a number of words
chosen randomly 5 from a text collection. These words become the
pseudo-senses of a newly formed pseudo-word and all of their
occurrences within that collection are replaced by it: for example,
randomly choosing the words 'banana', `kalashnikov' & 'anec-
dote', and replacing all their occurrences in a collection by the
pseudo-word 'banana/kalashnikov/anecdote'. Note that pseudo-words
are mutually exclusive: a word can only be a member of one
pseudo-word.
By adding pseudo-words into a test collection in this manner, a
measurable amount of additional ambiguity is introduced into that
4. Pseudo-words were created by two groups in the same year
working independently of each other: Gale et al. and Sch-tze.
Gale, Church and Yarowksy introduced and tested a disambiguator
using pseudo-words in a 1992 paper [Gale 92c]. (In the following
year, Yarowsky [Yarowsky 93] incorrectly cited [Gale 92a] as
being the original pseudo-word paper.) At the same time, Sch-tze
introduced a means of testing a disambiguator using manually
created ambiguous words [Sch-tze 92], though he did not call them
pseudo-words. Note, that inspired by Sch-tze, Grefenstette
introduced the notion of artificial synonyms [Grefenstette 94].
collection and its impact on retrieval effectiveness can be deter-
mined. The size of pseudo-words can be varied by altering their
number of pseudo-senses. A pseudo-word with n senses is referred
to here as a size n pseudo-word.
2.1 What is meant by 'ambiguity'?
One aspect of ambiguity that was not addressed by Sanderson in
his paper, was the type of ambiguity simulated by pseudo-words.
This issue is described by Kilgarriff [Kilgarriff 97] who contended
that the distinction between senses of a word could only be defined
once the purpose for which they were intended was defined.
Assuming that a dictionary provides a definitive and objective distinction
between word senses is, according to Kilgarriff, unrealistic
(Section 5 discusses this issue further in the context of IR). In
Sanderson's work, pseudo-words were intended to mimic the
senses used in the work of Voorhees who used the WordNet thesaurus
[WordNet], [Miller 95] for her sense definitions. Indeed, it will
be shown later that an important quality of senses in this reference
work, are simulated well by pseudo-words. Unless otherwise
stated, references to senses and ambiguity in this paper should be
taken as meaning senses as defined in WordNet. It is believed,
however, that pseudo-words are a good simulation of senses
defined in other reference works such as dictionaries and some evidence
is presented to support this contention.
5. A pseudo random process was used based on a non-linear
additive feedback random number generator: the random and
srandom functions found in the math.h library of the C
programming language.
2.2 The experiments
To illustrate the impact of the introduction of pseudo-words into a
document collection, experiments on three conventional test col-
lections, CACM, Cranfield 1400 [Sparck Jones 76], and TREC-B,
are now presented. In these experiments, size five pseudo-words
were introduced into each collection and the effectiveness of an IR
system retrieving from these additionally ambiguous collections
was measured. All words in the collections and their respective
queries were transformed into pseudo-words 6 .
The CACM collection was composed of 3,204 documents and the
Cranfield 1400 collection contained 1,400. The TREC-B collection
used was that defined in the TREC-5 conference [Harman 96],
it contained ~70,000 documents; the queries used (known as topics
in TREC) were numbers 1 to 300.
The retrieval system used was a conventional ranking system using
a form of tf*idf term weighting scheme (1) which is an amalgam of
Harman's normalised within document frequency weight [Harman
92] and a conventional inverse document frequency measure. Stop
words were removed from the collection and the Porter stemmer
6. It was possible that up to four words in each collection were
left out of the transformation process due to the requirement that
each pseudo-word had five senses.
[Porter 80] was applied to the collection text before pseudo-words
were generated. A pessimistic interpolation technique (described
in the Interpolation section of Chapter 7 of Van Rijsbergen's book
[Van Rijsbergen 79]) was used to produce a set of precision values
measured at ten standard recall levels.
As can be seen from the results in Figures 1, 2, & 3, the effectiveness
resulting from this retrieval is little different from that resulting
from a retrieval on the unmodified collection. Considering that
(1)
log
length j
log
log
weight of term i in document j
frequency of term i in document j
length j number of terms in document j
N number of documents in collection
number of documents in which term i occurs
Figure
1. Introducing size five pseudo-words into
the CACM collection.
Figure
2. Introducing size five pseudo-words into
the Cranfield 1400 collection.0.250.75Precision
Recall
Size 5 pseudo-words
Unmodified
the introduction of size five pseudo-words reduced the number of
distinct terms in the collections to a fifth, the relatively small
decrease in retrieval effectiveness is perhaps striking. The differences
in reductions across the collections is most likely due to the
differences in query length. TREC-B queries are, on average, 41
non-stop words in length as opposed to 12 for the CACM collection
and 10 for the Cranfield 1400.
2.3
Summary
This section has presented a methodology that uses pseudo-words
to explore the relationship between ambiguity and retrieval effec-
tiveness. Experimental results showed the impact of introduced
word sense ambiguity on retrieval effectiveness was not as significant
as might have been thought. In addition, the results showed a
link between the length of query submitted to a retrieval system
and the impact of that ambiguity on effectiveness.
2.4 Postscript
Since conducting this work, it has come to light that experiments
with a similar methodology but with a different purpose were carried
out by Burnett et al. [Burnett 79] who were performing experiments
using document signatures. They were investigating how
best to generate small but representative signatures from a docu-
ment. One of their experiments involved randomly pairing
together words in the same way that size two pseudo-words are
formed. They noted that retrieval effectiveness was not affected
greatly by this pairing, a result that is in agreement with those presented
here.
3 Analysis of frequency distribution
Although the experiments of Section 2.2, and those presented in
[Sanderson 94], showed query (and by implication document)
length to be an important factor in the relationship between ambiguity
and retrieval effectiveness, further analysis by Schütze &
Pedersen [Schütze 95] revealed that the skewed frequency distribution
of pseudo-senses could also be causing the relatively small
drops in effectiveness observed in those experiments. In this sec-
tion, this factor is analysed and experiments are conducted to
reveal its impact on retrieval effectiveness.
3.1 Examining the make up of pseudo-words
Words have very different frequencies of occurrence within a document
collection, as shown by Zipf [Zipf 49]. This can be demonstrated
by examining the text of the CACM collection which
contains approximately 7,500 distinct words occurring 100,000
times.
Figure 3. Introducing size five pseudo-words into the TREC-B collection.
Figure 4 shows the distribution of the frequency of occurrence
of this set of words. The graph shows their distribution is
skewed. Such a distribution is often referred to as a Zipfian
distribution. Therefore, creating pseudo-words by random selection
from these words is likely to result in pseudo-words composed of
pseudo-senses with a similar (Zipfian) skew. This becomes apparent
after examining the frequency of occurrence of the senses of
four randomly selected pseudo-words generated from the CACM
collection:
. the senses of the size five pseudo-word '12/span/prospect/preoccupi/nonprogram' 7
occurred 218, 18, 3, 2, and 1 times in the CACM collection respectively;
. the senses of 'assist/prohibit/minicomput/ness/inferior'
occurred 27, 5, 5, 2, and 1 times;
. the senses of 'taken/multic/purdu/beginn/pavlidi' occurred 28,
4, 2, and 1 times;
. and the senses of 'note/makinson/disappear/gilchrist/xrm'
occurred 97, 3, 2, 2, and 1 times.
7. The unusual ending of some of the words is due to the
application of a stemmer [Porter 80] to the words of the CACM
collection before formation of pseudo-words.
The extent to which this skewed distribution existed in pseudo-words
was more fully investigated: sets of size two, three, four,
five, and ten pseudo-words were created from the words of the
CACM collection, and the distribution of the frequency of occurrence
of their pseudo-senses was examined. For each of these
pseudo-words, it was found that one sense accounted for the majority
of occurrences of the pseudo-word of which it was a part. The
results of this analysis are shown in Table 1 which displays the percentage
of occurrences accounted for by a pseudo-word's commonest
sense. From these figures, it was concluded that the distribution
of the frequency of occurrence of the pseudo-senses was skewed.
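The micro-averaged figure reported in Table 1 can be computed as in the following sketch, where pseudo_senses maps each pseudo-word to the corpus frequencies of its senses (the sample data are the four CACM pseudo-words quoted above):

def commonest_sense_share(pseudo_senses):
    # Micro averaging: sum the commonest-sense counts over all pseudo-words
    # and divide by the total occurrences of all senses.
    top = sum(max(freqs) for freqs in pseudo_senses.values())
    total = sum(sum(freqs) for freqs in pseudo_senses.values())
    return 100.0 * top / total

sample = {
    "12/span/prospect/preoccupi/nonprogram": [218, 18, 3, 2, 1],
    "assist/prohibit/minicomput/ness/inferior": [27, 5, 5, 2, 1],
    "taken/multic/purdu/beginn/pavlidi": [28, 4, 2, 1],
    "note/makinson/disappear/gilchrist/xrm": [97, 3, 2, 2, 1],
}
print(commonest_sense_share(sample))  # approximately 88%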
3.2 Why do skewed pseudo-words not affect
effectiveness?
An examination was undertaken to discover if the skewed frequency
distribution of pseudo-senses was in part responsible for
the retrieval results presented in Section 2.2. Initially, the frequency
of occurrence of test collection query words was examined.
It was found that the majority of these words had a relatively high
frequency of occurrence in their respective collections. This was
significant as, if a high frequency query word was made part of a
Figure 4. Distribution of the frequency of occurrence of words in the CACM collection. Graph plotted on a logarithmic scale. Point A shows that around 3,600 of the words (about half of all words in the collection) occur in the collection only once. Point B shows that one word occurs around 3,000 times in the collection, accounting for 3% of all occurrences in the collection. (Axes: number of words with a given frequency of occurrence, against that frequency.)
Table 1. Percentage of occurrences accounted for by a pseudo-word's commonest sense (computed by micro averaging). The figures in brackets (shown for comparison) are the percentages that would result if pseudo-senses occurred in equal amounts. Measurements made on the CACM collection. (Columns: no. of senses; commonest sense %.)
pseudo-word, there was a high probability that the other pseudo-
senses of that pseudo-word would have low frequencies of occurrence
(because of skewed frequency distributions). Therefore, the
pseudo-word's commonest sense would account for the majority of
its occurrences and would, in effect, be little different from the
high frequency query word that was its main component. Conse-
quently, there would be little change in the retrieval effectiveness
of an IR system retrieving on that word.
To illustrate, query fourteen of the CACM collection contains the
word 'algorithm', which occurs in 1,333 documents. After pseudo-words
were introduced into the collection, this query word became
the commonest pseudo-sense of the word 'algorithm/telescop/
pomental/lanzano/mccalla', which occurred in 1,343 documents,
only ten more than the original query word. Therefore, turning this
relatively high frequency query word into a pseudo-word had little
impact on the word's frequency of occurrence and therefore, little
impact on its use in retrieval.
It was hypothesised that if the majority of query words were, like
'algorithm', the commonest sense of a pseudo-word, this would
help to explain the relatively small drop in retrieval effectiveness
resulting from the introduction of such words. To test this hypoth-
esis, size five pseudo-words were introduced into the CACM,
Cranfield 1400, and TREC-B collections and the number of query
words that were the commonest sense of a pseudo-word was
counted. As can be seen from the results in Table 2, the majority of
query words had this 'commonest sense' property. These results
certainly suggested that the skewed frequency distribution of a
pseudo-word's pseudo-senses was an additional cause of the relatively
small drop in retrieval effectiveness found in the experiments
presented in Section 2.2.
To further confirm this, the experiments of Section 2.2 were
repeated exactly as before, except that the pseudo-words introduced
into the collection were of a different type: their pseudo-
senses had an equal frequency of occurrence 8 . The graphs in
Figures 5, 6, & 7 show the difference in retrieval effectiveness
8. This type of pseudo-word was formed for a particular
document collection, by sorting that collection's word index by
(word) frequency of occurrence and then grouping contiguous sets
of sorted words into pseudo-words. This means of grouping
ensured that words with equal or almost equal frequency of
occurrence were joined together.
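A sketch of the grouping described in this footnote (frequencies supplied as a dict; an illustration, not the original code):

def even_pseudo_words(freqs, k):
    # Sort the word index by frequency of occurrence and group contiguous
    # runs of k sorted words, so that words with equal or almost equal
    # frequency are joined into one pseudo-word.
    ranked = sorted(freqs, key=freqs.get, reverse=True)
    mapping = {}
    for i in range(0, len(ranked) - len(ranked) % k, k):
        group = ranked[i:i + k]
        pseudo = "/".join(group)
        for w in group:
            mapping[w] = pseudo
    return mapping

freqs = {"algorithm": 1333, "system": 1320, "data": 900, "graph": 890}
print(even_pseudo_words(freqs, 2))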
Table 2. Percentage of query words that were the commonest sense of a pseudo-word.
                              CACM   Cranfield   TREC-B
Number of queries               52        225       285
Number of query words          645       2159     11848
Query words in collection        .          .         .
Commonest sense query words      .          .         .
Figure 5. Comparison of pseudo-word types on the CACM collection.
when size five pseudo-words, whose senses have even distributions,
are introduced into a collection. As can be seen, across all
three collections, the impact on effectiveness of introducing
pseudo-words with even distributions is significantly greater than
the introduction of pseudo-words with a skewed distribution.
From these results and the analysis of query words, it was concluded
that the relatively low impact of ambiguity reported in the
experiments of Section 2 was not only due to the relatively long
queries of the collections but also due to the skewed frequency distribution
of the pseudo-senses used in the experiments.
4 Do pseudo-words model ambiguous
words well?
Given the discussion so far, it would not be unreasonable to wonder
how well pseudo-words model ambiguous words. In other
words, do the senses of ambiguous words have the same skewed
distribution as pseudo-senses? There is a well known result from a
number of disambiguation researchers that suggests this is the case.
In their research on establishing a lower bound baseline for measuring
the significance of a disambiguator's accuracy, Gale et al.
[Gale 92a] found that if a disambiguator used a strategy of selecting
the commonest sense of a word, it would be correct 75% of the
time. More recently, Ng & Lee [Ng 96] reported on the creation of
a sense tagged corpus containing 191,000 word occurrences. They
found the commonest sense of the words they tagged (which had
on average nine senses per word) accounted for 64% of all occurrences.
It is possible to measure the frequency distribution of word senses
using the SEMCOR sense tagged corpus which is publicly released
with WordNet [WordNet], [Miller 95]. It is a 100,000 word corpus
consisting of around 15,000 distinct words. All word occurrences
were manually tagged with senses as defined in the WordNet thesaurus
(version 1.4). Using this corpus, the distribution of the frequency
of occurrence of ambiguous word senses can be plotted (Figure 8).
Examining the graph in Figure 8 reveals that the senses
in the SEMCOR corpus have a skewed frequency distribution similar
to that of the words in the CACM collection as shown in Figure 4.
Figure 6. Comparison of pseudo-word types on the Cranfield 1400 collection.
Figure 7. Comparison of pseudo-word types on the TREC-B collection.
(Figures 5-7 plot precision against recall for size 5 pseudo-words with even distributions, size 5 pseudo-words with skewed distributions, and the unmodified collection.)
As was done with pseudo-words, the distribution of the frequency
of occurrence of word senses within ambiguous words was
examined; Table 3 displays the percentage of occurrences
accounted for by a word's commonest sense. The percentage was
computed for separate sets of words, where the set a word belongs to is
defined by the number of senses that word has. As can be seen, a
word's commonest sense accounts for the majority of that word's
occurrences, thus confirming the results of Gale et al. and of Ng &
Lee.
Tables 1 & 3 show a similarity. It was unexpected that the
frequency of occurrence of an ambiguous word's senses would be
frequency of occurrence of an ambiguous word's senses would be
modelled so well by the random word selection process used to
form pseudo-words. Although both senses and words were found
to have a similar skew within a collection, one might have
anticipated that the distribution of the senses of an ambiguous word
would be affected by other factors that would not be captured by
the pseudo-word creation process. This, however, did not seem to
be the case and therefore, it was concluded that the skewed
distribution of a pseudo-word's pseudo-senses is a good model of
this aspect of an ambiguous word 9 .
4.1 Does 'real' ambiguity impact on retrieval
effectiveness?
Given that the senses of ambiguous words have the same skewed
distribution as the pseudo-senses of pseudo-words, and that such a
distribution is an identified cause of the small drop in retrieval
9. The strength of similarity shown in Tables 1 & 3 may be
coincidental; only more extensive tests on manually tagged corpora
(when available) will be able to confirm or deny this. Further, it
should be noted that a typical feature of most manually
disambiguated corpora, currently available, is the lack of multiple
manual assessments of their words. The observations of skew
described and referenced in this paper are largely based on corpora
having this property. It is well documented that levels of inter
assessor agreement in sense tagging tasks can be low [Gale 92a].
Therefore, it is possible that the skewed distribution of senses will
have different properties on multiply assessed corpora. Once such
resources are available, it may be prudent to re-examine this issue.
Figure 8. Distribution of the frequency of occurrence of senses in the SEMCOR corpus. Graph plotted on a logarithmic scale. (Axes: number of word senses with a given frequency of occurrence, against that frequency.)
Table 3. Percentage of occurrences accounted for by the commonest sense of a word (computed by micro averaging). The figures in brackets (shown for comparison) are the percentages that would result if senses occurred in equal amounts. Measurements made on the SEMCOR corpus.
No. of senses   Size of set   Commonest sense %
6               448           68 {17}
9               141           .
effectiveness found in the experiments of Section 2.2, it might
seem reasonable to question if this small drop will also be found in
the case of real ambiguity. As was shown in the analysis of
pseudo-senses in Section 3.2, when the senses of a large percentage
of ambiguous query words have a skewed distribution and are
used in the commonest sense present in the collection (being
retrieved from), ambiguity will not greatly impact on effectiveness.
A short study of the senses of the query words of one hundred
TREC queries (No. 201-300) was undertaken to determine how
many words were used in their commonest sense. The short title
sections of these queries were manually disambiguated (by one
person) with respect to the sense definitions of WordNet. This thesaurus
was chosen as it contains relative frequency of occurrence
information for senses, which allows one to determine the commonest
sense of any word in WordNet. Examining the nouns of these
queries, it was found that 158 out of 207 (76%) were used in their
commonest sense. (There were 11 nouns not found in WordNet or
whose intended sense was not defined.) For these queries at least,
the commonest sense predominates.
There is one caveat to the use of WordNet described here: the
measurement of the commonest sense is calculated from corpora
other than the collections the TREC queries were intended for.
Therefore, it is possible that a word's commonest sense in the
TREC collections is somewhat different from that defined in Word-
Net. In order to allow for this, it would be necessary to analyse the
frequency distributions of senses in both query and collection.
Such a study was performed in Krovetz & Croft's wide ranging
analysis of ambiguity and retrieval effectiveness [Krovetz 92].
They examined the patterns of usage of the senses of query words
in the CACM collection. Comparing their usage in the collection
as a whole to that found in the queries, they found a similar pattern
between the two: i.e. the commonest sense of a query word as used
in a collection was the commonest sense of that word in a query.
They used this study to conclude that the skewed distributions
of ambiguous words were an important factor in the overall
low impact of ambiguity on retrieval effectiveness.
From their results and those of the short study described above, it
was concluded that pseudo-words accurately simulate the skewed
distribution of ambiguous words and the results drawn from
pseudo-word based retrieval experiments provide a good indication
of what will happen in the 'real case'.
4.2 Other aspects of ambiguity and pseudo-word simulation
Before drawing the discussion on pseudo-word simulation to a
close, it is necessary to address one other aspect of senses and its
simulation by pseudo-words, namely the relatedness of a word's
senses. Although there are some words, known as homographs or
homonyms, whose senses have no relationship (the geographic and
economic senses of 'bank' being one, oft-cited, example), the
majority of ambiguous words have senses that are related in some
manner. Such words are said to be polysemous.
Because of their random selection from a corpus, the pseudo-
senses of a pseudo-word have no relationships between them and it
is necessary to examine how important this deficiency of pseudo-words
is. The relationships between a word's senses can take one
of two forms. For one, the lack of pseudo-sense relatedness may
be important; for the other, it may be less so. The two are now discussed.
4.2.1 Related but not important?
There are some words which have senses that are related but in a
manner that is of little importance when considering pseudo-
words. Two examples of this relationship are now described.
. Etymological relationships - senses that are related through
some historical link. The word 'cardinal' as a religious figure
and 'cardinal' as a mathematical concept are etymologically
related.
. Metaphorical relationships - 'surfing' as a water sport or 'surf-
ing' as a pursuit of browsing web pages.
Although the senses of these words are related, it is questionable
how important the relatedness is in relation to the accuracy of
pseudo-word simulation. The random formation of pseudo-words
brings together words that appear in different contexts and it could
well be that this is an accurate simulation of etymologically and
metaphorically related senses. One can imagine, that the two
senses of the word 'surf', will appear in quite different contexts
and so in that respect will be similar to the randomly selected
pseudo-senses of a pseudo-word.
This behaviour of ambiguous words has been observed in two anal-
yses. Yarowsky [Yarowsky 93] tested the hypothesis that a word's
different senses appear in different contexts. For the words he
examined, Yarowsky found the hypothesis to be "90-99% accurate
for binary ambiguities". Gale et al. [Gale 92b] examined the
broader context of senses showing that if a word is used in a particular
sense in a discourse, all other occurrences of the word in that
discourse were used in the same sense (this quality of senses was
mimicked by the pseudo-words used in the experiments of this
paper). From the two studies, it would appear that for the classes
of polysemous word described so far, the relatedness of senses may
not be as important as imagined.
4.2.2 Related and important
The applicability of the Yarowsky and Gale et al. studies may be
limited, however, as both works examined words which had a small
number of broadly defined and distinct senses. Many words have a
larger number of more tightly related senses and for these, the different
context and same discourse rules may not apply as consist-
ently. For example, it is not hard to imagine that the word 'French'
could refer to the language and the people within the same discourse
surrounded by similar contexts. For words of this kind, the
lack of pseudo-sense relatedness may be significant. It is not an
issue addressed by the experiments presented here. An examination
of this area is left for future work.
5 Applications of ambiguity analysis
From the work presented by the author in [Sanderson 94] and that
presented in this paper, it has been shown that sense ambiguity
impacts on retrieval effectiveness far less than was originally
thought. This does not mean, however, that ambiguity should be
ignored. It is believed that the factors of query/document length,
and of skewed distribution of senses can be used as a means of
assessing when ambiguity will be a significant problem to a
retrieval system and, therefore, suggest when some form of disambiguation
(on document or query) should be investigated.
Already the statements on the utility of disambiguation for short
queries have been supported through experimentation by Sanderson
[Sanderson 96] who showed a small improvement in effectiveness
for retrievals based on single word queries when documents
and queries were represented by word senses, identified by an automatic
disambiguator. Similarly, results of experiments investigating
manual disambiguation of short documents (image captions)
by Smeaton & Quigley [Smeaton 96] has also provided evidence
showing effectiveness improving for this type of retrieval.
The analysis of sense frequency distributions presented in this
paper provides an explanation for the results of Schütze & Pedersen
[Schütze 95], whose use of a disambiguator on large queries
and documents resulted in a 7-14% improvement in retrieval effectiveness,
the first published results showing an automatic disambiguator
working successfully with an IR system.
To understand the reasons for their results, which apparently contradict
those presented here, it is necessary to first explain how
Schütze & Pedersen's disambiguator worked. Unlike a 'classic'
disambiguator, it did not use a dictionary or thesaurus as a source
of word sense definitions; instead it used only the corpus to be
disambiguated. Its disambiguation method was as follows. For each
word in the corpus, the context of every occurrence of that word
within the corpus was examined and common contexts were clustered.
For example, given the word 'ball', one might find that
within a corpus of newspaper articles, this word appears in a
number of common contexts: a social gathering; and perhaps a
number of different sports (tennis, football, cricket, etc.). For
Schütze & Pedersen's disambiguator, each one of these common
contexts constituted an individual sense of the word. This is what
is unusual about their disambiguator: the senses are quite unlike
those found in a dictionary. It is unlikely, for instance, that a dictionary
would distinguish between different types of the sporting
sense of 'ball'. A further difference is that the disambiguator only
attempted to identify the commonest senses of a word: Schütze and
Pedersen stated that a common context was only identified as a
sense if it occurred more than fifty times in the corpus. So different
are Schütze & Pedersen's senses from the classic definition of the
word that they are referred to here as word uses instead.
The differences between uses and senses identified here cause the
frequency distribution of word uses to be different from that of
word senses. The requirement that uses must occur at least fifty
times eliminates the very infrequent and therefore makes the frequency
distribution of uses less skewed. In addition, it is likely
that the commonest senses of a word will be employed in a number
of distinct uses: e.g. the sporting sense of 'ball', mentioned above,
written in tennis, football, and cricket contexts. The breaking up of
a word's commonest senses would have the effect of causing the
frequency distribution of word uses to be less skewed than those of
word senses.
From the results in Section 3.2 comparing the detrimental impact
on retrieval effectiveness caused by even and skewed distributions,
it was shown that even distributions impact more on effectiveness
than skewed. As it is believed that word uses have a less skewed,
and therefore more even, frequency distribution when compared to
word senses, it is concluded that the improvement in retrieval
effectiveness reported by Schütze & Pedersen is due to this difference
in the frequency distributions 10 .
5.1 Other skewed frequency analyses
In this section, other processes of IR are examined using the methodology
of analysing distributions of frequencies of occurrence in
relation to retrieval effectiveness. The examinations are brief and
10. As stated in the introduction, this explanation is suggested by
Schütze & Pedersen though not directly tested through
experimentation.
are intended only to show the potential of the analysis rather than
to provide a thorough study. Five areas are examined: document
signatures, stemming, Optical Character Recognition (OCR),
Cross Language Information Retrieval (CLIR), and query expansion.
5.2 Document signatures
As was already shown in Section 2.4, pseudo-words have a potential
utility in reducing the size of document signatures. Further investigation
of the relationship between signature size and retrieval
effectiveness, mediated through different forms of pseudo-words,
may prove useful.
5.3 Stemming
The positive impact of stemming on retrieval effectiveness is at
best regarded as minimal. Harman [Harman 87] examined three
stemming techniques on a set of test collections and concluded that
stemming did not result in improvements in retrieval effectiveness.
In contrast, Krovetz [Krovetz 93] with his more sophisticated stemmer
showed a small but consistent and significant improvement
over a number of test collections, in particular, those having short
documents. More recently, Xu and Croft [Xu 98], using a corpus
based enhancement to the Porter stemmer [Porter 80], have shown
further, but again small, improvements over Krovetz's stemmer.
One possible explanation for the relatively small improvements
brought about by stemming may lie in the skewed frequency of
occurrence of word stem variants. The process of stemming word
variants to a morphological root has similarities to the formation of
pseudo-words with the difference that stemming is intended to
improve retrieval effectiveness. Even with a cursory examination
of word variants as shown in Table 4, one can see that their frequency
of occurrence appears to follow a similar skew to that
found in the analysis of pseudo-words, although with the amount
of skew varying.
Stemming was also studied by Church [Church 95], who examined
the correlation (in terms of document co-occurrence) between the
variants of a word stem. Church presented his correlation measure
as a way of predicting the worth of stemming a particular word,
though it was not actually tested in retrieval experiments. By
examining the relative frequency of occurrence of stem variants, it
may be possible to complement Church's work by producing an
enhanced predictor of the worth of stemming.
5.4 OCR
Smeaton and Spitz [Smeaton 97] have examined a type of pseudo-word
in OCR called a Word Shape Token (WST). These are words
composed of a seven letter alphabet, known as Character Shape
Codes (CSCs), into which the English 52 letter alphabet (capital
and lower case) is mapped based on characteristics of letter shape.
Smeaton and Spitz state that the advantages of recognising CSCs
over letters is an order of magnitude speed increase in recognition
along with greater accuracy. The disadvantage is that many words
are mapped to the same WST, making WSTs similar to pseudo-
words. The amount of concatenation varies depending on the letters
of a word and on its length: longer words are, for example, less
likely to map to the same WST.
Table 4. Frequency of occurrence of Porter stem variants of the words 'water' and 'wonder' as measured in a small document collection.
Variant    Occs.   Variant     Occs.
water      121     wonderful   36
waters     31      wonder      28
watered    1       wondering   .
watering   1       wondered    12
.          .       wonders     3
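To make the WST idea concrete, here is a sketch with a hypothetical shape-code assignment (the published CSC classes of Smeaton and Spitz are based on letter shape; the particular mapping below is an illustrative assumption, not their table):

SHAPE = {}
for ch in "ABCDEFGHIJKLMNOPQRSTUVWXYZbdfhklt":
    SHAPE[ch] = "A"  # capitals and ascenders rise above the x-height
for ch in "acemnorsuvwxz":
    SHAPE[ch] = "x"  # x-height letters
for ch in "gpqy":
    SHAPE[ch] = "g"  # letters with descenders
for ch in "ij":
    SHAPE[ch] = "i"  # dotted letters

def wst(word):
    # Map a word to its Word Shape Token; many words share one WST,
    # which is what makes WSTs behave like pseudo-words.
    return "".join(SHAPE.get(ch, "?") for ch in word)

print(wst("bank"), wst("lark"))  # distinct words, same token: AxxA AxxA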
Smeaton and Spitz examined the impact on retrieval effectiveness
of retrieving from documents represented by WSTs instead of
words. Initial experiments conducted by them showed very large
reductions; however, they stated this was due to certain query
words mapping to the same WSTs with as many as a thousand
other words. Through a process of eliminating these massively
concatenated words from queries, the average number of query
words mapping to the same WST was around twenty and the
reduction in retrieval effectiveness compared to using just words
was approximately a half.
In the light of the work presented in this paper, it is anticipated that
an analysis of frequency distributions of the component words of
WSTs may provide indications of how better to choose which
WSTs should be eliminated from a query in order to maximise
retrieval effectiveness.
5.5 CLIR
Given the evidence already presented on the skewed frequency
distribution of senses defined in dictionaries and thesauri, it would
seem reasonable to wonder if CLIR systems using translation dictionaries
will be good candidates for an analysis of the frequency
distribution of the possible translations of words. If for example,
the distribution of a word's translations were mostly skewed, and in
general its commonest translation was the correct one, then it may
be possible that translating a word into all its possible translations
would not harm retrieval effectiveness by much. In their experi-
ments, Hull and Grefenstette [Hull 96] used a translation dictionary
and reported that using a strategy of concatenating a word's
possible dictionary translations produces retrieval effectiveness
that "performs quite well given its simplicity". However, they also
state that introducing incorrect translations "seriously hurts per-
formance". From such statements, it is not really possible to determine
much about frequency distribution and ambiguity.
In order to gain an additional understanding about the use of translation
dictionaries in CLIR, an analysis of one dictionary, the Collins
English-Spanish bilingual machine readable dictionary (as
described in [Ballesteros 97]), was conducted by measuring the
frequency of occurrence of the English translations of Spanish
words. Using a method similar to that performed in Section 4 on
the sense tagged SEMCOR corpus, the translations were grouped
into sets based on the number of translations there were for a Spanish
word. The frequency of occurrence of the English translations
(after being stemmed) was measured in the TREC-B collection,
1991-93. The results are shown in Table 5. As can be seen, for
each of the sets, on average, the commonest translation accounts
for the vast majority of occurrences.
Table 5. Percentage of occurrences accounted for by the commonest translation of a Spanish word (computed by micro averaging). The figures in brackets (shown for comparison) are the percentages that would result if translations occurred in equal amounts. Measurements made using the Collins Spanish to English dictionary and the TREC-B collection.
Number of translations   Size of set   Commonest translation %
6                        625           53 {17}
8                        268           44 {13}
The dominance of the commonest
sense is less strong than that shown in pseudo-words and
word senses, but, nevertheless, it is present. This short analysis
seems to suggest that the skewed frequency of occurrence of the
possible translations of a word in some part accounts for the relative
success of the simplistic translation strategy reported by Hull
and Grefenstette.
5.6 Query expansion
The automatic expansion of query words with words chosen from a
thesaurus or dictionary has not been successful. Voorhees [Voorhees 94]
tried automatically expanding the words of TREC queries
with related words taken from the WordNet thesaurus [WordNet]
without success. One of the unusual qualities of TREC queries is
their great length (on average 41 non-stop words per query) and
one might speculate that the reasons for Voorhees' lack of success
can be attributed to this feature: perhaps the expansion terms were
of poor quality and added little to the already large number of terms.
At TREC-6 a new task was introduced: the very short query task,
ad hoc retrieval based on the title section of TREC queries which
were on average 2.5 non-stop words in length.
It was hypothesised that because the queries were shorter, expansion
techniques like that tried by Voorhees may be more successful.
Therefore, such a method was attempted. The one chosen was a
semi-automatic form that required the manual identification of the
sense of each query word followed by the automatic expansion of
the identified senses with synonyms taken from the WordNet the-
saurus. The motivations and results of the experiment are
described in detail in the report to TREC-6 [Crestani 97]. The
main conclusion was that even with the short queries, the expansion
method did not improve retrieval effectiveness over a strategy
of leaving the query alone. Over the 45 queries tested 11 (queries
251-300), this strategy was found to leave 14 unchanged, improve
8 queries, and degrade 23. In the light of the frequency distribution
work reported in this paper, however, a possible improvement to
the expansion process was hypothesised.
Perhaps query words used in their commonest sense did not need
expansion as their sense would be so prevalent in the collection
anyway. If, however, a query word was used in one of its less common
senses, expansion might prove useful in ensuring that documents
containing that sense were ranked highly. (Assuming of
course that the expansion words were used in those documents in
their 'correct' sense.)
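A sketch of this selective strategy, with the sense inventory supplied as plain data (a stand-in for the WordNet sense-frequency information used in the experiment; the entries are hypothetical):

# Hypothetical inventory: per word, each sense's relative frequency and synonyms.
SENSES = {
    "bank": [
        {"freq": 0.85, "synonyms": ["depository", "lender"]},    # financial
        {"freq": 0.15, "synonyms": ["riverside", "embankment"]}, # geographic
    ],
}

def expand_query(query_senses):
    # query_senses is a list of (word, sense_index) pairs, the index coming
    # from the manual disambiguation step. Only words used in a sense other
    # than their commonest one are expanded with that sense's synonyms.
    terms = []
    for word, idx in query_senses:
        terms.append(word)
        senses = SENSES.get(word, [])
        if senses and idx != max(range(len(senses)), key=lambda i: senses[i]["freq"]):
            terms.extend(senses[idx]["synonyms"])
    return terms

print(expand_query([("bank", 1)]))  # ['bank', 'riverside', 'embankment']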
A repeat of the TREC experiment to test this hypothesis was conducted
on the TREC-B collection using the same 45 queries
described above. Expansion was conducted in the same manner:
manually identifying the sense of query words and expanding less
common senses with synonyms taken from WordNet. Information
on the frequency of occurrence of word senses was gained from
WordNet. Although this may not reflect the frequencies of occurrence
found in the document collection, it was hoped that this
information would be accurate enough for the purposes of this
experiment.
Using the strategy of only expanding the less common senses of
query words on the TREC queries resulted in 36 queries being left
unchanged, 4 improved, and 5 degraded. The increased number of
unchanged queries was not surprising given that fewer expansions
11. Five of the fifty queries had no relevant documents and were
ignored in this experiment.
took place. The ratio of improved to degraded queries changed
from around 1:3 to almost 1:1, although the degradation from the
five queries was worse than the improvement from the four. Nevertheless,
the study appeared to indicate that the strategy of targeting
query words using a less common sense was promising, though
obviously one that required improvement before it could be
employed in any retrieval system.
6 Conclusions
In this paper, a series of experiments were presented that measured
and analysed the impact on retrieval effectiveness of word sense
ambiguity. All of these experiments applied a methodology that
used a form of artificial ambiguity known as pseudo-words. The
skewed frequency distribution of pseudo-senses was described and
shown to be one important factor in the impact of ambiguity on
effectiveness. Experiments and analyses were presented that
showed the skew to be a good model of the frequency distribution
of the senses of actual ambiguous words. This provided further
evidence that conclusions drawn from experiments based on
pseudo-words are applicable to cases of real ambiguity.
Through further experimentation and reference to previous work it
was confirmed that the self-disambiguating nature of long queries
and the skewed frequency distribution of the word senses are both
factors in the low impact on retrieval effectiveness of word sense
ambiguity. Recent research on disambiguation and IR was analysed
and the two factors were shown to play an important role in
explaining the success of these approaches.
Finally, a number of additional processes of retrieval were examined
in the light of knowledge about skewed frequency distributions.
These analyses appeared to provide insight into the reasons
for these processes' impacts on retrieval effectiveness.
Acknowledgements
The authors wish to thank Ian Ruthven, Mounia Lalmas, Bob
Krovetz and the reviewers for their comments on earlier drafts of
this paper. This work was supported by the VIPIR project which
was funded by the University of Glasgow.
--R
Ballesteros 97 L.
Burnett 79 J.
Church 95 K.
Gale 92b W.
Gale 92c W.
Harman 87 D.
Harman 92 D.
Harman 96 D.
Hull 96 D.
Kilgarriff 97 A.
Krovetz 92 R.
Krovetz 93 R.
Lesk 86 M.
Miller 95 G.
Ng 96 Hwee Tou Ng
Rijsbergen 79 C.
Sanderson 94 M.
Sanderson 96 M.
Small
Smeaton 96 A.
Smeaton 97 A.
Sparck Jones 76 K.
Sussna 93 M.
Voorhees 93 E.
Voorhees 94 E.
Weiss 73 S.
Wilks 90 Y.
WordNet 'www.cogsci.princeton.edu/~wn/'.
Miller (Principal Investigator).
--TR
A failure analysis of the limitation of suffixing in an online environment
Ranking algorithms
Representation and learning in information retrieval
Lexical ambiguity and information retrieval
Using WordNet to disambiguate word senses for text retrieval
Viewing morphology as an inference process
Word sense disambiguation for free-text indexing using a massive semantic network
Query expansion using lexical-semantic relations
Word sense disambiguation and information retrieval
One term or two?
WordNet
Querying across languages
Experiments on using semantic distances between words in image caption retrieval
Phrasal translation and query expansion techniques for cross-language information retrieval
Corpus-based stemming using cooccurrence of word variants
Automatic sense disambiguation using machine readable dictionaries
Extended Boolean information retrieval
Explorations in Automatic Thesaurus Discovery
Information Retrieval
Using Character Shape Coding for Information Retrieval
--CTR
I. Nakov , Marti A. Hearst, Category-based pseudowords, Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology: companion volume of the Proceedings of HLT-NAACL 2003--short papers, p.67-69, May 27-June 01, 2003, Edmonton, Canada
Sang-Bum Kim , Hee-Cheol Seo , Hae-Chang Rim, Information retrieval using word senses: root sense tagging approach, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Christopher Stokoe, Differentiating homonymy and polysemy in information retrieval, Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, p.403-410, October 06-08, 2005, Vancouver, British Columbia, Canada
Athanasios Kehagias , Vassilios Petridis , Vassilis G. Kaburlasos , Pavlina Fragkou, A Comparison of Word- and Sense-Based Text Categorization Using Several Classification Algorithms, Journal of Intelligent Information Systems, v.21 n.3, p.227-247, November
Mike Thelwall, Text characteristics of English language university Web sites: Research Articles, Journal of the American Society for Information Science and Technology, v.56 n.6, p.609-619, April 2005 | word sense disambiguation;pseudowords;word sense ambiguity |
327790 | A Parallel Algorithm for Mesh Smoothing. | Maintaining good mesh quality during the generation and refinement of unstructured meshes in finite-element applications is an important aspect in obtaining accurate discretizations and well-conditioned linear systems. In this article, we present a mesh-smoothing algorithm based on nonsmooth optimization techniques and a scalable implementation of this algorithm. We prove that the parallel algorithm has a provably fast runtime bound and executes correctly for a parallel random access machine (PRAM) computational model. We extend the PRAM algorithm to distributed memory computers and report results for two- and three-dimensional simplicial meshes that demonstrate the efficiency and scalability of this approach for a number of different test cases. We also examine the effect of different architectures on the parallel algorithm and present results for the IBM SP supercomputer and an ATM-connected network of SPARC Ultras. | Introduction
. Unstructured meshes have proven to be an essential tool in the numerical
solution of large-scale scientific and engineering applications on complex computational domains. A
problem with such meshes is that the shape of the elements in the mesh can vary significantly, and
this variation can affect the accuracy of the numerical solution. For example, for two-dimensional
triangulations classical finite element theory has shown that if the element angles approach the
limits of 0 the discretization error or the condition number of the element matrices can
be adversely affected [3, 12].
Such poorly shaped elements are frequently produced by automatic mesh generation tools, particularly
near domain boundaries. In addition, adaptive refinement techniques used during the
solution of a problem tend to produce more highly distorted elements than were contained in the
initial mesh, particularly when the adaptation occurs along curved boundaries [18].
To obtain high-quality meshes, often one must repair or improve the meshes before or during the
solution process. This improvement should be based on an element quality measure appropriate for
the particular problem being solved. Two mesh improvement techniques that have proven successful
on sequential computers are face (edge) swapping and mesh smoothing [2, 6, 7, 8, 15, 16, 22]. How-
ever, sequential mesh optimization methods are not appropriate for applications using distributed-memory
computers because (1) the mesh is usually distributed across the processors, (2) the mesh
may not fit within the memory available to a single processor, and (3) a parallel algorithm can
significantly reduce runtime compared with a sequential version. For such applications, parallel
algorithms for mesh improvement techniques are required, and in this paper we present an efficient
and robust parallel algorithm for mesh smoothing.
We have organized the paper as follows. In Section 2, we briefly review various local mesh
smoothing techniques, including Laplacian smoothing and a number of optimization-based ap-
proaches. The parallel algorithm and theoretical results for correct execution and the parallel
runtime bound are discussed in Section 3. In Section 4, we present numerical results obtained
on the IBM SP and an ATM-connected network of SPARC Ultras that demonstrate the scalability
of our algorithm.
2. Local Mesh-Smoothing Algorithms. Mesh-smoothing algorithms strive to improve the
mesh quality by adjusting the vertex locations without changing the mesh topology. Local smoothing
algorithms adjust the position of a single grid point in the mesh by using only the information at
Assistant Computer Scientist, Mathematics and Computer Science Division, Argonne National Laboratory, Ar-
gonne, IL.
y Assistant Professor, Computer Science Department, The University of Tennessee at Knoxville, Knoxville, TN.
z Computer Scientist, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL.
incident vertices rather than global information in the mesh. A typical vertex, v, and its adjacent
set, adj(v), are shown in Figure 2.1. The vertices in the adjacent set are shown as solid circles in the
figure. As the vertex v is moved, only the quality of the elements incident on v, shown as shaded
triangles in the figure, are changed. Vertices not adjacent to v, shown as unfilled circles, and the
quality of elements that contain these vertices are not affected by a change in the location of v. One
or more sweeps through the mesh can be performed to improve the overall mesh quality. Thus, it is
critical that each individual adjustment be inexpensive to compute.
Fig. 2.1. A vertex v and the elements whose quality is affected by a change in its position. The neighbors of v
are shown as solid circles. Only the quality of the shaded elements is affected by changing the position of vertex v.
To be more specific, we can represent any local smoothing technique as a function, smooth(),
that, given the location x_v of a vertex v and its neighbors' locations, x_adj(v), returns a new
location x'_v. Thus, the sequential form of any local mesh smoothing algorithm is given by the simple
loop in Figure 2.2, where V is the set of vertices in the mesh to be smoothed. The positions of
Choose an ordering v_1, v_2, ..., v_n of the vertices in V
For i = 1, ..., n do
    x_{v_i} <- smooth(x_{v_i}, x_{adj(v_i)})
Enddo
Fig. 2.2. The local smoothing algorithm for sequential implementation
the vertices after a sweep are not unique and are determined by the ordering in which the vertices
are smoothed. This aspect of local mesh smoothing techniques will be discussed in more detail in
Section 2.4.
The action of the function smooth is determined by the particular local algorithm chosen, and
in this section we briefly review several previously proposed techniques.
2.1. Laplacian Smoothing. Perhaps the most commonly used local mesh-smoothing technique
is Laplacian smoothing [9, 20]. This approach replaces the position of a vertex v by the
average of its neighbors' positions. The method is computationally inexpensive, but it does not
guarantee improvement in element quality. In fact, the method can produce an invalid mesh containing
elements that are inverted or have negative volume. An example showing how Laplacian
1 The smoothing function might require information in addition to neighbor vertex position. For example, for
nonisotropic problems the function may require the derivatives of an approximate solution at v and adj(v), or other
specific information about the elements that contain these vertices. However, this information is still local and can
be included within this framework.
smoothing can lead to an invalid mesh is shown in Figure 2.3.
Fig. 2.3. A set of elements for which Laplacian smoothing of the center vertex v results in an invalid triangu-
lation. The shaded square marks the average of the positions of the vertices adjacent to v.
A variant of Laplacian smoothing that guarantees a valid or improved mesh allows the vertex v
to move only if (1) the local submesh still contains valid elements or (2) some measure of mesh quality
is improved. We note that evaluating these rules significantly increases the cost of the Laplacian
smoothing technique [10].
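A minimal sketch of this guarded variant for a two-dimensional submesh (Python; the data layout is an assumption made for the example): the vertex moves to the centroid of its neighbors only if every incident triangle keeps positive signed area.

def signed_area(a, b, c):
    # Twice the signed area of triangle (a, b, c); positive when the
    # vertices are in counter-clockwise order.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def guarded_laplacian(v, neighbors, incident, positions):
    # neighbors: vertex ids adjacent to v; incident: (a, b) id pairs such
    # that (v, a, b) is a counter-clockwise triangle; positions: id -> (x, y).
    cx = sum(positions[u][0] for u in neighbors) / len(neighbors)
    cy = sum(positions[u][1] for u in neighbors) / len(neighbors)
    candidate = (cx, cy)
    for (a, b) in incident:
        if signed_area(candidate, positions[a], positions[b]) <= 0.0:
            return positions[v]  # reject: the move would invert an element
    return candidate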
2.2. Optimization-based Smoothing. Optimization-based smoothing techniques offer an
alternative to Laplacian smoothing that can be inexpensive, can guarantee valid elements in the
final mesh, and are effective for a wide variety of mesh quality measures. Several such techniques
have been proposed recently, and we briefly review those methods now. The methods differ primarily
in the optimization procedure used or the quantity that is optimized.
Bank [4] describes a smoothing procedure for two-dimensional triangular meshes that uses the
element shape quality measure given by
    q(t) = 4\sqrt{3} A / (l_1^2 + l_2^2 + l_3^2),
where A is the area of the triangular element and l_i is the length of edge i. The maximum value for
q(t) corresponds to an equilateral triangle. Each local submesh is improved by using a line search
procedure. The search direction is determined by the line connecting the current position of v to the
position that results in the worst element becoming equilateral. The line search terminates when
at least one other element's shape quality value equals that of the improving element. One variant
of this technique attempts to directly compute the new location by using the two worst elements in
the local submesh. In this case the line search procedure is used only in the cases for which the new
position results in a third element, different from the original two worst elements, with the smallest
shape measure.
Shephard and Georges describe a similar approach for tetrahedral meshes [23]. The shape
function for each element incident on v is computed by using the formula
    q(t) = \kappa V / (A_1 + A_2 + A_3 + A_4)^{3/2},
where V is the volume of the element and A_i is the area of face i. The parameter \kappa is chosen
so that q(t) has a maximum of one corresponding to an equilateral tetrahedron. A line search
similar to that done by Bank is performed, where the search direction is determined by the line
connecting the current position of v to the position that improves the worst element in the local
submesh to equilateral. The line search subproblem is done by using the Golden Section procedure
and terminates when the worst element is improved beyond a specified limit.
Freitag et al. [10, 11] propose a method for two- and three-dimensional meshes based on the
steepest descent optimization technique for nonsmooth functions. The goal of the optimization
approach is to determine the position that maximizes the composite function
    \phi(x) = min_{1 <= i <= l} f_i(x),
where the functions f_i are based on various measures of mesh quality such as max/min angles and/or
element aspect ratios, and l is the number of functions defined on the local submesh. For example, in
two-dimensional triangular meshes, maximizing the minimum angle of a local submesh containing
m triangles involves l = 3m angle function evaluations. For most quality measures of interest,
the functions are continuous and differentiable. If the derivatives of the composite function \phi(x) are
discontinuous, the discontinuity occurs when there is a change in the set of functions that obtain the
minimum value. The search direction at each step is computed by solving a quadratic programming
problem that gives the direction of steepest descent from all possible convex linear combinations of
the gradients in the active set. The line search subproblem is solved by predicting the points at
which the set of active functions will change, based on the first-order Taylor series approximations
of the f_i(x).
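The composite objective itself is simple to evaluate; the sketch below (an illustration of \phi only, not of the quadratic-programming search direction) computes the minimum incident angle as a function of the free vertex position.

import math

def interior_angles(a, b, c):
    # The three interior angles (radians) of triangle (a, b, c).
    def angle_at(p, q, r):
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        cos = ((v1[0] * v2[0] + v1[1] * v2[1])
               / (math.hypot(*v1) * math.hypot(*v2)))
        return math.acos(max(-1.0, min(1.0, cos)))
    return angle_at(a, b, c), angle_at(b, c, a), angle_at(c, a, b)

def phi(x, incident):
    # phi(x) = min_i f_i(x), the f_i here being the angles of the triangles
    # (x, a, b) incident on the free vertex; incident holds the (a, b) pairs.
    return min(min(interior_angles(x, a, b)) for (a, b) in incident)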
Amenta et al. show that the optimization techniques used in [10, 11] are equivalent to the
generalized linear programming technique and have an expected linear solution time [1]. The convex
level set criterion for solution uniqueness of generalized linear programs can be applied to these
smoothing techniques, and they determine the convexity of the level sets for a number of standard
mesh quality measures in both two and three dimensions.
All the techniques mentioned previously optimize the mesh according to element geometry.
Bank and Smith [5] propose two smoothing techniques to minimize the error for finite element
solutions computed with triangular elements with linear basis functions. Both methods use a damped
Newton's method to minimize (1) interpolation error or (2) a posteriori error estimates for an elliptic
partial differential equation. The quantity minimized in these cases requires the computation of
approximate second derivatives for the solution on each element as well as the shape function q(t)
for triangular elements mentioned previously.
2.3. Combined Laplacian and Optimization-based Smoothing. Both Shephard and
Georges [23] and Freitag and Ollivier-Gooch [10] present experimental results that demonstrate
the effectiveness of combining a variant of Laplacian smoothing with their respective optimization-based
procedures. The variant of Laplacian smoothing used by Shephard and Georges allows the
vertex to move to the centroid of the incident vertices only if the worst element maintains a shape
measure q(t) above a fixed limit. Otherwise, the line connecting the centroid and the initial position
is bisected, and the bisection point is used as the target position. Freitag and Ollivier-Gooch accept
the Laplacian step whenever the local submesh is improved. In both cases, the Laplacian smoothing
step is followed by optimization-based smoothing for only the worst elements. Experiments in [10]
showed that using optimization-based smoothing when the minimum angle (dihedral angle in 3D)
was less than 30 degrees in two dimensions and 15 degrees in three dimensions significantly improves
the meshes at a small computational cost. These results also showed that more than three sweeps
of the mesh offer minimal improvements for the meshes tested.
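The combined strategy can be summarised by the sketch below, where the quality, Laplacian, and optimization routines are passed in as callables (the acceptance rules follow the description above; everything else is an assumption of the example):

import math

def combined_smooth(x_v, quality, laplacian, optimize, threshold=math.radians(30)):
    # quality(x): local submesh quality (e.g., the minimum incident angle)
    #             with the free vertex at x;
    # laplacian(): the centroid of the neighboring vertices;
    # optimize(x): the position returned by the optimization-based smoother.
    candidate = laplacian()
    if quality(candidate) > quality(x_v):
        x_v = candidate  # accept the Laplacian step when it improves the submesh
    if quality(x_v) < threshold:  # 30 degrees in 2D, 15 degrees in 3D
        x_v = optimize(x_v)
    return x_v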
2.4. Nonuniqueness of Smoothed Vertex Location. As mentioned earlier, the locations of
the vertices in the mesh after a pass of smoothing are not unique but are determined by the ordering
in which the vertices are smoothed. An example of this nonuniqueness is shown in Figure 2.4 for a
simple two-dimensional mesh. The original mesh is shown on the left, where v and q are the vertices
to be smoothed and the position of each vertex is given. In the top series of meshes, the vertex q
is relocated by using optimization-based smoothing as described in [11] followed by adjustment of
the vertex v as shown by the highlighted submeshes in the middle and rightmost meshes. In the
bottom series of meshes, the vertices are smoothed in reversed order, and the resulting final meshes
are considerably different. For each of these final meshes, the resulting minimum, maximum, and
average angles for the two orderings are presented in Table 2.1. The higher-quality mesh is obtained
by moving the vertex q before moving the vertex v.
Fig. 2.4. The order in which vertices are smoothed can significantly affect the final mesh quality. These series
of meshes show the intermediate and final meshes when the vertex q is smoothed followed by the vertex v (top) and
vice versa (bottom).
Table 2.1
Minimum, maximum, and average angles for the mesh shown in Figure 2.4 for a single pass of optimization-based
smoothing with two different orderings of vertices
Ordering        Min. Angle   Max. Angle   Avg. Angle
Original Mesh   1.736        .            .
v then q        10.445       .            .
q then v        19.038       .            .
In general, vertices incident on poor-quality elements are the most likely to significantly change
location during the smoothing process. These large changes can adversely affect the quality of
neighboring submeshes, but the effects can be mitigated by subsequent adjustment of the neighboring
vertices. Therefore, an ordering of vertices that would tend to be more effective than a random
ordering would be to smooth the vertices incident on the elements with the lowest quality first.
3. A Parallel Mesh-Smoothing Algorithm. In this section we present a framework for the
correct parallel implementation of any of the local mesh-smoothing algorithms presented in the
preceding section. The parallel smoothing algorithm is formulated within the context of the graph
of the mesh, which we define as follows. Let V = {v_1, ..., v_n} be the set of vertices in the mesh
and T = {t_1, ..., t_m} be the set of mesh elements, either triangles or tetrahedra. Let G = (V, E)
be the graph associated with the mesh, where E = {(v_i, v_j) : v_i and v_j are vertices of a common element t_k}.
We first consider the problem of coordinating information about the mesh between processors to
ensure that the mesh remains valid during smoothing. An invalid mesh can be created by smoothing
two adjacent vertices simultaneously on different processors. Consider the triangulation shown in the
first mesh in Figure 3.1 in which the vertices q and v are to be smoothed and are owned by different
processors. The new locations of the vertices after simultaneously being smoothed are indicated in
the following mesh by v 0 and q 0 . These positions are determined assuming the locations of q and v
are fixed to those given in the first mesh. The shaded region in the second mesh shows the inverted
triangle that was created by the new locations v 0 and q 0 .
We define correct execution of the parallel algorithm as follows. Let the quality of an initial,
valid mesh T 0 be q 0 . The parallel algorithm has executed correctly if the smoothed mesh, T 1 , is
valid and the quality q 1 is greater than or equal to q 0 . Note that we do not require that the quality
of a mesh smoothed in parallel equal the quality of the same mesh smoothed in serial, because a
different vertex ordering may be used.
Fig. 3.1. An example of an invalid mesh created when adjacent vertices on different processors are smoothed
simultaneously. The inverted triangle is indicated in the shaded region.
Because elements not incident on v are not affected by a change in location of vertex v, we can
ensure the correct execution of the parallel algorithm by preventing two vertices that are adjacent
in the mesh, but on different processors, from being simultaneously smoothed. We define an independent
set of vertices to be a subset of the mesh vertices, I, such that v \in I implies adj(v) \cap I = \emptyset. The
approach for the parallel smoothing algorithm is to (1) select an independent set of mesh vertices,
(2) smooth these vertices in parallel, and (3) notify their neighbors of their new position so that
the procedure can be repeated with a new independent set. This approach avoids synchronization
problems between processors. We first formulate the algorithm using a Parallel Random Access
Machine (PRAM) computational model for which we can prove algorithm correctness and give a
parallel runtime bound. We then formulate a practical variant for distributed memory architectures.
3.1. The PRAM Computational Model. For the PRAM computational model, we assume
that processors communicate through a common shared memory. The essential aspect of the PRAM
model used in our analysis is that a processor can access data computed on another processor and
stored in shared memory in constant time.
Using this model, we assume that we have as many processors as we have vertices to be smoothed
and that vertex v i is assigned to processor p i . The parallel algorithm that gives the correct implementation
of mesh smoothing is given in Figure 3.2.
The minimum number of steps required for correct execution of the parallel PRAM algorithm
is given by Lemma 1.
Lemma 1 The number of steps required to guarantee correct execution of the smoothing algorithm
is at least |\sigma_opt|, where \sigma_opt is the coloring of G = (V, E) such that |\sigma_opt| is minimal among
all colorings of G.
Proof. In the parallel smoothing algorithm, a set of vertices, I, is smoothed at each time step.
If, for any two vertices v_j and v_k in I, the edge e_jk exists, then two neighboring vertices will be smoothed
Let S_0 be the initial set of vertices marked for smoothing; k = 0
While S_k is nonempty do
    Choose an independent set I_k from S_k
    For each v in I_k do (in parallel)
        x_v <- smooth(x_v, x_adj(v))
    Enddo
    S_{k+1} = S_k \ I_k; k = k + 1
Endwhile
Fig. 3.2. The PRAM parallel smoothing algorithm.
simultaneously; as shown earlier, this may result in an invalid mesh or a mesh with lesser quality
than that of the initial mesh. Guaranteeing correct execution requires that I be an independent set.
The algorithm requires that a disjoint sequence of such independent sets, I_1, I_2, ..., I_m, be found
such that ∪_j I_j = S_0; thus the parallel smoothing algorithm requires m steps. Such a sequence
of independent sets is an m-coloring of G. By definition, m must be at least |σ_opt|.
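For concreteness, the following is a minimal serial Python sketch of the loop of Figure 3.2; the helper names (adj, smooth_vertex, coords) are hypothetical, and any of the local smoothers of Section 2 can be plugged in. Under the PRAM model the inner loop over I would run in parallel, one vertex per processor.

    # Serial sketch of the independent-set smoothing loop of Figure 3.2.
    # `adj` maps each vertex to its neighbors; `smooth_vertex` is any local
    # smoother; `coords` maps vertices to positions (all names hypothetical).
    def smooth_by_independent_sets(vertices, adj, smooth_vertex, coords):
        S = set(vertices)                     # vertices still marked for smoothing
        while S:
            I, blocked = set(), set()         # greedily extract an independent set
            for v in S:
                if v not in blocked:
                    I.add(v)
                    blocked.update(adj[v])    # neighbors of I may not join I
            new_pos = {v: smooth_vertex(v, coords) for v in I}   # "in parallel"
            coords.update(new_pos)            # notify neighbors of new positions
            S -= I                            # here each vertex is smoothed once
        return coords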
Determining this optimal coloring for a general graph is known to be an NP-hard problem [13],
but effective heuristics for efficiently choosing the independent sets in parallel have been developed
and implemented. We now describe two such heuristic approaches: (1) a vertex coloring method,
and (2) a randomization scheme. The coloring method assumes that we have a coloring of the
vertices, σ, that is not necessarily optimal, but is a labeling such that σ(v) ≠ σ(u) if u ∈ adj(v).
Clearly, vertices of the same color constitute an independent set and can be used for this purpose
in the parallel algorithm. If the maximum degree of the graph is Δ, then the number of colors
found by these coloring heuristics is bounded above by Δ + 1. The second approach is based on
the assignment of a distinct random number, ρ(v), to each vertex. At each step in the algorithm,
we choose an independent set I from S according to the rule given in [17] based on [21]: v ∈ I if
ρ(v) > ρ(u) for every u ∈ adj(v) ∩ S.
The coloring approach yields a running time bound independent of the size of the graph being
smoothed; however, the efficient parallel computation of this coloring requires the use of the randomized
algorithm [17]. Therefore, the coloring approach is cost effective only if it is used enough times
to amortize the initial expense of computing the coloring or is maintained for some other purpose.
Because we typically use a small number of smoothing passes, the randomized approach is used in
the experimental results presented in the next section. In addition, the randomized approach is more
memory efficient because the color of each vertex, σ(v), must be stored, whereas the random numbers,
ρ(v), can be computed when needed. For practical implementation, we use a pseudo-random
number generator to determine ρ(v) based solely on the global number of the vertex.
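As an illustration, the sketch below derives a deterministic pseudo-random key from the global vertex number alone and applies the selection rule quoted above. The hash-based key function is an assumption for the sketch, not the generator used in the experiments.

    import hashlib

    # Deterministic pseudo-random key from the global vertex number; no
    # communication of random numbers is needed since every processor can
    # recompute rho(v) locally.
    def rho(v):
        return int.from_bytes(hashlib.sha256(str(v).encode()).digest()[:8], "big")

    # Luby-style rule: v joins I when its key beats those of all neighbors
    # still marked for smoothing (distinct hashes assumed for the sketch).
    def independent_set(S, adj):
        return {v for v in S
                if all(u not in S or rho(v) > rho(u) for u in adj[v])}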
To evaluate the parallel runtime of the PRAM computational model, we assume that the mesh
has been generated for the finite element or finite volume solution of a physical model. The graph
of these meshes is local, and the edges connect vertices that are physically close to each other. In
general, the maximum degree of any vertex in such a mesh is bounded independent of the size of
the system. Given the local nature of the graph, and the assumption that each vertex is assigned
a unique independent random number ρ(v), we have that the expected number of independent sets
generated by the while loop in Figure 3.2 is bounded in expectation by
(3.1)    O(log n / log log n),
where n is the number of vertices in the system. This bound is a consequence of Corollary 3.5 in [17].
The maximum time required to smooth a vertex, t_max, is also bounded, because the maximum
vertex degree is bounded, and we have the following expected runtime bound.
Lemma 2. The algorithm in Figure 3.2 has an expected runtime under the PRAM computational
model of O(t_max log S_0 / log log S_0), where S_0 is the number of vertices initially marked for smoothing.
Proof. Under the assumptions of the PRAM computational model, the running time of the
parallel smoothing algorithm is proportional to the number of synchronized steps multiplied by the
maximum time required to smooth a local submesh at step k. The upper bound on this time is
given by the maximum time t max to smooth any local submesh. For this algorithm, the number of
synchronization steps is equal to the number of independent sets chosen, and from (3.1) the expected
number of these is O(log S_0 / log log S_0).
3.2. Practical Implementation on Distributed Memory Computers. For practical implementation
on a distributed memory computer, we assume that the number of vertices is far greater
than the number of processors, and we modify the PRAM algorithm accordingly. We assume that
vertices are partitioned into disjoint subsets V_j and distributed across the processors so that processor
p_j owns V_j. Based on the partitioning of V, the elements of the mesh are also distributed to the
processors of the parallel computer.
Given that each processor owns a set of vertices rather than just one, as was the case in the
PRAM model, we choose the independent sets according to a slightly different rule from that used in
Figure 3.2. The independent set I from S is chosen according to the rule: v_i ∈ I if, for each incident
vertex v_j ∈ adj(v_i), we have that v_j ∉ S, or ρ(v_i) > ρ(v_j), or v_i and v_j are owned by the same
processor. This modified rule allows two vertices
that are owned by the same processor to be smoothed in the same step.
Because the vertex locations are distributed across many processors that do not share a common
memory, we must add a communication step to the algorithm given in Figure 3.2. This communication
is asynchronous, requiring no global synchronization. 2 After each independent set is smoothed,
we communicate the new vertex locations to processors containing vertices in adj(I) before smoothing
the next independent set of vertices. We now show that this additional step ensures that the
practical algorithm avoids the synchronization problems mentioned at the beginning of the section
and that incident vertex information is correct at each step in the algorithm.
Lemma 3 Vertex information is correctly updated during the execution of the parallel smoothing
algorithm.
Proof. The proof is by induction. We assume that the initial incident vertex location is correct
and that the incident vertex location is correct following step k − 1. If the position of vertex v_i is
adjusted at step k, by the properties of the independent set none of its incident vertices v j are being
adjusted. Thus, following step k of the parallel smoothing algorithm the incident vertices can be
notified of the repositioning of vertex v i and given the new location.
We note that finding I requires no processor communication because each processor stores
incident vertex information. Communication of the random numbers is not necessary if the seed given
the pseudo-random number generator to determine ae(v i ) is based solely on the global numbering i.
Thus, the only communication required in the practical algorithm is the notification of new vertex
positions to processors containing nonlocal incident vertices and the global reduction required to
check whether S_k is empty.
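A minimal sketch of the modified selection rule, assuming a hypothetical owner map from vertices to processors and the key function rho from the earlier sketch, is:

    # Distributed-memory variant of the selection rule: a vertex enters I
    # unless some neighbor on *another* processor is still unsmoothed and
    # outranks it. `owner` and `rho` are illustrative names.
    def independent_set_distributed(S, adj, owner, rho):
        return {v for v in S
                if all(u not in S or owner[u] == owner[v] or rho(v) > rho(u)
                       for u in adj[v])}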
4. Experimental Results. To illustrate the performance of the parallel smoothing algorithm
in both two and three dimensions, we consider two finite-element applications: (1) a scalar Poisson
problem with a Gaussian point charge source on a circular domain (PCHARGE), and (2) a linear
elasticity problem (ELASTIC). The upper right quadrant of the domain for the two-dimensional
elasticity problem is shown in Figure 4.1. The three-dimensional test cases are both solved on a
regularly shaped, six-sided solid. The meshes for these problems are generated from a coarse mesh
(Footnote 2: Global synchronization is expensive on practical distributed memory architectures.)
by adaptive refinement, where elements are refined by Rivara's bisection algorithm. The refinement
indicator function is based on local energy norm estimates. The parallel adaptive refinement algorithm
and the test problems are described in more detail in [19]. The meshes are partitioned by
using the unbalanced recursive bisection (URB) algorithm, which strives to minimize the number of
processor neighbors and ensure that vertices are equally distributed [19].
For each case we compare two different smoothing approaches: one using the optimization-based
smoothing approach (Optimization-based) and one using a combined Laplacian/optimization
technique (Combined) [10]. For the combined approach, we use Laplacian smoothing as a first step
and accept the new grid point position whenever the quality of the incident elements is improved.
If the quality of the incident elements exceeds a user-defined threshold (30° is used in
[10]), the algorithm terminates; otherwise, optimization-based smoothing is performed in an attempt
to further improve the mesh. The quality measure used in all cases is to maximize the minimum
sine of the angles (dihedral angles in 3D), which eliminates extremal angles near 0° and 180°. Of
the measures considered in [10] (max/min angle and max/min cosine), this measure produced the
highest quality meshes at about the same computational cost. For all test cases considered in this
paper, we perform two smoothing sweeps over the mesh grid points. Vertices are maintained in a
queue and are processed in order.
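For concreteness, a sketch of the 2D version of this quality measure follows; the function and its vertex representation are illustrative, not the code used in the experiments.

    import math

    # Minimum sine of a triangle's angles: near-0 and near-180 degree angles
    # both score close to 0, an equilateral triangle scores sin(60) ~ 0.866.
    # p, q, r are (x, y) pairs; a degenerate triangle scores 0.
    def min_sine_quality(p, q, r):
        def angle(a, b, c):               # interior angle at vertex a
            u = (b[0] - a[0], b[1] - a[1])
            v = (c[0] - a[0], c[1] - a[1])
            nu, nv = math.hypot(*u), math.hypot(*v)
            if nu == 0 or nv == 0:
                return 0.0
            cosang = (u[0] * v[0] + u[1] * v[1]) / (nu * nv)
            return math.acos(max(-1.0, min(1.0, cosang)))
        return min(math.sin(t) for t in
                   (angle(p, q, r), angle(q, r, p), angle(r, p, q)))

The smoothers then move a vertex so as to maximize the minimum of this quality over its incident elements.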
To illustrate the qualitative effect of mesh smoothing, we present in Figure 4.1 results for the
optimization-based approach described in [11] for the two-dimensional elasticity problem. The mesh
on the left shows the initial mesh after a series of refinement steps. The global minimum angle in
this mesh is 11.3° and the average minimum element angle is 35.7°. The initial edges from the coarse
mesh are still clearly evident after many levels of refinement. By contrast, the mesh on the right
was obtained by smoothing the grid point locations after each refinement step. The bisection lines
are no longer evident and the elements in the mesh are less distorted. The global minimum angle
in this mesh is 21.7° and the average minimum element angle is 41.1°.
Fig. 4.1. Typical smoothing results for the optimization-based approach on the two-dimensional elasticity prob-
lem. The mesh on the left shows refinement without smoothing, and the mesh on the right shows the results of
interleaving smoothing with refinement.
The experiments described in this section are designed to examine the scalability of the parallel
smoothing algorithm. Therefore, for each problem we have adjusted the element error tolerances
so that the number of vertices per processor remains roughly constant as the number of processors
is varied. To show the scalability of both the two- and three-dimensional algorithms, we ran all
four test cases on 1-64 processors of an IBM SP system with SP3 thin nodes and a TB3 switch.
To examine the effect of different architectures on the algorithm, we also ran the two-dimensional
test cases on a network of 12 SPARC Ultras connected via an ATM network. Message passing
was accomplished by using the MPICH implementation of MPI, in particular, the p4 device on the
SPARC Ultra ATM network and the MPL device on the IBM SP [14].
Table 4.1
Smoothing results for the 2D problems for the IBM SP

                                Optimization-based           Combined
Number   Max.     Total      Max. Smooth  Vtx Smoothed   Max. Smooth  Vtx Smoothed
of       Local    Number     Time (sec)   per Second     Time (sec)   per Second
Procs.   Vtx      Vtx
48       10384    498379     12.5         830.7          5.71         1818
48       4198     201392     4.10         1023           1.60         2623
In Table 4.1 we give the experimental results for both the optimization-based and the combined
smoothing techniques for the two-dimensional test cases on the IBM SP. For each of the different
numbers of processors used, we show the maximum number of vertices assigned to a processor and
the total number of vertices in the final mesh. The maximum smoothing time is the longest time
taken by a processor to perform two smoothing passes through all the mesh vertices. The vertices
smoothed per second is the average rate per processor that vertices are smoothed; if the smoothing
algorithm scaled perfectly, these numbers would remain constant.
As expected, the combined approach obtains a much higher average rate of smoothing for both
applications because the more computationally expensive optimization procedure is performed for
only a subset of the mesh vertices. The average smoothing rates of the two applications are different
because the amount of work required to smooth the two meshes is different. For the point charge
problem, the average vertex smoothing rate slowly decreases as the number of processors increases for
both smoothing techniques. For the elasticity problem, the quality of the meshes varies significantly
as the number of processors changes, resulting in a nonmonotonic change in the smoothing rate for the
combined approach. For example, on one processor 16.5 percent of the vertices require optimization-based
smoothing, whereas on four processors only 10 percent require optimization-based smoothing.
The number of vertices assigned to each processor is roughly equal, thereby implying that the
variation in the smoothing rate is primarily due to two factors: (1) an increasingly unbalanced load
caused by the varying computational cost required to smooth each local submesh; and (2) increased
communication costs and implementation overhead associated with the parallel smoothing algorithm.
Let T i be the time required to compute the new locations of the vertices owned by processor P i , and
let O i be the time associated with communication costs and implementation overhead on processor
P_i. The time T_i should be thought of as the time required to smooth the vertices once the local
subproblems have been constructed and does not include any overhead associated with determining
the adjacency set of the vertex. To quantify these effects on the average smoothing rate, we define
the following:
• Work Load Imbalance Ratio: the maximum time required to compute the new locations of
the vertices on a processor divided by the average time,
    I = max_i T_i / ((1/P) Σ_{i=1}^{P} T_i).
• Efficiency: the maximum amount of time required to compute the new locations of the
vertices on a processor divided by the maximum time including overhead costs,
    E = [ max_i T_i / max_i (T_i + O_i) ] × 100.
We note that the implementation overhead costs O i include such computations as setting up
the adjacency information for the local submeshes and determining independent sets. Thus,
even for the sequential case, there is overhead associated with global computations, and
the efficiency should be thought of as a percentage of the time solving the local smoothing
problems. Therefore, a good parallel implementation will have nearly constant efficiency,
indicating that little additional overhead is associated with parallelism.
For these quantities, a value of I = 1.0 implies that the processors are perfectly balanced, and a
value of E = 100 implies that no overhead costs are associated with the sequential or parallel
algorithm.
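Given measured per-processor arrays of smoothing times and overhead times, the two diagnostics can be computed as in the following sketch (the array names T and O are hypothetical):

    # Work load imbalance ratio I and parallel efficiency E (percent),
    # computed from per-processor smoothing times T[i] and overheads O[i].
    def imbalance_ratio(T):
        return max(T) / (sum(T) / len(T))

    def efficiency(T, O):
        return 100.0 * max(T) / max(t + o for t, o in zip(T, O))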
The work load imbalance ratios and parallel efficiencies corresponding to the test cases in Table
4.1 are given in Table 4.2. As the number of processors increases, the work load stays roughly
balanced for 1-8 processors and then becomes increasingly unbalanced. This is especially true for
the combined approach where the work load imbalance ratio increases to 1.7 on 64 processors for both
test cases. The larger imbalance associated with the combined approach results from the fact that
some processors are required to do more optimization-based smoothing than others. The parallel
efficiency calculation takes this imbalance into account, and the efficiencies for the optimization-based
and combined approaches, EO and EC , remain roughly constant with respect to P . We conclude
that the parallel algorithms scale well despite the increasing imbalance in work load. In general,
the efficiency of the optimization-based approach is higher than that of the combined approach
because the higher computational cost of each smoothing step better amortizes the overhead costs.
The numbers are not monotonic because of the varying meshes and corresponding work loads for
different numbers of processors.
Performance of the parallel smoothing algorithm could be improved by repartitioning the mesh
to account for the imbalance in the work load. However, this approach is not practical in most
applications for which smoothing is only a small portion of the overall solution process. It would not
be computationally efficient to repartition the mesh just for mesh smoothing. The efficiency results
show that the parallel algorithm is performing well even though the partitioning is determined for
other aspects of the solution process.
Table 4.2
Work load imbalance and parallel efficiency of the parallel smoothing algorithm for the two-dimensional test
cases on the IBM SP

                 Optimization-based                 Combined
Number   Max.     Max.                      Max.     Max.
of       Total    Smooth   I_O    E_O%      Total    Smooth   I_C    E_C%
Procs.   Time     Time                      Time     Time
4        8.26     7.8      1.1    94.4      2.87     2.4      1.2    83.6
8        7.72     7.3      1.1    94.5      2.78     2.3      1.1    82.7
48       12.5     12       1.4    96.0      5.71     5.0      1.7    87.5
4        2.66     2.5      1.0    93.9      1.15     .94      1.0    81.7
8        2.67     2.4      1.0    89.8      1.24     .98      1.2    79.0
48       4.10     3.7      1.5    90.2      1.60     1.4      1.8    87.5
In Table 4.3, we give the number of vertices and average vertex smoothing rates for both smoothing
techniques applied to the three-dimensional application problems. The cost of smoothing in
three-dimensions is roughly ten times the two-dimensional cost. This increase in cost results from
a roughly fivefold increase in the number of function evaluations required for each vertex due to
the higher vertex degree. In addition, each function evaluation is approximately twice as expensive
in 3D as in 2D. The same trends that are evident in the two-dimensional test cases are apparent
in the three-dimensional test cases. In particular, the combined approach is roughly two to three
times faster than the optimization-based approach. The average smoothing rates slowly decrease as
a function of the number of processors. The work load imbalance and efficiency results are given
in Table 4.4. Again we see that the combined approach tends to produce a more imbalanced load
as the number of processors increases and that the optimization-based smoothing approach is more
efficient than the combined approach because of the higher computational cost. For optimization-based
smoothing the efficiency is a slowly decreasing function of the number of processors for all the
test cases considered here. In contrast, the efficiency results for the combined approach are slightly
more variable because of differing ratios of optimization-based smoothing to Laplacian smoothing.
The roughly constant efficiencies demonstrate that the algorithm scales well despite the imbalance
in the work load.
In Figure 4.2, we graphically summarize the results for the two- and three-dimensional test cases
on the IBM SP and show the average rate of vertices smoothed and the efficiency for each test set
and smoothing technique.
We now show that the parallel algorithm achieves roughly the same results whether run in parallel
or sequentially for the two-dimensional elasticity problem. In Table 4.5 we show test case results
for a single mesh containing 76118 vertices and an initial minimum angle of 5.90° on 1 to 32 processors.
Both smoothing techniques improved the minimum angle to roughly 13°. The column labeled
Table 4.3
Smoothing results for the 3D problems for the IBM SP

                                Optimization-based           Combined
Number   Max.     Total      Max. Smooth  Vtx Smoothed   Max. Smooth  Vtx Smoothed
of       Local    Number     Time (sec)   per Second     Time (sec)   per Second
Procs.   Vtx      Vtx
48       6377     305414     72.81        87.58          25.32        251.9
48       4121     196332     36.26        113.6          17.16        240.1
Time/Call gives the maximum average time to smooth each local submesh across the processors.
This time is constant for both techniques on 1-8 processors. The numbers slightly increase on 16 and
32 processors because of an increase in work on one of the processors. This work increase is clearly
reflected for the combined approach by the maximum percentage of cells that require optimization
on a processor. This percentage increases from approximately 10 percent on 1-8 processors to 14.21
and 20.79 percent on 16 and 32 processors, respectively.
Finally, we show that the parallel smoothing algorithm is scalable for the two-dimensional
application problems on a switched ATM-connected network of SPARC Ultras. In Table 4.6 we show
the number of vertices and average smoothing rates for 1-12 processors. The average rate results are
more sporadic for the ATM network than they were for the IBM SP, but the same general trends are
evident. In particular, the parallel smoothing algorithm effectively handles the higher message
startup latencies and lower bandwidth of the ATM network and delivers scalable performance.
5. Conclusions. In this paper we have presented a parallel algorithm for a class of local mesh
smoothing techniques. Numerical experiments were performed for two of the algorithms mentioned
in Section 2, and the parallel framework presented here is suitable for use with all of those techniques.
Theoretical results show that the parallel algorithm has a provably fast parallel runtime bound under
a PRAM computational model. We presented a variant of the PRAM algorithm implemented on
distributed memory computers, and proved its correctness. Numerical experiments were performed
for an optimization-based smoothing technique and a combined Laplacian/optimization-based technique
on two very different distributed memory architectures. These results showed that the parallel
smoothing algorithm scales very well despite the variance in processor work load associated with
smoothing their individual submeshes.
Table 4.4
Work load imbalance ratios and efficiency of the optimization-based and combined smoothing techniques for the
three-dimensional test cases on the IBM SP

                 Optimization-based                 Combined
Number   Max.     Max.                      Max.     Max.
of       Total    Smooth   I_O    E_O%      Total    Smooth   I_C    E_C%
Procs.   Time     Time                      Time     Time
48       72.81    69       1.5    94.7      25.32    20       1.3    78.8
         28       1.1      86.3             13.72    11       1.3    80.1
48       36.26
Table 4.5
Mesh quality and smoothing information for the parallel algorithms on the IBM SP

                           Optimization            Combined
Number      Pre-smoothed   Min     Time/Call   Min     Time/Call   Percent
of          Min Angle      Angle   (ms)        Angle   (ms)        Optimized
Processors
Acknowledgments. The work of the first and third authors is supported by the Mathematical,
Information, and Computational Sciences Division subprogram of the Office of Computational and
Technology Research, U.S. Department of Energy, under Contract W-31-109-Eng-38. The work of
the second author is supported by National Science Foundation grants ASC-9501583, CDA-9529459,
and ASC-9411394.
--R
Optimal point placement for mesh smoothing
A method of the improvement of 3D solid
On the angle condition in the finite element method
PLTMG: A Software Package for Solving Elliptic Partial Differential Equations
Mesh smoothing using a posteriori error estimates
Optismoothing: An optimization-driven approach to mesh smoothing
Optimization of tetrahedral meshes
Incremental topological flipping works for regular triangulations
Communications and Applied Numerical Methods
A comparison of tetrahedral mesh improvement techniques
An efficient parallel algorithm for mesh smoothing
Condition of finite element matrices generated from nonuniform meshes
Computers and Intractability
Portable Parallel Programming with the Message-Passing Interface
A parallel graph coloring heuristic
A new mesh generation scheme for arbitrary planar domains
A simple parallel algorithm for the maximal independent set problem
A constrained optimization approach to finite element mesh smoothing
Automatic three-dimensional mesh generation by the finite octree technique
--TR
--CTR
Qiang Du , Max Gunzburger, Grid generation and optimization based on centroidal Voronoi tessellations, Applied Mathematics and Computation, v.133 n.2-3, p.591-607, 15 December 2002
Henning Biermann , Ioana Martin , Fausto Bernardini , Denis Zorin, Cut-and-paste editing of multiresolution surfaces, ACM Transactions on Graphics (TOG), v.21 n.3, July 2002
ShuLi Sun , JianFei Liu, An efficient optimization procedure for tetrahedral meshes by chaos search algorithm, Journal of Computer Science and Technology, v.18 n.6, p.796-803, November | finite elements;parallel algorithms;mesh smoothing;parallel computing;unstructured meshes |
327833 | An Efficient Algorithm for a Bounded Errors-in-Variables Model. | We pose and solve a parameter estimation problem in the presence of bounded data uncertainties. The problem involves a minimization step and admits a closed form solution in terms of the positive root of a secular equation. | Introduction
Parameter estimation in the presence of data uncertainties
is a problem of considerable practical importance, and many estimators have been
proposed in the literature with the intent of handling modeling errors and measurement
noise. Among the most notable is the total least-squares (TLS) method [1, 2, 3, 4],
also known as orthogonal regression or errors-in-variables method in statistics and
system identification [5]. In contrast to the standard least-squares problem, the TLS
formulation allows for errors in the data matrix. Its performance may degrade in
some situations where the effect of noise and uncertainties can be unnecessarily over-
emphasized. This may lead to overly conservative results.
Assume A ∈ R^(m×n) is a given full rank matrix with m ≥ n, b ∈ R^m is a given
vector, and consider the problem of solving the inconsistent linear system Ax̂ ≈ b
in the least-squares sense. The TLS solution assumes data uncertainties in A and
proceeds to correct A and b by replacing them by their projections, Â and b̂, onto a
specific subspace, and by solving the consistent linear system of equations Âx̂ = b̂.
The spectral norm of the correction (A − Â) in the TLS solution is bounded by the
smallest singular value of [A b]. While this norm might be small for vectors b that
are close enough to the range space of A, it need not always be so. In other words,
the TLS solution may lead to situations in which the correction in A is unnecessarily
large. Consider, for example, a situation in which the uncertainties in A are very
small and, say, A is almost known exactly. Assume further that b is far from the
range space of A. In this case, it is not difficult to visualize that the TLS solution will
need to modify A and b considerably in order to make the corrected system consistent. We
may therefore end up with an overly corrected
approximant for A, despite the fact that A is almost exact.
These facts motivate us to introduce a new parameter estimation formulation
with a bound on the size of the allowable correction to A. The solution of the new
formulation turns out to involve the minimization of a cost function in an "indefinite"
metric, in a way that is similar to more recent works on robust (or H∞) estimation
and filtering (e.g., [6, 7, 8, 9]). However, the cost function considered in our work is
more complex and, contrary to robust estimation where no prior bounds are imposed
on the size of the disturbances, the problem of this paper shows how to solve the
resulting optimization problem in the presence of such constraints. A "closed" form
solution to the new optimization problem is obtained in terms of the positive root of
a secular equation.
The solution method proposed in this paper proceeds by first providing a geometric
interpretation of the new optimization problem, followed by an algebraic derivation
that establishes that the optimal solution can in fact be obtained by solving a related
"indefinite" regularized problem. The regression parameter of the regularization step
is further shown to be obtained from the positive root of a secular equation. The solution
involves an SVD step and its computational complexity amounts to O(mn^2 + n^3),
where n is the smaller matrix dimension. A summary of the problem and its solution
is provided in Sec. 4.7 at the end of this paper.
2. Problem Statement. Let A ∈ R^(m×n) be a given matrix with m ≥ n and let
b ∈ R^m be a given nonzero vector, which are assumed to be linearly related via an
unknown vector of parameters x ∈ R^n,
(2.1)    b = Ax + v.
The vector v explains the mismatch between Ax and the given vector (or measurement)
b.
We assume that the "true" coefficient matrix is A + δA, and that we only know
an upper bound on the perturbation δA:
(2.2)    ||δA||_2 ≤ η,
with η being known, and where the notation || · ||_2 denotes the 2-induced norm of a
matrix argument (i.e., its maximum singular value) or the Euclidean norm of a vector
argument. We pose the following optimization problem.
Problem 1. Given A ∈ R^(m×n), with m ≥ n, b ∈ R^m, and a nonnegative real
number η, determine, if possible, an x̂ that solves
(2.3)    min_{x̂}  min_{||δA||_2 ≤ η}  ||(A + δA)x̂ − b||_2.
It turns out that the existence of a unique solution to this problem will require
a fundamental condition on the data (A, b, η), which we describe further ahead in
Lemma 3.1. When the condition is violated, the problem will become degenerate.
In fact, such existence and uniqueness conditions also arise in other formulations of
estimation problems (such as the TLS and H∞ problems, which will be shown later
to have some relation to the above optimization problem). In the H∞ context, for
instance, similar fundamental conditions arise, which when violated indicate that the
problem does not have a meaningful solution (see, e.g., [6, 7, 8, 9]).
2.1. Intuition and Explanation. Before discussing the solution of the optimization
problem we formulated above, it will be helpful to gain some intuition into
its significance.
Intuitively, the above formulation corresponds to "choosing" a perturbation δA,
within the bounded region, that would allow us to best predict the right-hand side
b from the column span of (A + δA). Comparing with the total-least-squares (TLS)
formulation, we see that in TLS there is no a priori bound on the size of the
allowable perturbation δA. Still, the TLS solution finds the "smallest" δA (in a
Frobenius norm sense) that would allow to estimate b from the column span of (A +
δA), viz., it solves the following problem [3]:
    min_{δA, δb}  ||[δA  δb]||_F  subject to  (b + δb) ∈ Range(A + δA).
Nevertheless, although small in a certain sense, the resulting correction ffiA need not
satisfy an a priori bound on its size. The problem we formulated above explicitly
incorporates a bound on the size of the allowable perturbations. We may further add
that we have addressed a related estimation problem in the earlier work [10], where
we have posed and solved a min-max optimization problem; it allows us to guarantee
optimal performance in a worst-case scenario. Further discussion, from a geometric
point of view, of this related problem and others, along with examples of applications
in image processing, communications, and control, can be found in [11].
Returning to (2.3), we depict the situation in Fig. 2.1. Any particular choice for
x̂ would lead to many residual norms ||(A + δA)x̂ − b||_2,
one for each possible choice of δA. A second choice for x̂ would lead to other residual
norms, the minimum value of which need not be the same as the first choice. We
want to choose an estimate x̂ that minimizes the minimum possible residual norm.
Fig. 2.1. Two illustrative residual-norm curves.
2.2. A Geometric Interpretation. The optimization problem (2.3) admits
an interesting geometric formulation that highlights some of the issues involved in its
solution. We explain this by considering a scalar example. For the vector case see
[11].
Assume we have a unit-norm vector b, ||b||_2 = 1, and that A is simply a column
vector, say a, with η ≠ 0. Now problem (2.3) becomes
    min_{x}  min_{||δa||_2 ≤ η}  ||(a + δa) x − b||_2.
This situation is depicted in Fig. 2.2. The vectors a and b are indicated in thick
black lines. The vector a is shown in the horizontal direction and a circle of radius η
around its vertex indicates the set of all possible vertices for a + δa.
Fig. 2.2. Geometric construction of the solution for a simple example.
For any " x that we pick, the set f(a+ ffia)"xg describes a disc of center a"x and radius
j"x. This is indicated in the figure by the largest rightmost circle, which corresponds
to a choice of a positive "
x that is larger than one. The vector in f(a + ffia)"xg that
is the closest to b is the one obtained by drawing a line from b through the center
of the rightmost circle. The intersection of this line with the circle defines a residual
vector r 3 whose norm is the smallest among all possible residual vectors in the set
ffia)"xg.
Likewise, if we draw a line from b that passes through the vertex of a (which is
the center of the leftmost circle), it will intersect the circle at a point that defines a
residual vector r_1. This residual will have the smallest norm among all residuals that
correspond to the particular choice x̂ = 1.
More generally, for any x̂ that we pick, it will determine a circle and the corresponding
smallest residual is obtained by finding the closest point on the circle to b.
This is the point where the line that passes through b and the center of the circle
intersects the circle on the side closer to b.
We need to pick an x̂ that minimizes the smallest residual norm. The claim is that
we need to proceed as follows: we drop a perpendicular from b to the upper tangent
line denoted by ℓ_2. This perpendicular intersects the horizontal line in a point where
we draw a new circle (the middle circle) that is tangent to both ℓ_1 and ℓ_2. This circle
corresponds to a choice of x̂ such that the closest point on it to b is the foot of the
perpendicular from b to ℓ_2. The residual indicated by r_2 is the desired solution; it
has the minimum norm among the smallest residuals.
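This geometric construction can be checked numerically. The sketch below scans x for illustrative values of a, b, and η, using the closed form of the inner minimum established in Section 3; all numbers are made up for illustration.

    import numpy as np

    # Brute-force check of the scalar min-min problem: for each x, the inner
    # minimum over ||da|| <= eta equals max(||a x - b|| - eta |x|, 0).
    a, b, eta = np.array([1.0, 0.0]), np.array([0.6, 0.8]), 0.3   # ||b|| = 1
    xs = np.linspace(-2, 2, 4001)
    inner = np.array([max(np.linalg.norm(a * x - b) - eta * abs(x), 0.0)
                      for x in xs])
    print("minimizing x ~", xs[inner.argmin()], "min-min residual ~", inner.min())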
3. An Equivalent Minimization Problem. To solve (2.3), we start by showing
how to reduce it to an equivalent problem. For this purpose, we note that
(3.1)    ||(A + δA)x̂ − b||_2 ≥ ||Ax̂ − b||_2 − ||δA||_2 ||x̂||_2 ≥ ||Ax̂ − b||_2 − η ||x̂||_2.
The lower bound on the right-hand side of the above inequality is a non-negative
quantity and, therefore, the least it can get is zero. This will in turn depend on how
big or how small the value of ||δA||_2 can be.
For example, if it happens that for all vectors x̂ we always have
(3.2)    ||Ax̂ − b||_2 − η ||x̂||_2 > 0,
then we conclude, using the triangle inequality of norms, that
    ||(A + δA)x̂ − b||_2 ≥ ||Ax̂ − b||_2 − η ||x̂||_2 > 0 for all ||δA||_2 ≤ η.
It then follows from (3.1) that, under the assumption (3.2), we obtain
    min_{||δA||_2 ≤ η} ||(A + δA)x̂ − b||_2 ≥ ||Ax̂ − b||_2 − η ||x̂||_2 > 0.
It turns out that condition (3.2) is the main (and only) case of interest in this pa-
per, especially since we shall argue later that a degenerate problem arises when it is
violated. For this reason, we shall proceed for now with our analysis under the assumption
(3.2) and shall postpone our discussion of what happens when it is violated
until later in this section.
Now the lower bound in (3.1) is in fact achievable. That is, there exists a δA for
which
    ||(A + δA)x̂ − b||_2 = ||Ax̂ − b||_2 − η ||x̂||_2.
To see that this is indeed the case, choose δA as the rank one matrix
    δA_o = −η (Ax̂ − b) x̂^T / ( ||Ax̂ − b||_2 ||x̂||_2 ).
This leads to a vector δA_o x̂ that is collinear with the vector (Ax̂ − b). [Note that x̂ in
the above definition for δA_o cannot be zero since otherwise (3.2) cannot be satisfied.
Likewise, (Ax̂ − b) cannot be zero. Hence, δA_o is well-defined.]
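A quick numerical check of this rank-one construction (random illustrative data, not from the paper): for any (A, b, x̂) with ||Ax̂ − b|| > η ||x̂||, δA_o attains the lower bound and has spectral norm exactly η.

    import numpy as np

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((6, 3)), rng.standard_normal(6)
    x, eta = rng.standard_normal(3), 0.1          # small eta so (3.2) holds here
    r = A @ x - b
    dA_o = -eta * np.outer(r, x) / (np.linalg.norm(r) * np.linalg.norm(x))
    lhs = np.linalg.norm((A + dA_o) @ x - b)      # achieved residual
    rhs = np.linalg.norm(r) - eta * np.linalg.norm(x)   # the lower bound
    print(np.isclose(lhs, rhs), np.isclose(np.linalg.norm(dA_o, 2), eta))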
We are therefore reduced to the solution of the following optimization problem.
Problem 2. Consider a matrix A ∈ R^(m×n), with m ≥ n, a vector b ∈ R^m, and a
nonnegative real number η, and assume that for all vectors x̂ it holds that
(3.3)    ||Ax̂ − b||_2 − η ||x̂||_2 > 0.
Determine, if possible, an x̂ that solves
(3.4)    min_{x̂} ( ||Ax̂ − b||_2 − η ||x̂||_2 ).
3.1. Connections to TLS and H∞ Problems. Before solving Problem (3.4),
we elaborate on its connections with other formulations in the literature that also
attempt, in one way or another, to take into consideration uncertainties and perturbations
in the data.
First, cost functions similar to (3.4) but with squared distances, say
(3.5)    ||Ax̂ − b||_2^2 − γ ||x̂||_2^2,
for some γ > 0, often arise in the study of indefinite quadratic cost functions in
robust or H∞ estimation (see, e.g., the developments in [8, 9]). The major distinction
between this cost and the one posed in (3.4) is that the latter involves distance terms,
and it will be shown to provide an automatic procedure for selecting a "regularization"
factor that plays the role of γ in (3.5).
Likewise, the TLS problem seeks a matrix δA and a vector x̂ that minimize the
following Frobenius norm:
(3.6)    min_{δA, δb}  ||[δA  δb]||_F  subject to  (b + δb) ∈ Range(A + δA).
The solution of the above TLS problem is well-known and is given by the following
construction [4][p. 36]. Let {σ_i} denote the singular values of A, with σ_1 ≥ σ_2 ≥ ... ≥ σ_n,
and let {σ̄_i} denote the singular values of the extended matrix [A b], with
σ̄_1 ≥ ... ≥ σ̄_{n+1}. If σ̄_{n+1} < σ_n, then the TLS solution x̂_TLS of Ax̂ ≈ b
exists and is given by
(3.7)    x̂_TLS = (A^T A − σ̄_{n+1}^2 I)^{-1} A^T b.
For our purposes, it is more interesting to consider the following interpretation of
the TLS solution (see, e.g., [9]). Note that the condition σ̄_{n+1} < σ_n assures that
(A^T A − σ̄_{n+1}^2 I) is a positive-definite matrix, since σ_n^2 is the smallest eigenvalue of
A^T A. Therefore, we can regard (3.7) as the solution of the following optimization
problem, with an indefinite cost function,
    min_{x̂} ( ||Ax̂ − b||_2^2 − σ̄_{n+1}^2 ||x̂||_2^2 ).
This is a special form of (3.5) with a particular choice for γ. It again involves squared
distances, while (3.4) involves distance terms, and it will provide another choice of a γ-like
parameter. In particular, compare (3.7) with the expression (4.4) derived further
ahead for the solution of (3.4). We see that the new problem replaces σ̄_{n+1}^2 with a
new parameter α that will be obtained from the positive root of a secular (nonlinear)
equation.
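For reference, a minimal sketch of the classical TLS construction (3.7), assuming the generic case σ̄_{n+1} < σ_n and m > n:

    import numpy as np

    # TLS via the SVDs of A and [A b]: a "regularized" formula with a
    # *negative* regularization parameter -sigma_bar_{n+1}^2.
    def tls_solve(A, b):
        n = A.shape[1]
        sig_bar = np.linalg.svd(np.column_stack([A, b]), compute_uv=False)
        sig = np.linalg.svd(A, compute_uv=False)
        assert sig_bar[n] < sig[n - 1], "generic TLS solution does not exist"
        return np.linalg.solve(A.T @ A - sig_bar[n] ** 2 * np.eye(n), A.T @ b)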
3.2. Significance of the Fundamental Assumption. We shall solve Problem
(3.4) in the next section. Here, we elaborate on the significance of the condition
(3.3). So assume (3.3) is violated at some nonzero point x̂^(1), namely (footnote 1)
(3.8)    ||Ax̂^(1) − b||_2 ≤ η ||x̂^(1)||_2,
and define the perturbation
(3.9)    δA^(1) = (b − Ax̂^(1)) x̂^(1)T / ||x̂^(1)||_2^2.
(Footnote 1: If the violation occurs for some zero x̂^(1) this means that we must necessarily have
b = 0, which contradicts our assumption of a nonzero vector b.)
It is clear that δA^(1) is a valid perturbation since, in view of (3.8), we have ||δA^(1)||_2 ≤
η. But this particular perturbation leads to
(3.10)    (A + δA^(1)) x̂^(1) − b = 0.
That is, the lower limit of zero is achieved for (δA^(1), x̂^(1)) and x̂^(1) can be taken as a
solution to (2.3). In fact, there are many possible solutions in this case. For example,
once one such x (1) has been found, an infinite number of others can be constructed
from it. To verify this claim, assume x̂^(1) is a vector that satisfies (3.8), viz., it satisfies
    ||Ax̂^(1) − b||_2 ≤ (η − ε) ||x̂^(1)||_2
for some ε ≥ 0. Now assume we replace x̂^(1) by x̂^(2) = x̂^(1) + δ, for some vector δ to
be determined so as to violate condition (3.3) and, therefore, also satisfy a relation of
the form
(3.11)    ||Ax̂^(2) − b||_2 ≤ η ||x̂^(2)||_2.
If such an x̂^(2) can be found, then constructing the corresponding δA^(2) as in (3.9)
would also lead to a solution (δA^(2), x̂^(2)).
Condition (3.11) requires a choice for the vector δ such that
(3.12)    ||A(x̂^(1) + δ) − b||_2 ≤ η ||x̂^(1) + δ||_2.
But this can be satisfied by imposing the sufficient condition
    η ( ||x̂^(1)||_2 − ||δ||_2 ) ≥ ||Ax̂^(1) − b||_2 + ||A||_2 ||δ||_2,
where the left-hand side is the smallest η ||x̂^(1) + δ||_2 can get, while the right-hand side
is the largest ||A(x̂^(1) + δ) − b||_2 can get. Solving for ||δ||_2 we see that any vector δ
that satisfies
    ||δ||_2 ≤ ε ||x̂^(1)||_2 / ( η + ||A||_2 )
will lead to a new vector x̂^(2) that also violates (3.3). Consequently, given any single
nonzero violation x̂^(1), many others can be obtained by suitably perturbing it.
We shall not treat the degenerate case in this paper (as well as the case when
(3.8) is violated only with equality). We shall instead assume throughout that the
fundamental condition (3.3) holds. Under this assumption, the problem will turn out
to always have a unique solution.
3.3. The Fundamental Condition for Non-Degeneracy. The fundamental
condition (3.3) needs to be satisfied for all vectors x̂. This can be restated in terms of
conditions on the data (A, b, η). To see this, note that (3.3) implies, by squaring,
that we must have
(3.13)    J(x̂) = η^2 ||x̂||_2^2 − ||Ax̂ − b||_2^2 < 0 for all x̂.
That is, the quadratic form J(x̂) that is defined on the left-hand side of (3.13) must
be negative for any value of the independent variable x̂. This is only possible if:
(i) The quadratic form J(x̂) has a maximum with respect to x̂, and
(ii) the value of J(x̂) at its maximum is negative.
The necessary condition for the existence of a unique maximum (since we have a
quadratic cost function) is
(3.14)    (η^2 I − A^T A) < 0,
which means that η should satisfy η < σ_min(A). Under this condition, the expression
for the maximum point x̂_max of J(x̂) is
(3.15)    x̂_max = (A^T A − η^2 I)^{-1} A^T b.
Evaluating J(x̂) at x̂_max we obtain
(3.16)    J(x̂_max) = b^T A (A^T A − η^2 I)^{-1} A^T b − b^T b.
Therefore, the requirement that J(x̂_max) be negative corresponds to the second condition
in the following lemma.
Lemma 3.1. Necessary and sufficient conditions in terms of (A, b, η) for the
fundamental relation (3.3) to hold are:
(3.17)    (η^2 I − A^T A) < 0, i.e., η < σ_min(A),
and
(3.18)    b^T [ I − A (A^T A − η^2 I)^{-1} A^T ] b > 0.
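These two conditions are straightforward to test numerically, as in the following sketch (the function name is illustrative):

    import numpy as np

    # Test of the non-degeneracy conditions of Lemma 3.1:
    # eta < sigma_min(A) and b^T (I - A (A^T A - eta^2 I)^{-1} A^T) b > 0.
    def is_nondegenerate(A, b, eta):
        sigma_min = np.linalg.svd(A, compute_uv=False).min()
        if eta >= sigma_min:
            return False
        n = A.shape[1]
        M = A.T @ A - eta**2 * np.eye(n)     # positive definite when eta < sigma_min
        w = np.linalg.solve(M, A.T @ b)
        return b @ b - b @ (A @ w) > 0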
Note that for a well-defined problem of the form (2.3) we need to assume η > 0,
which, in view of (3.17), means that A should be full rank so that σ_min(A) > 0. We
therefore assume, from now on, that
    0 < η < σ_min(A) and A is full rank.
We further introduce the singular value decomposition (SVD) of A:
    A = U [Σ; 0] V^T,
where U ∈ R^(m×m) and V ∈ R^(n×n) are orthogonal, and Σ = diag(σ_1, ..., σ_n) is diagonal,
with
    σ_1 ≥ σ_2 ≥ ... ≥ σ_n > 0
being the singular values of A. We further partition the vector U^T b into
    U^T b = [b_1; b_2], with b_1 ∈ R^n and b_2 ∈ R^(m−n).
While solving the minimization problem (3.4), we shall first assume that the two
smallest singular values of A are distinct and, hence, satisfy σ_{n-1} > σ_n.
Later in Sec. 4.6 we consider the case in which multiple singular values can occur.
4. Solving the Minimization Problem. To solve (3.4), we define the nonconvex
cost function
    L(x̂) = ||Ax̂ − b||_2 − η ||x̂||_2,
which is continuous in x̂ and bounded from below by zero in view of (3.3). A minimum
point for L(x̂) can only occur at ∞, at points where L(x̂) is not differentiable,
or at points where its gradient, ∇L(x̂), is 0. In particular, note that L(x̂) is not
differentiable only at x̂ = 0 and at any x̂ that satisfies Ax̂ = b. Points x̂
satisfying Ax̂ = b are excluded by the fundamental condition (3.3). Also, we can
rule out ||x̂||_2 → ∞ since (3.3) forces η < σ_n, so that
    lim_{||x̂||_2 → ∞} L(x̂) = ∞.
Now at points where L(x̂) is differentiable, the gradient of L is given by
    ∇L(x̂) = A^T (Ax̂ − b) / ||Ax̂ − b||_2 − η x̂ / ||x̂||_2,
where we have introduced the positive real number
(4.1)    α = η ||Ax̂ − b||_2 / ||x̂||_2.
In view of the fundamental condition (3.3) we see that the value of α is necessarily
larger than η^2:
(4.2)    α > η^2.
Likewise, the Hessian of L is given by
(4.3)    ∇^2 L(x̂) = A^T A / ||r||_2 − A^T r r^T A / ||r||_2^3 − η I / ||x̂||_2 + η x̂ x̂^T / ||x̂||_2^3,
where r = Ax̂ − b.
The critical points of L(x̂) (where the gradient vanishes) satisfy
    A^T (Ax̂ − b) = α x̂,
or, equivalently,
(4.4)    (A^T A − α I) x̂ = A^T b.
Equations (4.1) and (4.4) completely specify the stationary points of L("x). They
provide two equations in the unknowns (α, x̂). We can use (4.4) to eliminate x̂ from
(4.1) and, hence, obtain an equation in α. Once we solve for α, we can then use
equation (4.4) to determine the solution x̂. The equation we obtain for α will in
general be a nonlinear equation and the desired α will be a root of it. The purpose
of the discussion in the sequel is to show where the root α that corresponds to the
global minimizer of L lies and how to find it.
We know from (4.2) that α > η^2. We shall show soon that we only need to look
for the solution α in the interval
(4.5)    η^2 < α ≤ σ_{n-1}^2;
see Sec. 4.5. [Further analysis later in the paper will in fact show that α lies within
the smaller interval (η^2, σ_n^2].] Hence, the coefficient matrix in (4.4) is always
nonsingular except for α = σ_{n-1}^2 or α = σ_n^2.
In summary, we see that the candidate solutions x̂ to our minimization problem
are the following:
1. x̂ = 0, which is a point at which L is not differentiable. We shall show that
x̂ = 0 can not be a global minimizer of L.
2. Solutions (α, x̂) to (4.1) and (4.4) when α ∉ {σ_n^2, σ_{n-1}^2}. In this
case, we will see that α can only lie in the open interval (η^2, σ_n^2).
3. Solutions (α, x̂) to (4.1) and (4.4) when (A^T A − αI) is singular. We will see
that this can only happen for the choices α = σ_n^2 or α = σ_{n-1}^2.
The purpose of the analysis in the sequel is to rule out all the possibilities except
for one as a global minimum for L. In loose terms, we shall show that in general a
unique global minimizer (α, x̂) exists and that the corresponding α lies in the open
interval (η^2, σ_n^2). Only in a degenerate case, the solution is obtained by taking α = σ_n^2
and by solving (4.4) for x̂. In other words, the global minimum will be obtained from
the stationary points of L, which is why we continue to focus on them.
The final statement of the solution is summarized in Sec. 4.7.
4.1. Positivity of the Hessian Matrix. We are of course only interested in
those critical points of L that are candidates for local minima. Hence, the Hessian
matrix at these points must be positive semi-definite.
ff"x at a critical point, we conclude from equation (4.3) that
Now observe that the second term is a symmetric rank-1 matrix that is also positive-semidefinite
since ff ? j 2 . Hence, in view of the Cauchy interlacing theorem [3]
the smallest eigenvalue of \DeltaL("x) will lie between the two smallest eigenvalues of the
. This shows that the value of ff can not exceed oe 2
since otherwise the two smallest eigenvalues of
will be nonpositive and
the Hessian matrix will have a nonpositive eigenvalue.
The above argument explains why we only need to look for α in the interval (4.5).
4.2. Solving for x̂ and the Secular Equation. Given that we only need consider
values of α in the interval (η^2, σ_{n-1}^2], we can now solve for x̂ using (4.1) and (4.4).
Two cases should be considered since the coefficient matrix (A^T A − αI) may be singular
for α = σ_n^2 or α = σ_{n-1}^2.
I. The case α ∉ {σ_n^2, σ_{n-1}^2}. From Eq. (4.4) we see that among the α's in the interval
(4.5), as long as α is not equal to either σ_n^2 or σ_{n-1}^2, the critical point x̂ associated
with α is given uniquely by
(4.6)    x̂ = (A^T A − α I)^{-1} A^T b.
Moreover, from equations (4.1) and (4.4) we see that
    α^2 ||x̂||_2^2 − η^2 ||Ax̂ − b||_2^2 = 0.
Substituting for x̂ and using the SVD of A to simplify, we obtain the equivalent expression
(4.7)    G(α) = Σ_{i=1}^{n} (σ_i^2 − η^2)(u_i^T b)^2/(σ_i^2 − α)^2 − η^2 ||b_2||_2^2/α^2.
Clearly the roots of G(α) that lie in the interval (η^2, σ_{n-1}^2] will correspond to critical
points that are candidates for local minima. Therefore we will later investigate the
roots of G(α).
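For illustration, the secular function (4.7) and a bisection search for its root in (η^2, σ_n^2) can be coded as below; `sig` holds the singular values, `b1` the first n entries of U^T b, and `nb2` the norm of b_2 (names are illustrative). The monotonicity of G on this interval, established in Sec. 4.3, is what makes bisection safe.

    import numpy as np

    def G(alpha, sig, b1, nb2, eta):
        return (((sig**2 - eta**2) * b1**2 / (sig**2 - alpha) ** 2).sum()
                - eta**2 * nb2**2 / alpha**2)

    def root_in_interval(sig, b1, nb2, eta, iters=200):
        lo, hi = eta**2, sig.min() ** 2           # search (eta^2, sigma_n^2)
        eps = 1e-12 * (hi - lo)
        if not (G(lo + eps, sig, b1, nb2, eta) < 0 < G(hi - eps, sig, b1, nb2, eta)):
            return None                           # no root: degenerate branch
        for _ in range(iters):                    # G is increasing here
            mid = 0.5 * (lo + hi)
            if G(mid, sig, b1, nb2, eta) < 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)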
II. The case α = σ_n^2 or α = σ_{n-1}^2. From Eq. (4.4) we see that α = σ_n^2 or
α = σ_{n-1}^2 can correspond to a critical point x̂ only if either u_n^T b = 0 or u_{n-1}^T b = 0,
where u_n and u_{n-1} denote the columns of U that correspond to σ_n and σ_{n-1}.
We only show here how to solve for x̂ when α = σ_n^2. The technique for α = σ_{n-1}^2
is similar.
From equation (4.4) it is clear that α = σ_n^2 is a candidate for a critical point if,
and only if, u_n^T b = 0. In this case the associated x̂'s (there may be more than one)
satisfy the equations
(4.8)    (A^T A − σ_n^2 I) x̂ = A^T b,
and
(4.9)    σ_n^4 ||x̂||_2^2 = η^2 ||Ax̂ − b||_2^2.
Now define y = V^T x̂ and consider the following partitionings:
    y = [ ȳ ; y_n ],    b_1 = [ b̄_1 ; u_n^T b ],
where ȳ and b̄_1 collect the leading n − 1 entries of y and b_1, and let Σ̄ = diag(σ_1, ..., σ_{n-1}).
The quantities x̂ and y define each other uniquely and ||y||_2 = ||x̂||_2.
It follows from equation (4.8) that
(4.10)    ȳ = (Σ̄^2 − σ_n^2 I)^{-1} Σ̄ b̄_1.
Substituting this into equation (4.9) we have
    σ_n^4 ( ||ȳ||_2^2 + y_n^2 ) = η^2 ( Σ_{i=1}^{n-1} σ_n^4 (u_i^T b)^2/(σ_i^2 − σ_n^2)^2 + σ_n^2 y_n^2 + ||b_2||_2^2 ).
Solving for y_n^2 we obtain
    y_n^2 = −σ_n^2 Ḡ(σ_n^2) / (σ_n^2 − η^2),
where we introduced the function
(4.11)    Ḡ(α) = Σ_{i=1}^{n-1} (σ_i^2 − η^2)(u_i^T b)^2/(σ_i^2 − α)^2 − η^2 ||b_2||_2^2/α^2.
Comparing with the definition (4.7) for G(α) we see that
(4.12)    G(α) = Ḡ(α) + (σ_n^2 − η^2)(u_n^T b)^2/(σ_n^2 − α)^2.
Note that Ḡ(α) = G(α) if u_n^T b = 0, which is the case when α = σ_n^2 is a possibility.
Therefore the possible values of y_n are
(4.13)    y_n = ± ( −σ_n^2 Ḡ(σ_n^2) / (σ_n^2 − η^2) )^{1/2}.
It follows that
(4.14)    u_n^T b = 0 and Ḡ(σ_n^2) ≤ 0
are the necessary conditions for α = σ_n^2 to correspond to a stationary double point
at:
(4.15)    x̂ = V [ (Σ̄^2 − σ_n^2 I)^{-1} Σ̄ b̄_1 ; ± ( −σ_n^2 Ḡ(σ_n^2)/(σ_n^2 − η^2) )^{1/2} ].
The global minimum. The purpose of the analysis in the following sections is to
show that in general the global minimum is given by (4.6) with the corresponding α
lying in the interval (η^2, σ_n^2). When a root of the secular equation G(α) in (4.7) does
not exist in the open interval (η^2, σ_n^2), we shall then show that the global minimum
is given by (4.15).
4.3. The Roots of the Secular Equation. We have argued earlier in (4.5)
that the roots α of G(α) that may lead to global minimizers x̂ can lie in the interval
(η^2, σ_{n-1}^2]. We now determine how many roots can exist in this interval and later show
that only the root lying in the subinterval (η^2, σ_n^2) corresponds to a global minimum
when it exists. Otherwise, we have to use (4.15). The details are given below.
To begin with, we establish some properties of G(α). From the non-degeneracy
assumption (3.3) on the data it follows that G(η^2) < 0. Moreover, from the expression
(4.7) for G(α) we see that it has a pole at σ_n^2 provided that u_n^T b is not equal to zero,
in which case
    lim_{α → σ_n^2} G(α) = +∞.
Now observe from the expression for the derivative of G(α),
(4.17)    G'(α) = Σ_{i=1}^{n} 2 (σ_i^2 − η^2)(u_i^T b)^2/(σ_i^2 − α)^3 + 2 η^2 ||b_2||_2^2/α^3,
that G'(α) > 0 for η^2 < α < σ_n^2. We conclude from these facts that G(α) has exactly
one root in the open interval (η^2, σ_n^2). Actually, since also
    lim_{α → 0+} G(α) = −∞ (when b_2 ≠ 0),
we conclude that when u_n^T b ≠ 0, G(α) has exactly one root in the interval (0, σ_n^2) and
that this root lies in the subinterval (η^2, σ_n^2).
When u_n^T b = 0 the function G(α) does not have a pole at σ_n^2 and the above argument
does not hold. However, by using the still valid fact that lim_{α→0+} G(α) = −∞ and
that G'(α) > 0 over the larger interval (0, σ_{n-1}^2), we conclude the following:
1. If u_n^T b = 0 and G(σ_n^2) > 0 then G(α) has a unique root
in the interval (0, σ_n^2).
2. If we also have u_n^T b = 0 and G(σ_n^2) ≤ 0, then G(α) can have at most one root in the interval
(σ_n^2, σ_{n-1}^2). The root may or may not exist.
What about the interval (σ_n^2, σ_{n-1}^2)? We now establish that G(α)
can have at most two roots in this interval. For this purpose, we first observe that
both G(α) and α^2 G(α) have the same number of roots in (σ_n^2, σ_{n-1}^2) (we have only
added a double root at 0). Next we compute the first derivative of α^2 G(α), obtaining
    d/dα [ α^2 G(α) ] = Σ_{i=1}^{n} 2 σ_i^2 (σ_i^2 − η^2)(u_i^T b)^2 α/(σ_i^2 − α)^3.
Using this we compute the second derivative, obtaining
    d^2/dα^2 [ α^2 G(α) ] = Σ_{i=1}^{n} 2 σ_i^2 (σ_i^2 − η^2)(u_i^T b)^2 (σ_i^2 + 2α)/(σ_i^2 − α)^4.
It is clear that the second derivative is strictly positive for non-negative α. From this
we can conclude that α^2 G(α) and, hence, G(α), have at most two zeros in (σ_n^2, σ_{n-1}^2).
We have therefore established the following result.
Lemma 4.1. The following properties hold for the function G(α) defined in (4.7):
1. When u_n^T b ≠ 0, the function G(α) has a single root in the interval (η^2, σ_n^2)
and at most two roots in the interval (σ_n^2, σ_{n-1}^2]. We label them as:
    η^2 < α_1 < σ_n^2 < α_2 ≤ α_3.
2. When u_n^T b = 0 and G(σ_n^2) > 0, the function G(α) has a unique root in the
interval (η^2, σ_n^2).
3. When u_n^T b = 0 and G(σ_n^2) ≤ 0, the function G(α) has at most one root in the
interval (η^2, σ_{n-1}^2].
It is essential to remember that the roots α_2 and α_3 may not exist, though they
must occur as a pair (counting multiplicity) if they exist.
We now show that α_3 cannot correspond to a local minimum if α_2 ≠ α_3, i.e., if
the two roots in the interval (σ_n^2, σ_{n-1}^2] are distinct. Indeed, assume α_2 and α_3 exist.
Then from the last lemma it must hold that u_n^T b ≠ 0, so that α_1 must also exist. Hence,
we must have G'(α_2) ≤ 0 ≤ G'(α_3).
If we assume α_2 < α_3, we shall use the fact that G'(α_3) > 0 to show that α_3 can
not correspond to a local minimum solution x̂. This will be achieved by showing that
the determinant of the Hessian of L(x̂) at α_3 is negative. For this we note, from the
expression for ∇^2 L(x̂) at a critical point, that
    det( ∇^2 L(x̂) ) ∝ det( (A^T A − αI) + c x̂ x̂^T ),
where we introduce, for convenience, the shorthand notation c = (α − η^2)/||x̂||_2^2.
Then, using the SVD of A,
    det( (A^T A − αI) + c x̂ x̂^T ) = ( Π_{i=1}^{n} (σ_i^2 − α) ) ( 1 + c x̂^T (A^T A − αI)^{-1} x̂ ).
Evaluating at α = α_3, and noting that G'(α_3) > 0, we
conclude that det(∇^2 L) at the x̂ corresponding to α_3 is negative. Hence, α_3 cannot
correspond to a local minimum.
4.4. Candidates for Minima. We can now be more explicit about the candidates
for global minimizers of L, which we mentioned just prior to Sec. 4.1:
1. x̂ = 0, which corresponds to a point where L is not differentiable.
2. If α_1 exists then the corresponding x̂ is a candidate. Recall that α_1 is guaranteed
to exist if u_n^T b ≠ 0. It may or may not exist otherwise.
3. If u_n^T b ≠ 0 and α_2 exists then the corresponding x̂ is a candidate.
4. If u_n^T b = 0 and Ḡ(σ_n^2) ≤ 0, the x̂ associated with α = σ_n^2 is a candidate.
5. If u_{n-1}^T b = 0, the x̂ associated with α = σ_{n-1}^2 is a candidate.
We shall show that 2) is the global minimizer when α_1 exists. Otherwise, 4) is the
global minimizer.
We start by showing that x̂ = 0 can not be the global minimizer of L. We divide
our argument into two cases: b_1 ≠ 0 and b_1 = 0. If b_1 ≠ 0 we necessarily have
u_i^T b ≠ 0 for some 1 ≤ i ≤ n. Choose z = (u_i^T b/σ_i) v_i; then Az − b = (u_i^T b) u_i − b
is the error vector due to projecting b onto u_i. Hence, ||Az − b||_2 < ||b||_2,
and we obtain
    L(z) = ||Az − b||_2 − η |u_i^T b|/σ_i < ||b||_2 = L(0).
We conclude that L(0) > L(z), and x̂ = 0 cannot correspond to a global minimum
Now we consider the case when b_1 = 0. Note that by the non-degeneracy assumption
we must have b_2 ≠ 0. Now define again y = V^T x̂. Then we can simplify L(x̂) to
obtain
    L(x̂) = ( ||Σ y||_2^2 + ||b_2||_2^2 )^{1/2} − η ||y||_2.
Choose x̂ = ε v_n for a small ε > 0. The following sequence of inequalities then
holds:
    L(ε v_n) ≤ ||b_2||_2 + ε^2 σ_n^2/(2||b_2||_2) − η ε < ||b_2||_2 = L(0) for ε small enough.
Therefore x̂ = 0 can never be the global minimum.
4.5. Continuation Argument. We are now ready to show that if α_1 exists
then the corresponding x̂ in (4.6) gives the global minimum. Otherwise, the x̂ in
(4.15) that corresponds to α = σ_n^2 gives a double global minimum. The proof will be by
continuation on the parameter β := (u_n^T b)^2.
We use (4.12) to write
(4.18)    G(α) = Ḡ(α) + (σ_n^2 − η^2) β/(σ_n^2 − α)^2.
We also recall from the definition of Ḡ in (4.11) that it has a similar expression to
that of G in (4.7), except that the pole of G at σ_n^2 has been extracted (as shown by
(4.18)). Hence, the derivative of Ḡ has a form similar to that of the derivative of G in
(4.17) and we can conclude that Ḡ'(α) > 0 for 0 < α < σ_{n-1}^2.
We continue our argument by considering separately the three cases:
I. Ḡ(σ_n^2) > 0. In this case, and because of (4.14), α = σ_n^2 can not correspond to
a global minimum. By further noting that Ḡ is increasing, so that Ḡ(α) > 0 for α ≥ σ_n^2, and
using (4.18), we also have that G(α) > 0 over (σ_n^2, σ_{n-1}^2]. Since
σ_n^2 is either a pole of G(α) (when u_n^T b ≠ 0) or a point where G(σ_n^2) > 0
(when u_n^T b = 0), and since G(η^2) < 0, we conclude
that G has a unique root α_1 in (η^2, σ_n^2).
In this case the only contenders for global minima over (0, σ_{n-1}^2] are
α_1 and α = σ_{n-1}^2. By an analysis similar to the one in Sec. 4.2 for α = σ_n^2 it can be shown
that a necessary condition for α = σ_{n-1}^2 to correspond to a global minimum is that
(4.19)    u_{n-1}^T b = 0 and Ḡ_{n-1}(σ_{n-1}^2) ≤ 0,
where Ḡ_{n-1} is defined as in (4.11) with the pole at σ_{n-1}^2 extracted.
These two conditions imply that we must have G(σ_{n-1}^2) ≤ 0. This result is incompatible
with the fact that G(α) > 0 over (σ_n^2, σ_{n-1}^2]. Therefore, α = σ_{n-1}^2
can not correspond to a global minimum and we conclude
that the only critical point we need to consider corresponds to the one associated with
the unique root of G(α) in (η^2, σ_n^2), which must naturally correspond to the global
minimum.
In summary, the solution x̂ in (4.6) that corresponds to α_1 is the global minimum
in this case.
II. Ḡ(σ_n^2) = 0. In this case, and because of (4.14), α = σ_n^2 can correspond to a global
minimum only if β = 0, in which case we also deduce from (4.18) that G(σ_n^2) = Ḡ(σ_n^2) = 0.
Hence, σ_n^2 is a root of G. By using G(η^2) < 0 and the monotonicity of G,
we conclude that G does not have any other root in (η^2, σ_{n-1}^2].
Therefore, when β = 0 the only contenders for global minima over (0, σ_{n-1}^2]
are α = σ_n^2 and α = σ_{n-1}^2. For α = σ_{n-1}^2 to correspond to a global minimum we
saw above that we must necessarily have G(σ_{n-1}^2) ≤ 0. This is inconsistent with
G being increasing and G(σ_n^2) = 0. We thus obtain that the solution x̂ in (4.15)
that corresponds to σ_n^2 is the global minimizer.
What about the case β ≠ 0? In this case, and because of (4.14), α = σ_n^2 can not
correspond to a global minimum. By further noting that Ḡ(σ_n^2) = 0 we conclude that
Ḡ(α) > 0 for α > σ_n^2; using (4.18) we also have that G(α) > 0 over (σ_n^2, σ_{n-1}^2].
Moreover, σ_n^2 is now a pole of G(α) and, since G(η^2) < 0, we conclude
that G has a unique root α_1 in (η^2, σ_n^2). In this
case the only contenders for global minima over (0, σ_{n-1}^2] are α_1 and α = σ_{n-1}^2.
By an analysis similar to the one in case I, we can rule out α = σ_{n-1}^2.
In summary, we showed the following when Ḡ(σ_n^2) = 0:
1. When u_n^T b ≠ 0, the solution x̂ in (4.6) that corresponds to α_1 is the global
minimum.
2. When u_n^T b = 0, the solution x̂ in (4.15) that corresponds to σ_n^2 is the global
minimum.
III. Ḡ(σ_n^2) < 0. This is the most complex situation. Let ω be the largest number
such that σ_n^2 < ω ≤ σ_{n-1}^2 and G(α) has no poles in the interval (σ_n^2, ω).
By the given conditions it is obvious from the form of G(α) that it has two roots
in (σ_n^2, ω) for sufficiently small β. Now we find the largest number δ such that for
all β in the interval (0, δ] the function G(α) has two roots (counting multiplicity) in
(σ_n^2, ω). We claim that for β > δ there are no roots of G(α) in (σ_n^2, ω). To see this we
replace β in G(α) by δ + (β − δ)
and observe that the term involving (β − δ) is strictly positive in the interval (σ_n^2, ω),
so that G is strictly positive there.
We continue our analysis by considering separately two cases:
We will show that at β = 0 the function L has a double
global minimum at α = σ_n^2, and that as β is increased this double root at σ_n^2
bifurcates into the two roots α_1 and α_2, and that L(α_1) ≤ L(α_2).
When β = 0 the function G(α) = Ḡ(α) has exactly one root in (0, ω]. This is because
Ḡ is increasing there and Ḡ(σ_n^2) < 0. By using the same proof that we used earlier to
show that α_3 cannot correspond to a local minimum, we can establish that this root
also cannot correspond to a minimum. This leaves us with the double stationary points
that we computed in (4.15) and which corresponded to α = σ_n^2. It is an easy matter
to verify, using the formula (4.15), that both stationary points
yield the same value for L.
We now allow β to increase. Let y(β) denote the stationary point
x̂ corresponding to α_1(β). Also let z(β) denote the stationary point x̂ corresponding
to α_2(β). It is easy to see from the form of G(α) that
    lim_{β→0} α_1(β) = lim_{β→0} α_2(β) = σ_n^2
whenever Ḡ(σ_n^2) < 0. Now we will show that
    lim_{β→0} L(y(β)) = lim_{β→0} L(z(β)) = L(z(0)).
First we observe that the result is true for the components i ≠ n directly from formulas (4.4)
and (4.10). Next we note that Ḡ(α) is continuous at α = σ_n^2. Therefore
    lim_{β→0} Ḡ(α_j(β)) = Ḡ(σ_n^2), j = 1, 2.
Now using formula (4.13) it can be verified that
    lim_{β→0} y_n(β)^2 = lim_{β→0} z_n(β)^2 = −σ_n^2 Ḡ(σ_n^2)/(σ_n^2 − η^2).
Therefore, L(y(β)) and L(z(β)) are continuous on the interval [0, ∞), with L(y(0)) =
L(z(0)). We now compute the derivative of L with respect to β at a stationary
point. We have already observed that at a stationary point x̂, corresponding to some
α, the objective function L can be simplified to L = (1 − η^2/α) ||Ax̂ − b||_2. To simplify the
derivation we actually take the derivative of η^2 L^2, which can be expressed as a rational
function of α and β. Now we differentiate with respect to β. Next we obtain an expression
for dα/dβ by differentiating the stationarity relation α^2 ||x̂||_2^2 = η^2 ||Ax̂ − b||_2^2 with respect
to β. Solving this equation for the term involving dα/dβ and substituting it into the above
equation for the derivative of η^2 L^2, we obtain an expression whose sign is determined by (σ_n^2 − α).
From this expression we can immediately conclude that the smaller root α_1(β)
decreases the objective function L(y(β)) as β increases from 0, and the larger root
α_2(β) increases the value of the objective function L(z(β)) as β increases. Since L(y(0)) =
L(z(0)), we can now conclude that L(y(β)) ≤ L(z(β)) for all non-negative
β such that α_2(β) < σ_{n-1}^2.
Therefore the choice for global minimum is between y(β) and the critical points,
if any, corresponding to α = σ_{n-1}^2. As mentioned before, α = σ_{n-1}^2 can correspond
to a critical point only if the condition (4.19) holds.
From the arguments in Sec. 4.3 we know that G(α) has at most two roots in
(σ_n^2, σ_{n-1}^2]. Therefore it follows that under the condition (4.19) we have α_3 = σ_{n-1}^2. This in turn
implies that α_2(β) < σ_{n-1}^2.
Using the condition (4.19) and carrying out an analysis similar to that of (4.15),
we can compute the critical point associated with α = σ_{n-1}^2. From that it is easy to
verify that
    lim_{β→δ} L(z(β)) = L(w),
where w denotes the critical point associated with α = σ_{n-1}^2.
Now from the continuation argument for L it follows that L(y(β)) ≤ L(w).
Therefore we do not need to consider α = σ_{n-1}^2 as a possibility
for the global minimum.
Furthermore, when β > δ we argued earlier that there are no roots of G(α) in
the interval (σ_n^2, σ_{n-1}^2). Also, from the above argument it follows that α = σ_{n-1}^2 can not
correspond to a global minimum.
In summary, the x̂ in (4.6) that corresponds to α_1 is the global minimizer.
4.6. Multiple Singular Values. So far in the analysis we have implicitly ignored
the possibility that oe . We now discuss how to take care of this
possibility. We only need to consider critical points ff situated in the interval (0; oe 2
Let Un denote a matrix with orthonormal columns that span the left singular
subspace associated with the smallest singular value of A. If kU T
it follows from equation (4.4) that
n is not a possibility for a critical point.
Furthermore G(ff) has a single root in (j
must give the global minimum.
If kU T
then either there exists a root of G(ff) in the interval (j
which corresponds to the global minimum, or there is no such root and
will
give rise to multiple global minima all of which can be calculated by the technique
that led to (4.15). The only difference is that
y in equation (4.10) will now denote
BOUNDED ERRORS-IN-VARIABLES MODEL 19
the components of y associated with the right singular vectors perpendicular to the
range space of Vn , where Vn is a matrix with orthonormal columns that span the
right singular subspace of A corresponding to oe n . The proofs of these statements are
similar to the non-multiple singular value case.
4.7. Statement of the Solution of the Optimization Problem. We collect
in the form of a theorem the conclusions of our earlier analysis.
Theorem 4.2. Given A 2 R m\Thetan , with m n and A full rank, a nonzero
positive real number j satisfying j ! oe min (A). Assume further that
\Theta
The solution of the optimization problem:
min
min
can be constructed as follows.
ffl Introduce the SVD of A,
where U 2 R m\Thetam and V 2 R n\Thetan are orthogonal, and
is diagonal, with
being the singular values of A.
ffl Partition the vector U T b into
m\Gamman .
ffl Introduce the secular function
ffl Determine the unique positive root "
ff of G(ff) that lies in the interval (j
If it does not exist then set "
n .
ffl Then
1. If "
n , the solution " x is unique and is given by (4.6) or, equivalently,
2. If "
n and oe n ! oe then two solutions exist that are given by
(4.15). Otherwise, if A has multiple singular values at oe n , then multiple
solutions exist and we can use the same technique that led to (4.15) to
determine "
x as explained in the above section on multiple singular values.
We can be more explicit about the uniqueness of solutions. Assume A has multiple
singular values at oe n and let Un denote the matrix with singular vectors that spans
the left singular subspace of A associated with these singular values:
1. When kU T
n bk 6= 0, the solution " x is unique and it corresponds to a root "
as shown above.
2. When kU T
either an "
exists and the solution " x is unique.
Otherwise, "
n and multiple solutions "
x exist.
5. Restricted Perturbations. We have so far considered the case in which all
the columns of the A matrix are subject to perturbations. It may happen in practice,
however, that only selected columns are uncertain, while the remaining columns are
known precisely. This situation can be handled by the approach of this paper as we
now clarify.
Given A 2 R m\Thetan , we partition it into block columns,
\Theta
and assume, without loss of generality, that only the columns of A 2 are subject to
perturbations while the columns of A 1 are known exactly. We then pose the following
problem:
Given A 2 R m\Thetan , with m n and A full rank, b 2 R m , and a nonnegative real
number
min
x
min
\Theta
If we partition " x accordingly with A 1 and A 2 , say
then we can write
\Theta
Assuming, for any vector ("x
we can follow the argument at the beginning of Sec. 3 to conclude that the minimum
over ffiA 2 is achievable and is equal to
In this way, statement (5.1) reduces to the minimization problem
\Theta
This statement can be further reduced to the problem treated in Theorem 4.2 as
follows. Introduce the QR decomposition of A, say
R 11 R 12
BOUNDED ERRORS-IN-VARIABLES MODEL 21
where we have partitioned R accordingly with the sizes of A 1 and A 2 . Define4
Then (5.2) is equivalent to
min
R 11 R 12
\Gamma4
which can be further rewritten as
min
This shows that once the optimal "
x 2 has been determined, the optimal choice for "
is necessarily the one that annihilates the entry R
That is,
The optimal "
x 2 is the solution of
R 22
This optimization is of the same form as the problem stated earlier in (3.4) with " x
replaced by "
replaced by j 2 , A replaced by
R 22
, and b replaced by
Therefore, the optimal " x 2 can be obtained by applying the result of Theorem 4.2.
Once " x 2 has been determined, the corresponding " x 1 follows from (5.5).
6. Conclusion. In this paper we have proposed and solved a new optimization
problem for parameter estimation in the presence of data uncertainties. The problem
incorporates a priori bounds on the size of the perturbations. It has a "closed" form
solution that is obtained by solving an "indefinite" regularized least-squares problem
with a regression parameter that is determined from the positive root of a secular
equation.
Several extensions are possible. For example, weighted versions with uncertainties
in the weight matrices are useful in several applications, as well as cases with
multiplicative uncertainties and applications to filtering theory. Some of these cases,
in addition to more discussion on estimation and control problems with bounded
uncertainties, can be found in [11, 12, 13, 14].
Acknowledgement
. The authors would like to thank one of the anonymous reviewers
for pointing out a mistake in an earlier version of the paper.
--R
Some modified matrix eigenvalue problems
An analysis of the total least squares problem
Matrix Computations
The Total Least Squares Problem: Computational Aspects and Analysis
Filtering and smoothing in an H 1
Recursive linear estimation in Krein spaces - Part I: Theory
Fundamental inertia conditions for the minimization of quadratic forms in indefinite metric spaces
Parameter estimation in the presence of bounded data uncertainties
Estimation and control in the presence of bounded data uncertainties
"Parameter estimation in the presence of bounded modeling errors,"
"Estimation in the presence of multiple sources of uncertainties with applications"
"Design criteria for uncertain models with structured and unstructured uncertainties,"
--TR | modeling errors;least-squares estimation;total least-squares;secular equation |
327849 | The Design and Use of Algorithms for Permuting Large Entries to the Diagonal of Sparse Matrices. | We consider techniques for permuting a sparse matrix so that the diagonal of the permuted matrix has entries of large absolute value. We discuss various criteria for this and consider their implementation as computer codes. We then indicate several cases where such a permutation can be useful. These include the solution of sparse equations by a direct method and by an iterative technique. We also consider its use in generating a preconditioner for an iterative method. We see that the effect of these reorderings can be dramatic although the best a priori strategy is by no means clear. | Introduction
We study algorithms for the permutation of a square unsymmetric sparse matrix A
of order n so that the diagonal of the permuted matrix has large entries. This can
be useful in several ways. If we wish to solve the system
where A is a nonsingular square matrix of order n and x and b are vectors of length
n, then a preordering to place large entries on the diagonal can be useful whether
direct or iterative methods are used for solution.
For direct methods, putting large entries on the diagonal suggests that pivoting
down the diagonal might be more stable. There is, of course, nothing rigorous in this
and indeed stability is not guaranteed. However, if we have a solution scheme like
the multifrontal method of Duff and Reid (1983), where a symbolic phase chooses
the initial pivotal sequence and the subsequent factorization phase then modifies
this sequence for stability, it can mean that the modification required is less than if
the permutation were not applied.
For iterative methods, simple techniques like Jacobi or Gauss-Seidel converge
more quickly if the diagonal entry is large relative to the off-diagonals in its row or
column and techniques like block iterative methods can benefit if the entries in the
diagonal blocks are large. Additionally, for preconditioning techniques, for example
for diagonal preconditioning or incomplete LU preconditioning, it is intuitively
evident that large diagonals should be beneficial.
We consider more precisely what we mean by such permutations in Section 2, and
we discuss algorithms for performing them and implementation issues in Section 3.
We consider the effect of these permutations when using direct methods of solution
in Section 4 and their use with iterative methods in Sections 5 and 6, discussing
the effect on preconditioning in the latter section. Finally, we consider some of the
implications of this current work in Section 7.
Throughout, the symbols jxj should be interpreted in context. If x is a scalar,
the modulus is intended; if x is a set, then the cardinality, or number of entries in
the set, is understood.
Permuting a matrix to have large diagonals
2.1 Transversals and maximum transversals
We say that an n \Theta n matrix A has a large diagonal if the absolute value of each
diagonal entry is large relative to the absolute values of the off-diagonal entries in
its row and column. We will be concerned with permuting the rows and columns
of the matrix so the resulting diagonal of the permuted matrix has this property.
That is, for the permuted matrix, we would like the ratio
to be large for all j, 1 - j - n. Of course, it is not even possible to ensure that
this ratio is greater than 1.0 for all j as the simple example
shows. It
is thus necessary to first scale the matrix before computing the permutation. An
appropriate scaling would be to scale the columns so that the largest entry in each
column is 1.0. The algorithm that we describe in Section 2.2 would then have the
effect of maximizing (2.1).
For an arbitrary nonsingular n \Theta n matrix, it is a necessary and sufficient
condition that for a set of n entries to be permuted to the diagonal, no two can
be in the same row and no two can be in the same column. Such a set of entries
is termed a maximum transversal, a concept that will be central to this paper and
which we now define more rigorously.
We let T denote a set of (at most n) ordered index pairs (i; j), 1 -
which each row index i and each column index j appears at most once. T is called
a transversal for matrix A, if a ij 6= 0 for each (i; . T is called a maximum
transversal if it has largest possible cardinality. jT j is equal to n if the matrix is
nonsingular. If indeed jT defines an n \Theta n permutation matrix P with
so that P T A is the matrix with the transversal entries on the diagonal.
In sparse system solution, a major use of transversal algorithms is in the
first stage of permuting matrices to block triangular form. The matrix is first
permuted by an unsymmetric permutation to make its diagonal zero-free after which
a symmetric permutation is used to obtain the block triangular form. An important
feature of this approach is that the block triangular form does not depend on which
transversal is found in the first stage (Duff 1977). A maximum transversal is also
required in the generalization of the block triangular ordering developed by (Pothen
and Fan 1990).
2.2 Bottleneck transversals
We will consider two strategies for obtaining a maximum transversal with large
transversal entries. The primary strategy that we consider in this paper is to
maximize the smallest value on the diagonal of the permuted matrix. That is, we
compute a maximum transversal T such that for any other maximum transversal T 1
we have
min
(i;j)2T
Transversal T is called a bottleneck transversal 1 , and the smallest value ja ij j, (i;
T , is called the bottleneck value of A. Equivalently, if jT smallest value
on the diagonal of P T A is maximized, over all permutations P, and equals the
bottleneck value of A.
An outline of an algorithm that computes a bottleneck transversal T 0 for a matrix
A is given below. We assume that we already have an algorithm for obtaining a
maximum transversal and denote by MT routine that returns a maximum
transversal for a matrix A, starting with the initial "guess" transversal T . We let
A ffl denote the matrix that is obtained by setting to zero in A all entries ja ij j for
which denote the transversal obtained by removing
from transversal T all the elements
Algorithm BT
Initialization:
Set fflmin to zero and fflmax to infinity.
while (there exist do
begin
choose
(We discuss how this is chosen later)
then
fflmin := ffl;
else
endif
Complete transversal for permutation;
(Needed if matrix structurally singular)
M is a maximum transversal for A, and hence jM j is the required cardinality
of the bottleneck transversal T 0 that is to be computed. If A is nonsingular, then
Throughout the algorithm, fflmax and fflmin are such that a maximum
transversal of size jM j does not exist for A fflmax but does exist for A fflmin . At each
step, ffl is chosen in the interval (fflmin; fflmax), and a maximum transversal for the
matrix A ffl is computed. If this transversal has size jM j, then fflmin is set to ffl,
1 The term bottleneck has been used for many years in assignment problems, for example
Glicksberg and Gross 1953)
otherwise fflmax is set to ffl. Hence, the size of the interval decreases at each step and
ffl will converge to the bottleneck value. After termination of the algorithm, T 0 is
the computed bottleneck transversal and ffl the corresponding bottleneck value. The
value for ffl is unique. The bottleneck transversal T 0 is not usually unique.
Algorithm BT makes use of algorithms for finding a maximum transversal. The
currently known algorithm with best asymptotic bound for finding a maximum
transversal is by Hopcroft and Karp (1973). It has a worst-case complexity of
O(
n- ), where - is the number of entries in the matrix. An efficient implementation
of this algorithm can be found in Duff and Wiberg (1988). The depth-first search
algorithm implemented by Duff (1981) in the Harwell Subroutine Library code MC21
has a theoretically worst-case behaviour of O(n- ), but in practice it behaves more
like O(n Because this latter algorithm is far simpler, we concentrate on this
in the following although we note that it is relatively straightforward to modify and
use the algorithm of Hopcroft and Karp (1973) in a similar way.
A limitation of algorithm BT is that it only maximizes the smallest value on
the diagonal of the permuted matrix. Although this means that the other diagonal
values are no smaller, they may not be maximal. Consider, for example, the 3 \Theta 3
ffiC A (2.2)
with ffi close to zero. Algorithm BT applied to this matrix returns
either the transversal f(1; 1); (2; 2); (3; 3)g or f(2; 1); (1; 2); (3; 3)g. Clearly, the latter
transversal is preferable. The modifications that we propose help to do this by
choosing large entries when possible for the early transversal entries.
It is beneficial to first permute the matrix to block triangular form and then to
use BT on only the blocks on the diagonal. This can be done since all entries in any
maximum transversal must lie in these blocks. Furthermore, not only does this mean
that BT operates on smaller matrices, but we also usually obtain a transversal of
better quality inasmuch as not only is the minimum diagonal entry maximized but
this is true for each block on the diagonal. Thus for matrix (2.2), the combination
of an ordering to block triangular form followed by BT would yield the preferred
transversal f(2; 1); (1; 2); (3; 3)g.
There are other possibilities for improving the diagonal values of the permuted
matrix which are not the smallest. One is to apply a row scaling subsequent to an
initial column scaling of the matrix A. This will increase the numerical values of all
the nonzero entries in those rows for which the maximum absolute numerical value
is less than one. A row scaling applied to the matrix (2.2) changes the coefficient
a 33 from ffi to 1:0, and now algorithm BT will compute f(2; 1); (1; 2); (3; 3)g as the
bottleneck transversal of the matrix (2.2). Unfortunately, such a row scaling does
not always help, as can be seen by the matrixB @
1:0 ffiC A
with the maximum transversals
all legitimate bottleneck transversals. Indeed the BT algorithm is very dependent on
scaling. For example, the matrix
has bottleneck transversal f(2; 1); (1; 2)g
whereas, if it is row scaled to
, the bottleneck transversal is f(1; 1); (2; 2)g.
Another possibility for improving the size of the diagonal values is to apply
algorithm BT repeatedly. Without loss of generality, suppose that, after application
of BT, entry a nn has the smallest diagonal value. Algorithm BT can then be
applied to the (n \Gamma 1) \Theta (n \Gamma 1) leading principal submatrix of A, and this could be
repeated until (after k steps) the (n \Gamma leading principal submatrix of
A only contains ones (on assumption original matrix was row and column scaled).
Obviously, this can be quite expensive, since algorithm BT is applied O(n) times
although we have a good starting point for the BT algorithm at each stage. We call
this algorithm the successive bottleneck transversal algorithm. Because of this and
the fact that we have found that it usually gives little improvement over BT, we do
not consider it further in this paper.
2.3 Maximum Product transversals
An algorithm yielding the same transversal independent of scaling is to maximize
the product of the moduli of entries on the diagonal, that is to find a permutation
oe so that
Y
a ioe i j (2.3)
is maximized. This is the strategy used for pivoting in full Gaussian elimination by
Olschowka and Neumaier (1996) and corresponds to obtaining a weighted bipartite
matching. Olschowka and Neumaier (1996) combine a permutation and scaling
strategy. The permutation, as in (2.3), maximizes the product of the diagonal entries
of the permuted matrix. (Clearly the product is zero if and only if the matrix is
structurally singular.) The scaling transforms the matrix into a so-called I-matrix,
whose diagonal entries are all one and whose off-diagonal entries are all less than or
equal to one.
Maximizing the product of the diagonal entries of A is equivalent to minimizing
the sum of the diagonal entries of a matrix that is defined as follows (we
here assume that denotes an n \Theta n nonnegative nonsingular matrix):
log a
where a is the maximum absolute value in column j of matrix A.
Minimizing the sum of the diagonal entries can be stated in terms of an
assignment problem and can be solved in O(n 3 ) time for full n \Theta n matrices or in
O(n- log n) time for sparse matrices with - entries. A bipartite weighted matching
algorithm is used to solve this problem. Applying this algorithm to C produces
vectors u, v and a transversal T , all of length n, such that
If we define
then, the scaled matrix is an I-matrix. We do not do this scaling
in our experiments but, unlike Olschowka and Neumaier, we use a sparse bipartite
weighted matching whereas they only considered full matrices.
The worst case complexity of this algorithm is O(n- log n). This is similar to BT,
although in practice it sometimes requires more work than BT. We have programmed
this algorithm, without the final scaling. We have called it algorithm MPD (for
Maximum Product on Diagonal) and compare it with BT and MC21 in the later
sections of this paper. Note that on the matrixB @
the MPD algorithm obtains the transversal f(1; 1); (2; 2); (3; 3)g whereas,
for example for Gaussian elimination down the diagonal, the transversal
would be better. Additionally, the fact that scaling does
influence the choice of bottleneck transversal could be deemed a useful characteristic.
3 Implementation of the BT algorithm
We now consider implementation details of algorithm BT from the previous section.
We will also illustrate its performance on some matrices from the Harwell-Boeing
Collection (Duff, Grimes and Lewis 1989) and the collection of Davis (1997). A code
implementing the BT algorithm will be included in a future release of the Harwell
Subroutine Library (HSL 1996).
When we are updating the transversal at stage (?) of algorithm BT, we can
easily accelerate the algorithm described in Section 2 by computing the value of the
minimum entry of the transversal, viz.
min
(i;j)2T
and then setting fflmin to this value rather than to ffl. The other issue, crucial
for efficiency, is the choice of ffl at the beginning of each step. If, at each step,
we choose ffl close to the value of fflmin then it is highly likely that we will find
a maximum transversal, but the total number of steps required to obtain the
bottleneck transversal can be very large. In the worst case, we could require
steps when the number of nonzero entries in A ffl reduces by only one at each iteration.
The algorithm converges faster if the size of the interval (fflmin; fflmax) reduces
significantly at each step. It would therefore appear sensible to choose ffl at each
step so that the interval is split into two almost equal subintervals, that is ffl -
(fflmin+fflmax)=2. However, if most of the nonzero values in A that have a magnitude
between fflmin and fflmax, are clustered near one of these endpoints, the possibility
exists that only a few nonzero values are discarded and the algorithm again will
proceed slowly. To avoid this, ffl should be chosen as the median of the nonzero
values between fflmin and fflmax.
We now consider how a transversal algorithm like MC21 can be modified to
implement algorithm BT efficiently. Before doing this, it is useful to describe briefly
how MC21 works. Each column of the matrix is searched in turn (called an original
column) and either an entry in a row with no transversal entry presently in the row
is found and this is made a transversal entry (a cheap assignment) or there is no
such entry and so the search moves to a previous column whose transversal entry is
in one of the rows with an entry in the original column. This new column is then
checked for a cheap assignment. If one exists, then this cheap assignment and the
entry in the original column in the row of the old transversal entry, replace that as
transversal entries thereby extending the length of the transversal by 1. If there is
no cheap assignment, then the search continues to other columns in a depth first
search fashion until a chain or augmenting path of the form
is found where there are no transversal entries in row i and every odd member of the
path is a transversal entry. The assignment is made in column j and the transversal
extended by 1 by replacing all transversal entries in the augmenting path with the
even members of this path.
Transversal selection algorithms like MC21 do not take into account the numerical
values of the nonzero entries. However, it is clear that the algorithm BT will
converge faster if T is chosen so that the value of its minimum entry is large. We
do this by noting that, when constructing an augmenting path, there are often
several candidates for a cheap assignment or for extending the path. MC21 makes an
arbitrary choice and we have modified it so the candidate with largest absolute value
is chosen. Note that this is a local strategy and does not guarantee that augmenting
paths with the highest values will be found.
The second modification aims at exploiting information obtained from previous
steps of algorithm BT. Algorithm BT repeatedly computes a maximum transversal
ffl ). The implementation of MC21 in the Harwell Subroutine Library
computes T from scratch, so we have modified it so that it can start with a partial
transversal. This can easily be achieved by holding the set of columns which contain
entries of the partial transversal and performing the depth search search through
that set of columns.
Of course, there are many ways to implement the choice of ffl. One alternative
is to maintain an array PTR (of length -) of pointers, such that the entries in the
first part of PTR point to those entries in A that form matrix A ffl max , the first two
parts of PTR point to the entries that form A ffl min
, and the elements in the third
part of PTR point to all the remaining (smaller) entries of A. A new value for ffl
can then be chosen directly (O(1) time) by picking the numerical value of an entry
that is pointed to by an element of the second part of PTR. After the assignment
in algorithm BT to either ffl min or ffl max , the second part of PTR has to be permuted
so that PTR again can be divided into three parts. An alternative is to do a global
(using a fast sorting algorithm) on all the entries of A, such that the elements
of PTR, point to the entries in order of decreasing absolute value. Then again PTR
can be divided into three parts as described in the previous alternative. By choosing
(in O(1) time) ffl equal to the numerical value of the entry pointed to by the median
element of the second part of PTR, ffl will divide the interval (ffl min
of close-to-equal size. Both alternatives have the advantage of being able to choose
the new ffl quickly, but require O(-) extra memory and (repeated) permutations of
the pointers.
We prefer an approach that is less expensive in memory and that matches our
transversal algorithm better. Since MC21 always searches the columns in order, we
facilitate the construction of the matrices A ffl , by first sorting the entries in each
column of the matrix A by decreasing absolute value. For a sparse matrix with
a well bounded number of entries in each column, this can be done in O(n) time.
The matrix A ffl is then implicitly defined by an array LEN of length n with LEN[j]
pointing to the first entry in column j of matrix A whose value is smaller than ffl,
which is the position immediately after the end of column j of matrix A ffl . Since the
entries of a column of A ffl are contiguous, the repeated modification of ffl by algorithm
BT, which redefines matrix A ffl , corresponds to simply changing the pointers in the
array LEN.
The actual choice of ffl at phase (?) in algorithm BT is done by selecting in
matrix A ffl min an entry that has an absolute value X such that ffl min
The columns of A ffl min
are searched until such an entry is found and ffl is set to its
absolute value. This search costs O(n) time since, for each column, we have direct
access to the entries with absolute values between ffl min and ffl max through the pointer
array LEN.
As mentioned before, by choosing ffl carefully, we can speed up algorithm BT
considerably. Therefore, instead of choosing an arbitrary entry from the matrix to
define ffl, we can choose a number (k say) of entries lying between ffl min and ffl max at
random, sort them by absolute value, and then set ffl to the absolute value of the
median element. 2 In our implementation we used
The set of matrices that we used for our experiments are unsymmetric matrices
taken from the sparse matrix collections Duff, Grimes and Lewis (1992) and Davis
(1997).
Table
3.1 shows the order, number of entries, and the time to compute a
bottleneck transversal for each matrix. All matrices are initially row and column
scaled. By this we mean that the matrix is scaled so that the maximum entry in
each row and in each column is one.
The machine used for the experiments in this and the following sections is a 166
MHz SUN ULTRA-2. The algorithms are implemented in Fortran 77.
Matrix n - Time in secs
GOODWIN 7320 324784 0.27 2.26 1.82
ONETONE2 36057 227628 2.63 0.53 0.42
Table
3.1: Times for transversal algorithms. Order of matrix is n and number of
entries - .
2 This is a technique commonly used to speed up sorting algorithms like quicksort.
4 The solution of equations by direct methods
MCSPARSE, a parallel direct unsymmetric linear system solver developed by
Gallivan, Marsolf and Wijshoff (1996), uses a reordering to identify a priori large and
medium grain parallelism and to reorder the matrix to bordered block triangular
form. Their ordering uses an initial nonsymmetric ordering that enhances the
numerical properties of the factorization, and subsequent symmetric orderings
are used to obtain a bordered block triangular matrix (Wijshoff 1989). The
nonsymmetric ordering is effectively a modified version of MC21. During each search
phase, for both a cheap assignment and an augmenting path, an entry a ij is selected
only if its absolute value is within a bound ff, 0 - ff - 1, of the largest entry in
column j. Instead of taking the first entry that is found by the search that satisfies
the threshold, the algorithm scans all of the column for the entry with the largest
absolute value.
The algorithm starts off with an initial bound ff = 0:1. If a maximum transversal
cannot be found, then the values in each column are examined to determine the
maximum value of the bound that would have allowed an assignment to take place
for that column. The new bound is then set to the minimum of the bound estimates
from all the failed columns and the algorithm is restarted. If a bound less than a
preset limit is tried and a transversal is still not found, then the bound is ignored
and the code finds any transversal. In our terminology (assuming an initial column
scaling of the matrix) this means that a maximum transversal of size n is computed
for the matrix A ff .
In the multifrontal approach of Duff and Reid (1983), later developed by Amestoy
and Duff (1989), an analysis is performed on the structure of A A T to obtain
an ordering that reduces fill-in under the assumption that all diagonal entries will
be numerically suitable for pivoting. The numerical factorization is guided by an
assembly tree. At each node of the tree, some steps of Gaussian elimination are
performed on a dense submatrix whose Schur complement is then passed to the
parent node in the tree where it is assembled (or summed) with Schur complements
from the other children and original entries of the matrix. If, however, numerical
considerations prevent us from choosing a pivot then the algorithm can proceed, but
now the Schur complement that is passed to the parent is larger and usually more
work and storage will be needed to effect the factorization.
The logic of first permuting the matrix so that there are large entries on the
diagonal, before computing the ordering to reduce fill-in, is to try and reduce the
number of pivots that are delayed in this way thereby reducing storage and work for
the factorization. We show the effect of this in Table 4.1 where we can see that even
using MC21 can be very beneficial although the BT algorithm can show significant
further gains and sometimes the use of MPD can cause further significant reduction
in the number of delayed pivots. We should add that the numerical accuracy of
the solution is sometimes slightly improved by these permutations and, in all cases,
good solutions were found.
Matrix Transversal algorithm used
None MC21 BT MPD
GOODWIN 536 1622 358 53
Table
4.1: Number of delayed pivots in factorization from MA41. An "-" indicates
that MA41 requires a real working space larger than 25 million words (of 8 bytes).
In
Table
4.2, we show the effect of this on the number of entries in the factors.
this mirrors the results in Table 4.1 and shows the benefits of the transversal
selection algorithms. This effect is seen in Table 4.3 where we can sometimes observe
a dramatic reduction in time for solution when preceded by a permutation.
Matrix Transversal algorithm used
None MC21 BT MPD
ONETONE2 14082683 2875603 2167523 2169903
GOODWIN 1263104 2673318 1791112 1282004
Table
4.2: Number of entries in the factors from MA41.
In addition to being able to select the pivots chosen by the analysis phase, the
multifrontal code MA41 will do better on matrices whose structure is symmetric
or nearly so. The transversal orderings in some cases increase the symmetry of the
resulting reordered matrix. This is particularly apparent when we have a very sparse
system with many zeros on the diagonal. In that case, the reduction in number of off-diagonal
entries in the reordered matrix has an influence on the symmetry. Notice
that, in this respect, the more sophisticated transversal algorithms may actually
cause problems since they could reorder a symmetrically structured matrix with a
zero-free diagonal, whereas MC21 will leave it unchanged.
Matrix Transversal algorithm used
None MC21 BT MPD
GOODWIN 3.64 14.63 6.00 3.56
Table
4.3: Time (in seconds on Sun ULTRA-2) for MA41 for solution of system.
5 The solution of equations by iterative methods
A large family of iterative methods, the so-called stationary methods, has the
iteration scheme
is a splitting of A, and M is chosen such that a system of the
is easy to solve. If M is invertible, (5.1) can be written as
We have
where ae is the spectral radius, so that, if jjM convergence of the
iterates x (k) to the solution A \Gamma1 b is guaranteed for arbitrary x (0) . In general, the
smaller jjM the faster the convergence. Thus an algorithm that makes
entries in M large and those in N small should be beneficial.
The most simple method of this type is the Jacobi method, corresponding to the
splitting denotes the diagonal, L the strictly
lower triangular part, and U the strictly upper triangular part of the matrix A.
However, this is not a particularly current or powerful method so we conduct our
experiments using the block Cimmino implementation of Arioli, Duff, Noailles and
Ruiz (1992), which is equivalent to using a block Jacobi algorithm on the normal
equations. In this implementation, the subproblems corresponding to blocks of rows
from the matrix are solved by a direct method similar to that considered in the
previous section. For similar reasons, it can be beneficial to increase the magnitude
of the diagonal entries through unsymmetric permutations.
We show the effect of this in Table 5.1, where we see that the number of iterations
for the solution of the problem MAHINDAS 7682). The convergence
tolerance was set to 10 \Gamma12 . The transversal selection algorithm was followed by a
reverse Cuthill McKee algorithm to obtain a block tridiagonal form. The matrix
was partitioned in 2, 4, 8, and 16 block rows and the acceleration used was a block
CG algorithm with block sizes of 1, 4, and 8.
Acceleration
# block rows None MC21 BT MPD
Table
5.1: Number of iterations of block Cimmino algorithm on MAHINDAS.
In every case, the use of a transversal algorithm accelerates the convergence of
the method, sometimes by a significant amount. However, the use of the algorithms
to increase the size of the diagonal entries does not usually help convergence further.
The convergence of block Cimmino depends on angles between subspaces which is
not so strongly influenced by the diagonal entries.
6 Preconditioning
In this section, we consider the effect of using a permutation induced by our
transversal algorithms prior to solving a system using a preconditioned iterative
method. We consider preconditionings corresponding to incomplete factorizations
of the form ILU(0), ILU(1), and ILUT and study the convergence of the iterative
methods GMRES(20), BiCGSTAB, and QMR. We refer the reader to a standard
text like that of Saad (1996) for a description and discussion of these methods. Since
the diagonal of the permuted matrix is "more dominant" than the diagonal of the
original matrix, we would hope that such permutations would enhance convergence.
We show the results of some of our runs in Table 6.1. The maximum number
of iterations was set to 1000 and the convergence tolerance to 10 \Gamma9 . It is quite
clear that the reorderings can have a significant effect on the convergence of the
preconditioned iterative method. In some cases, the method will only converge after
the permutation, in others it greatly improves the convergence. It would appear
from the results in Table 6.1 and other experiments that we have performed, that
the more sophisticated MPD transversal algorithm generally results in the greatest
reduction in the number of iterations, although the best method will depend on the
overall solution time including the transversal selection algorithm.
7 Conclusions and future work
We have described algorithms for obtaining transversals with large entries and have
indicated how they can be implemented showing that resulting programmes can be
written for efficient performance.
While it is clear that reordering matrices so that the permuted matrix has a
large diagonal can have a very significant effect on solving sparse systems by a wide
range of techniques, it is somewhat less clear that there is a universal strategy that is
best in all cases. We have thus started experimenting with combining the strategies
mentioned in this paper and, particularly for the block Cimmino approach, with
combining our unsymmetric ordering with a symmetric ordering. One example that
we plan to study is a combination with the symmetric TPABLO ordering (Benzi,
Choi and Szyld 1997).
It is possible to extend our techniques to orderings that try to increase the size
of not just the diagonal but also the immediate sub and super diagonals and then
use the resulting tridiagonal part of the matrix as a preconditioner.
One can also build other criteria into the weighting for obtaining a bipartite
matching, for example, to incorporate a Markowitz count so that sparsity would
also be preserved by the choice of the resulting diagonal as a pivot.
Finally, we noticed in our experiments with MA41 that one effect of transversal
selection was to increase the structural symmetry of unsymmetric matrices. We are
thus exploring further the use of ordering techniques that more directly attempt to
increase structural symmetry.
Acknowledgments
We are grateful to Patrick Amestoy of ENSEEIHT, Michele Benzi of CERFACS, and
Daniel Ruiz of ENSEEIHT for their assistance with the experiments on the direct
methods, the preconditioned iterative methods, and the block iterative methods
respectively. We would also like to thank Alex Pothen for some early discussions on
bottleneck transversals, and John Reid and Jennifer Scott for comments on a draft
of this paper.
Matrix and method Transversal algorithm
BiCGSTAB 123 21 11
QMR 101 26 17
QMR 72 19 12
MAHINDAS
WEST0497
Table
6.1: Number of iterations required by some preconditioned iterative methods.
--R
Threshold ordering for preconditioning nonsymmetric problems
The design and use of a frontal scheme for solving sparse unsymmetric equations
Users' guide for the Harwell-Boeing sparse matrix collection (Release I)
A production line assignment problem
Iterative methods for sparse linear systems
Symmetric orderings for unsymmetric sparse matrices
--TR
--CTR
Iain S. Duff , Jennifer A. Scott, Stabilized bordered block diagonal forms for parallel sparse solvers, Parallel Computing, v.31 n.3+4, p.275-289, March/April 2005
Kai Shen, Parallel sparse LU factorization on second-class message passing platforms, Proceedings of the 19th annual international conference on Supercomputing, June 20-22, 2005, Cambridge, Massachusetts
Olaf Schenk , Klaus Grtner, Two-level dynamic scheduling in PARDISO: improved scalability on shared memory multiprocessing systems, Parallel Computing, v.28 n.2, p.187-197, February 2002
Olaf Schenk , Andreas Wchter , Michael Hagemann, Matching-based preprocessing algorithms to the solution of saddle-point problems in large-scale nonconvex interior-point optimization, Computational Optimization and Applications, v.36 n.2-3, p.321-341, April 2007
Olaf Schenk , Klaus Grtner, Solving unsymmetric sparse systems of linear equations with PARDISO, Future Generation Computer Systems, v.20 n.3, p.475-487, April 2004
Patrick R. Amestoy , Iain S. Duff , Jean-Yves L'excellent , Xiaoye S. Li, Analysis and comparison of two general sparse solvers for distributed memory computers, ACM Transactions on Mathematical Software (TOMS), v.27 n.4, p.388-421, December 2001
Kai Shen, Parallel sparse LU factorization on different message passing platforms, Journal of Parallel and Distributed Computing, v.66 n.11, p.1387-1403, November 2006
Xiaoye S. Li, An overview of SuperLU: Algorithms, implementation, and user interface, ACM Transactions on Mathematical Software (TOMS), v.31 n.3, p.302-325, September 2005
Anshul Gupta, Recent advances in direct methods for solving unsymmetric sparse systems of linear equations, ACM Transactions on Mathematical Software (TOMS), v.28 n.3, p.301-324, September 2002
Anwar Hussein , Ke Chen, Fast computational methods for locating fold points for the power flow equations, Journal of Computational and Applied Mathematics, v.164-165 n.1, p.419-430, 1 March 2004
Jack Dongarra , Victor Eijkhout , Piotr uszczek, Recursive approach in sparse matrix LU factorization, Scientific Programming, v.9 n.1, p.51-60, January 2001
Belur V. Dasarathy, Editorial: Identity fusion in unsupervised environments, Information Fusion, v.7 n.2, p.157-160, June, 2006
Xiaoye S. Li , James W. Demmel, SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems, ACM Transactions on Mathematical Software (TOMS), v.29 n.2, p.110-140, June
Nicholas I. M. Gould , Jennifer A. Scott , Yifan Hu, A numerical evaluation of sparse direct solvers for the solution of large sparse symmetric linear systems of equations, ACM Transactions on Mathematical Software (TOMS), v.33 n.2, p.10-es, June 2007
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | sparse matrices;iterative methods;maximum transversal;preconditioning;direct methods |
328033 | New Parallel SOR Method by Domain Partitioning. | In this paper we propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning and interprocessor data communication techniques. We prove that the PSOR method has the same asymptotic rate of convergence as the Red/Black (R/B) SOR method for the five-point stencil on both strip and block partitions, and as the four-color (R/B/G/O) SOR method for the nine-point stencil on strip partitions. We also demonstrate the parallel performance of the PSOR method on four different MIMD multiprocessors (a KSR1, an Intel Delta, a Paragon, and an IBM SP2). Finally, we compare the parallel performance of PSOR, R/B SOR, and R/B/G/O SOR. Numerical results on the Paragon indicate that PSOR is more efficient than R/B SOR and R/B/G/O SOR in both computation and interprocessor data communication. | Introduction
. The successive over-relaxation (SOR) iterative method is an
important solver for large linear systems [22]. It is also a robust smoother as well as an
efficient solver of the coarsest grid equations in the multigrid method [20]. However, the
SOR method is essentially sequential in its original form. With the increasing use of
parallel computers, several parallel versions of the SOR method have been studied by a
number of authors. Adams and Ortega [1], Adams and Jordan [2], and Adams, Leveque
and Young [3] have written about the multicolor SOR method; Block, Frommer and
Mayer [5] wrote about a general block multicolor SOR method; and White [19] wrote
about the multisplitting SOR method. We also note the papers by Evans [6] and Patel
and Jordan [12], which presented two parallel SOR methods for particular parallel
computers.
The motivation for us to develop a new parallel version of SOR is to provide parallel
multigrid methods with an efficient parallel solver of the coarsest equations [21]. For
some scientific computing problems, the size of the coarsest equations of the multigrid
method is required to be large enough. On the other hand, the convergence rate of
multigrid methods is dependent of the number of grid levels, especially for singular
perturbed problems. So, with the size of the coarsest equations being properly large,
we can improve the performance of the multigrid method, along with avoiding idle
processors on parallel machines [16]. Since domain decomposition is a widely-used
approach in the implementation of parallel multigrid methods on MIMD computers, it
is attractive to develop a parallel SOR method based on the domain decomposition.
Using a domain decomposition technique, we recently proposed a simple parallel
version of the SOR method, the JSOR method [15]. Let a grid mesh domain be
partitioned into p disjoint subgrids. The JSOR method is obtained by concurrently
applying one sweep of the SOR algorithm to each subgrid using the previous iterates
as the "boundary values". By mapping each subgrid into one processor, JSOR can
be easily implemented on MIMD computers with only one step of interprocessor-date-
communication per iteration. However, due to the slow convergence speed, the JSOR
Submitted to SIAM J. Sci. Comput. and 1996 COPPER MOUNTAIN CONFERENCE ON ITERATIVE
METHODS. This work was supported in part by the National Science Foundation through
award number DMS-9105437 and ASC-9318159.
y Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York,
NY 10012, (xie@monod.biomath.nyu.edu).
rarely used as a parallel solver for linear systems. Instead, it can be an
efficient smoother for parallel multigrid methods as shown in [20] and [21].
To improve the convergence properties of JSOR, we modify the pattern of inter-processor
data communication of the JSOR method. As a result, a new type of parallel
SOR method, which is called the PSOR method, is generated from JSOR. In this paper
we prove that PSOR can have the same asymptotic rate of convergence as the sequential
method. Therefore, an efficient parallel SOR method based on domain decomposition
is obtained in this paper for solving the linear systems arising from finite deference
or finite element approximations of partial differential equations.
A model analysis of the PSOR method is presented in Section 2. For the 5-point
stencil model problem, it can be easily proved that PSOR and the SOR using the natural
row-wise ordering have the same asymptotic rate of convergence. In fact, PSOR can
be equivalent to a SOR method using a new ordering. We show that the new ordering
based on either a strip partition as shown in Fig.5 or a block partition as shown in
Fig.6 leads to a consistently ordered matrix. Hence, the SOR theory in [22] follows
that PSOR has the same asymptotic rate of convergence as the SOR using the natural
row-wise ordering.
However, for the general linear system arising from finite deference or finite element
approximations of elliptic boundary value problems, it is difficult to show a global
ordering, with which the SOR method is equivalent to the PSOR method, leads to
a consistently ordered matrix. Instead, we want to give the PSOR method a direct
analysis in this paper.
We present a general description of PSOR in Section 3. Here the PSOR method is
defined on a general domain partition. Then, in Section 4, we prove a basic convergence
theorem for the linear systems in which the corresponding matrix is symmetric positive
definite. The main result of the paper is also presented in Section 4, which shows that the
spectral radius of the PSOR iteration matrix can be the same as that of the SOR iteration
matrix for a wide class of linear systems in which the corresponding sub-matrix on each
subgrid is "consistently ordered". In Section 5, we first confirm the PSOR method
based on either a strip partition or a block partition has the same asymptotic rate of
convergence. Then, we demonstrate the parallel performance of the PSOR method on
a shared memory MIMD computer (a KSR1) and three distributed memory MIMD
computers (the Intel Delta, an Intel Paragon L38 and an IBM POWERparallel System
9076 SP2). The numerical results show that PSOR is very efficient on these distinct
multiprocessor machines. Since the multi-color SOR method is a widely-used parallel
version of the SOR method, we give a comparison between the PSOR method and the
the Red/Black SOR method for the 5-point stencil as well as the four-color SOR method
for the 9-point stencil [3] on parallel computers in Section 6. The numerical results show
that PSOR can have much better performance than both Red/Black and four-color SOR
methods in either floating operations or interprocessor date-communication.
2. The Model Problem Analysis. We consider the 5-point approximation to
the Poisson's equation on a unit square with zero boundary data
(1)
and
@\Omega h . Here u ij denote the approximation of u(x
,\Omega h and
@\Omega h are the sets of the interior and
boundary mesh points, respectively.
Fig. 1. Natural row-wise
ordering.
Fig. 2. Red-Black ordering.
Under some ordering of unknowns, (1) can be written in a matrix form
A being a (m \Gamma matrices. Obviously,
there are many ways to order the unknowns, but the natural row-wise ordering as shown
in Fig.1 and the Red-Black ordering as shown in Fig.2 are two widely-used orderings
in practice.
The SOR method using the natural row-wise ordering generates a sequence of iterates
from a given initial guess u (0)
ij and a real number ! 2 (0; 2) by the form
(2)
which is completely sequential.
The SOR method with the Red-Black ordering, which is usually called the Red-Black
method, takes the form
which can be entirely implemented in parallel on the same colors.
For the model problem, it has been shown that the SOR using the natural ordering
and the Red-Black SOR method have the same convergence rate [22].
JSOR is a simple parallel version of SOR by domain decomposition, which has
been analyzed in [15] recently. For simplicity, we suppose that the grid
mesh\Omega h is
partitioned into p
strips\Omega h;- for and each of them contains at least two
grid lines. We denote
h;- the set of the mesh points of the first grid line
h;-
=\Omega h;-
h;- . Then the JSOR method takes the form
p. Note that (3) is the same as (2).
Clearly, the above JSOR scheme can be implemented in parallel on p processors
by mapping the equations
on\Omega h;- , (3) and (4), into Processor - for
pseudo-code of JSOR on p processors is given by
JSOR Algorithm: For in parallel
Compute
on\Omega h;- by using (3) and (4).
Communicate u (k+1)
ij to other processors as needed.
However, the numerical experiments in [15] showed that the convergence rate of
JSOR is slow down almost linearly with respect to the number p of strips. Hence, it
is rarely used as a parallel solver for linear systems instead of an efficient smoother for
parallel multigrid methods [20] [21].
Notice that the interprocessor data-communication of JSOR on the strip partition
case is as follows:
a) Send u (k+1)
h;- from Processor - to Processor
ij on the last grid line
of\Omega h;- from Processor - to Processor -
If we carry out Step a) before the computation of (4)
h;- , then the update
can be available when we compute the updates u (k+1)
on the last grid line
of\Omega h;- , such that they are defined by
Consequently, a new type of parallel SOR, the PSOR method, is generated from the
method. A pseudo-code of PSOR on p processors is given by
PSOR Algorithm: For in parallel
Compute
h;- by using (3).
h;- from Processor - to Processor
Compute
h;- by using (4) and (5).
ij on the last grid line
of\Omega h;- from Processor - to Processor -
Remarkably, it can be easily shown that PSOR can have the same asymptotic rate
of convergence as the SOR iteration (2) for the model problem.
In fact, the PSOR with a strip partition is equivalent to the SOR scheme using a
new ordering as shown in Fig.5. Let be the matrix associated to the new
ordering. We then can show that A is a consistently ordered matrix as follows.
For the model problem, we have that a ij 6= 0 if and only if mesh node i is adjacent
to node j. According to the definition of a consistently ordered matrix [22], we construct
a disjoint partition of the index set
Wh,1 h,2 h,3
Wh,7 h,8 h,9
Fig. 3. A block partition of the grid
mesh
domain\Omega h .
Fig. 4. The local ordering number
of each
block\Omega h;- , which can lead to a
consistently ordered sub-matrix.
26 27 28 29
28
12192Fig. 5. Global ordering of PSOR on a strip
partition, which can lead to a consistently ordered
matrix.
Fig. 6. Global ordering of PSOR on a block
partition, which also can lead to a consistently ordered
matrix.
such that if a i;j 6= 0 and i This shows
that A is a consistently ordered matrix. Therefore, the SOR theory in [22] follows that
the PSOR with strip partition has the same asymptotic rate of convergence as the SOR
iteration (2).
Similarly, we also can define the PSOR method on a block partition as shown in
Fig.3. At each
block\Omega h;- , PSOR uses a particular local ordering of the mesh points
as shown in Fig.4 in order to communicate the data between processors efficiently. The
vector U - associated
with\Omega h;- has the following partitioning:
where the superscript t denotes a vector transpose,
We can show that such a local ordering within each
sub-grid\Omega h;- can lead to a
consistently ordered matrix. For example, we construct the following subsets for the
ordering shown in Fig.4
such that they satisfy the definition of a consistently ordered matrix.
The implementation of the PSOR method on a block partition is the same as the
strip partition case except that the first component u 1 of each U 1
- should be sent to
other processors as needed as soon as it is updated.
Like the strip partition case, PSOR on a block partition is also equivalent to the
SOR method which is with a new global ordering as shown in Fig.6. We also can show
that the new ordering can lead to a consistently ordered matrix. In fact, we have the
following subsets for the ordering shown in Fig.6:
which satisfy the definition of a consistently ordered matrix. Therefore, the PSOR on
the block partition has the same convergence rate as the SOR using the natural ordering.
3. PSOR Method. In this section, we shall give the PSOR method a general
description. We consider the solution of linear system
which is supposed to arise from a finite element or a finite different discretization of
an elliptic boundary value problem. Here n\Thetan is a sparse matrix, f and u are
real column vectors of order n.
Let\Omega h denote the set of mesh points on which the unknown vector u is defined.
We partition it into p disjoint
subgrids\Omega h;- such that
Based on this mesh partition, the linear system (7) can be written into a block form
Here U - and F - comprise, respectively, the components of u and f associated with the
mesh points on the
subgrid\Omega h;- , and A - is formed from A by deleting all rows except
those corresponding
to\Omega h;- and all columns except those corresponding
to\Omega h;- .
If the entry a ij of matrix A is not zero, then the corresponding mesh points i and j
are said to be coupled. According to this, we denote
h;- the set that comprises the
mesh points
of\Omega h;- that are coupled to the mesh points
of\Omega h;- with -. We then
h;-
=\Omega h;-
h;- , and assume
h;- is nonempty. Thus, each subgrid mesh
\Omega h;- can be partitioned into two disjoint nonempty subsets
\Omega h;-
h;-
Further, we assume that the mesh points of each
subgrid\Omega h;- are ordered in such
a way that the matrix A - related to the
subgrid\Omega h;- is consistently ordered, and the
sub-vector U - satisfies
Here U 1
- and U 2
- comprise the components of u associated with the mesh points on the
h;-
h;- , respectively. One of such examples has been illustrated in (6).
Associated with partition (9), each sub-matrix A - can be written into the following
block form
A 11
A 21
- A 22
Moreover, from the definition
h;- it follows that a nonzero matrix A - with - 6= -
can be reduced to
for -, and A
A 21
for -:
Here A 12
- and A 21
are nonzero.
In fact, let a ij be a nonzero entry of A - with -. The definition of A - follows
that i
2\Omega h;- and j
2\Omega h;- . In other words, mesh point i
of\Omega h;- is said to be coupled
to mesh point j
in\Omega h;- . According to the definition
h;- , we have that mesh point i
belongs
h;- . Similarly, noting that the symmetry of A gives a 0, we can
show mesh point j is
h;- . Hence, a ij must be an entry of A 12
- . This shows A 12
- is
nonzero. On the other hand, other sub-matrices of A - must be zero.
Consequently,
#/
A 21
A 12
A 21
Hence, (8) is written into an equivalent form
A 11
A 21
- A 22
#/
A 12
A 21
Let the k-th PSOR iterate be denoted by u^{(k)}. Suppose that u^{(k)} is given. We can construct the following linear system on Ω¹_{h,ν}:
A¹¹_ν U¹_ν = F¹_ν − A¹²_ν U^{2,(k)}_ν − Σ_{μ>ν} A¹²_{νμ} U^{2,(k)}_μ,   (11)
and define U^{1,(k+1)}_ν as the value of one SOR iterate for solving (11).
Clearly, the p linear systems in (11) are independent. So, we can compute U^{1,(k+1)}_ν in parallel on p processors. When they are available, we communicate U^{1,(k+1)}_ν for ν = 1, 2, . . . , p to other processors as needed, so that we get a linear system on Ω²_{h,ν} as follows:
A²²_ν U²_ν = F²_ν − A²¹_ν U^{1,(k+1)}_ν − Σ_{μ<ν} A²¹_{νμ} U^{1,(k+1)}_μ.   (12)
We then define U^{2,(k+1)}_ν as the value of one SOR iterate for solving (12). Obviously, the computation of U^{2,(k+1)}_ν can also be implemented in parallel on p processors. Therefore, one step of the PSOR iteration is defined by the following algorithm.
PSOR Algorithm: For ν = 1, 2, . . . , p in parallel:
1. Compute U^{1,(k+1)}_ν as the value of one SOR iterate for solving (11) on Ω¹_{h,ν}.
2. Communicate U^{1,(k+1)}_ν to other processors as needed.
3. Compute U^{2,(k+1)}_ν as the value of one SOR iterate for solving (12) on Ω²_{h,ν}.
4. Communicate U^{2,(k+1)}_ν to other processors as needed.
Let c_{ν,1} and c_{ν,2} denote the right-hand sides of (11) and (12), respectively. With the matrix expression of SOR [22], we express U^{1,(k+1)}_ν and U^{2,(k+1)}_ν as follows:
U^{j,(k+1)}_ν = (I − ωD⁻¹_{ν,j}L_{ν,j})⁻¹[(1 − ω)I + ωD⁻¹_{ν,j}U_{ν,j}] U^{j,(k)}_ν + ω(I − ωD⁻¹_{ν,j}L_{ν,j})⁻¹ D⁻¹_{ν,j} c_{ν,j},  j = 1, 2,
where D_{ν,j} is the diagonal matrix of A^{jj}_ν, I is an identity matrix, and L_{ν,j} and U_{ν,j} are respectively strictly lower and upper triangular matrices such that
A^{jj}_ν = D_{ν,j} − L_{ν,j} − U_{ν,j},  j = 1, 2.
We then write the above two equations in a single block form (13) for U^{(k+1)}_ν, where D_ν is the diagonal matrix of A_{νν}, and L_ν and U_ν are respectively strictly lower and upper triangular matrices such that A_{νν} = D_ν − L_ν − U_ν. Here we also used the fact that the off-diagonal blocks A¹²_ν and A²¹_ν of A_{νν} lie entirely in the strictly upper triangular part U_ν and the strictly lower triangular part L_ν, respectively.
Since the p block equations in (13) are independent, we can number the subgrids {Ω_{h,ν}} in an arbitrary ordering to form a global matrix A = (A_{μν}). For simplicity, in the following, we still denote it by A.
Further, we introduce the following notation:
M = [ 0 ; A_{21} 0 ; A_{31} A_{32} 0 ; . . . ; A_{p1} A_{p2} · · · 0 ],  N = [ 0 A_{12} · · · A_{1p} ; 0 · · · A_{2p} ; . . . ; 0 ],
and
B = diag(L_1, . . . , L_p),  C = diag(U_1, . . . , U_p),  D = diag(D_1, . . . , D_p).
Using these, we can write (13) in the block form
u^{(k+1)} = (1 − ω)u^{(k)} + ωD⁻¹[(B + M)u^{(k+1)} + (C + N)u^{(k)} + f],   (14)
or, equivalently,
[I − ωD⁻¹(B + M)] u^{(k+1)} = [(1 − ω)I + ωD⁻¹(C + N)] u^{(k)} + ωD⁻¹f.   (15)
Since the determinant of I − ωD⁻¹(B + M) is 1, the matrix I − ωD⁻¹(B + M) is nonsingular. So we can solve (15) for u^{(k+1)}, obtaining a matrix expression of the PSOR iterate as follows:
u^{(k+1)} = M_PSOR(ω) u^{(k)} + F̃,   (17)
where
M_PSOR(ω) = [I − ωD⁻¹(B + M)]⁻¹[(1 − ω)I + ωD⁻¹(C + N)]   (18)
and F̃ = ω[I − ωD⁻¹(B + M)]⁻¹D⁻¹f.   (19)
We refer to M_PSOR(ω) as the PSOR iteration matrix.
matrix.
the SOR method reduces to the Gauss-Seidel method. Similarly, we refer
to the PSOR with as the PGS (parallel Gauss-Seidel) method. From (17) it
follows the PGS iteration matrix
4. PSOR Analysis. In this section we study the convergence of the PSOR method. We denote by ρ(A) the spectral radius of matrix A, which is defined as the maximum of the moduli of the eigenvalues of A. The determinant of a matrix A is denoted det(A). It is well known that a necessary and sufficient condition for the convergence of a linear stationary iteration u^{(k+1)} = G u^{(k)} + c is that ρ(G) < 1. Here G is the iteration matrix. Hence, we only need to study ρ(M_PSOR(ω)) for the convergence of the PSOR method.
We first establish a necessary condition for the convergence of the PSOR method.
Theorem 1. Let M_PSOR(ω) be the PSOR iteration matrix. If ρ(M_PSOR(ω)) < 1, i.e., the PSOR method is convergent, then 0 < ω < 2.
Proof. By a well-known theorem of linear algebra, we know that det(A) is equal to the product of the eigenvalues of matrix A. With the definitions of B, C, M, and N, we have
det([I − ωD⁻¹(B + M)]⁻¹) = 1
and
det[(1 − ω)I + ωD⁻¹(C + N)] = (1 − ω)ⁿ.
Thus, det(M_PSOR(ω)) = (1 − ω)ⁿ. Hence, if ρ(M_PSOR(ω)) < 1, then |1 − ω|ⁿ = |det(M_PSOR(ω))| ≤ ρ(M_PSOR(ω))ⁿ < 1, so that 0 < ω < 2. This completes the proof of Theorem 1.
A sufficient convergence condition for the PSOR method is given in the following theorem.
Theorem 2. Let M_PSOR(ω) be the PSOR iteration matrix. Suppose that A is a symmetric positive definite matrix. Then ρ(M_PSOR(ω)) < 1 for 0 < ω < 2.
Proof. Let x be an eigenvector of M_PSOR(ω), and let λ be its corresponding eigenvalue, so that M_PSOR(ω)x = λx. By (18), this equality can be written as
[(1 − ω)I + ωD⁻¹(C + N)] x = λ [I − ωD⁻¹(B + M)] x.   (21)
Multiplying both sides of (21) on the left by x* (the conjugate transpose of x) and by D, we find
λ = [(1 − ω)δ + ω(r_1 − i r_2)] / [δ − ω(r_1 + i r_2)],
where δ = x*Dx > 0, r_1 and r_2 are real numbers with x*(B + M)x = r_1 + i r_2, and i = √−1. Clearly, the symmetry of matrix A gives (C + N) = (B + M)^t, so that x*(C + N)x = r_1 − i r_2. Thus,
|λ|² − 1 = { [(1 − ω)δ + ωr_1]² − (δ − ωr_1)² } / |δ − ω(r_1 + i r_2)|² = −ω(2 − ω) δ (δ − 2r_1) / |δ − ω(r_1 + i r_2)|².
By the positive definiteness of A, we have that for any x ≠ 0, x*Ax = δ − 2r_1 > 0. Hence, |λ| < 1 for 0 < ω < 2, and it follows that ρ(M_PSOR(ω)) < 1. This completes the proof of Theorem 2.
We now turn to showing that PSOR and SOR can have the same asymptotic convergence rate. Essentially, we only need to assume that each sub-matrix A_{νν}, ν = 1, 2, . . . , p, is consistently ordered in the following theorems. Moreover, due to the independence of the p block equations in (13), we can choose an ordering of the subgrids {Ω_{h,ν}} as shown in Fig. 7 so that the corresponding global matrix A = (A_{μν}) is consistently ordered too. Hence, in the following we can assume that A is consistently ordered.
Fig. 7. A global ordering which results in a matrix partition A = (A_{μν}) such that both A and all of the sub-matrices A_{νν} are consistently ordered for the 5-point stencil.
Similar to Theorem 3.3 (p. 147) of the SOR theory in [22], we have the following theorem for PSOR.
Theorem 3. Let A have the block partition A = (A_{μν}), μ, ν = 1, 2, . . . , p. Let α and k be real numbers. If A and A_{νν} for all ν = 1, 2, . . . , p are consistently ordered, then for α ≠ 0 and for all k,
Δ = det[ αD⁻¹(B + M) + α⁻¹D⁻¹(C + N) − kI ]
is independent of α.
Proof. Let σ be a permutation defined on the integers 1, 2, . . . , n, and define
T_B = {(i, j) : position (i, j) carries an entry of D⁻¹B},  T_C = {(i, j) : position (i, j) carries an entry of D⁻¹C},
T_M = {(i, j) : position (i, j) carries an entry of D⁻¹M},  T_N = {(i, j) : position (i, j) carries an entry of D⁻¹N}.
The general term of Δ is
t(σ) = ± a_{1σ(1)} a_{2σ(2)} · · · a_{nσ(n)} α^{n_B + n_M − n_C − n_N} k^{n − (n_B + n_M + n_C + n_N)},
where n_B, n_M, n_C, and n_N are, respectively, the number of values of i such that (i, σ(i)) ∈ T_B, such that (i, σ(i)) ∈ T_M, such that (i, σ(i)) ∈ T_C, and such that (i, σ(i)) ∈ T_N. Since t(σ) vanishes if there exists one a_{iσ(i)} = 0, we only need to consider the terms with a_{iσ(i)} ≠ 0 for all i.
Let n_L and n_U be, respectively, the number of values of i such that i > σ(i) and such that i < σ(i). Obviously, we have
n_L = n_M + Σ_m l_m,  n_U = n_N + Σ_m u_m,
where l_m and u_m are, respectively, the number of values of i such that i > σ(i) and such that i < σ(i), as well as such that a_{iσ(i)} is an entry of A_{mm}.
Since A and A_{νν} for all ν = 1, 2, . . . , p are consistently ordered, from the proof of Theorem 3.3 (p. 147) in [22] we have n_L = n_U and l_m = u_m for each m. It follows that
n_B = Σ_m l_m = Σ_m u_m = n_C,
and then n_M = n_L − n_B = n_U − n_C = n_N. Therefore,
n_B + n_M − n_C − n_N = 0.
This shows that t(σ) is independent of α, and the proof of Theorem 3 is completed.
Let D be the diagonal matrix of A, and let L and U be respectively strictly lower and upper triangular matrices such that A = D − L − U. Obviously, we have
L = B + M,  U = C + N.
From [22] we know that the SOR iteration matrix is
M_SOR(ω) = (I − ωD⁻¹L)⁻¹[(1 − ω)I + ωD⁻¹U],
and the Jacobi iteration matrix is M_J = D⁻¹(L + U).
The following theorem shows that PSOR and SOR have the same asymptotic convergence rate.
Theorem 4. Let M_J, M_SOR, and M_PSOR be the iteration matrices of the Jacobi, SOR, and PSOR methods, respectively. We assume that matrix A has a block partition A = (A_{μν}) such that both A and the sub-matrices A_{νν} for all ν = 1, 2, . . . , p are consistently ordered. If M_J has real eigenvalues and ρ(M_J) < 1, then ρ(M_PSOR(ω)) = ρ(M_SOR(ω)) for 0 < ω < 2; in particular, ρ(M_PSOR(ω_b)) = ω_b − 1, where ω_b is the optimal relaxation parameter, which has the following expression:
ω_b = 2 / (1 + √(1 − ρ(M_J)²)).   (24)
Proof. Let λ ≠ 0. With (18) and (19), we have
det(λI − M_PSOR(ω)) = det([I − ωD⁻¹(B + M)]⁻¹) det[λ(I − ωD⁻¹(B + M)) − (1 − ω)I − ωD⁻¹(C + N)]
 = det[(λ + ω − 1)I − ωλD⁻¹(B + M) − ωD⁻¹(C + N)]
 = det[(λ + ω − 1)I − ωλ^{1/2}(λ^{1/2}D⁻¹(B + M) + λ^{−1/2}D⁻¹(C + N))]
 = det[(λ + ω − 1)I − ωλ^{1/2}D⁻¹(B + M + C + N)],
using Theorem 3 in the last step.
If λ is an eigenvalue of M_PSOR(ω), then
det(λI − M_PSOR(ω)) = 0,
which implies that
μ = (λ + ω − 1) / (ωλ^{1/2})   (25)
is an eigenvalue of M_J. Conversely, if μ > 0 is an eigenvalue of M_J, then there always exists an eigenvalue λ of M_PSOR(ω) which satisfies (25), and the value of λ can be computed from the equation
(λ + ω − 1)² = λω²μ².   (26)
From [22] we know that the eigenvalues λ of the SOR iteration matrix M_SOR(ω) also satisfy equation (26). Hence, the sets of the eigenvalues of both M_PSOR(ω) and M_SOR(ω) are the same. Therefore, we have
ρ(M_PSOR(ω)) = ρ(M_SOR(ω)).
Noting that ρ(M_SOR(ω)) has been shown in [22] to attain its minimum ω_b − 1 at the value ω_b given by (24), we conclude that (24) holds for the PSOR method also. This completes the proof of Theorem 4.
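Theorem 4 can be checked numerically for small consistently ordered model matrices: compute ρ(M_J), form ω_b from (24), and compare the spectral radii of the SOR and PSOR iteration matrices. A minimal sketch (ours):

```python
import numpy as np

def jacobi_radius(A):
    D = np.diag(np.diag(A))
    MJ = np.linalg.solve(D, D - A)          # M_J = D^{-1}(L + U)
    return max(abs(np.linalg.eigvals(MJ)))

def optimal_omega(A):
    # Young's optimal relaxation parameter (24); assumes M_J has real
    # eigenvalues and rho(M_J) < 1.
    rho = jacobi_radius(A)
    return 2.0 / (1.0 + np.sqrt(1.0 - rho**2))

# Example: the 1-D Laplacian, for which rho(M_J) = cos(pi/(n+1)); at
# w = optimal_omega(A) the SOR/PSOR spectral radius should equal w - 1.
n = 20
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
w = optimal_omega(A)
```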
5. Numerical Examples. In this section, we first confirm that the PSOR method on either a strip partition or a block partition has the same asymptotic convergence rate as the SOR method. We then highlight the parallel performance of the PSOR method on four large parallel MIMD machines: a KSR1, the Intel Delta, an Intel Paragon L38, and an IBM POWERparallel System 9076 SP2. The numerical results demonstrate the high efficiency of the PSOR method on parallel MIMD computers.
Two PSOR programs were written in Pfortran [4] and MPI (a Message-Passing Interface standard) [17], respectively. The PSOR programs were compiled with optimization level -O2 on the KSR1 and the SP2 and -O4 on the Intel Delta and the Paragon, respectively. In the Pfortran program, the CPU time was computed by using the dclock() system routine on the Intel Delta and the Paragon, the user_seconds() system routine on the KSR1, and mclock() on the SP2. In the MPI program, we used the MPI_WTIME() function of MPI.
In the figures and tables, we have used the following annotations. Linear Time for the p-processor case is defined by T(1)/p, where T(1) is the CPU time needed to solve the given problem on one processor; this stands for the ideal case. Total Time represents the CPU time spent from the beginning of the iteration until either (28) is satisfied or the limit on the number of iterations is reached. It does not include the CPU time spent on the calculation of f, the initial guess, and the input/output of data. Comm. Time represents the CPU time spent on the interprocessor data communication. Comp. Time represents the CPU time spent on the computation of the iteration, including the L²-norm of the residual. Other Time is defined as Total Time − Comm. Time − Comp. Time, which includes the time spent on the global summations in the computation of (28). This also indicates the accuracy of our time measurements.
We first considered the model problem (1) with f(x, y) = 2π² sin πx sin πy. Clearly, u = sin πx sin πy is the exact solution of the model problem. For simplicity, we fixed the mesh size h = 1/513, the relaxation parameter ω, and the initial guess u^{(0)} = 0 for all of the numerical experiments in this section. In this experiment, we also fixed the number of PSOR iterations at 1000. With the Pfortran program, we implemented PSOR on 1, 4, 16, 64, and 256 processors of the Paragon, respectively. In the strip case, the grid mesh was partitioned into p strips of equal size when PSOR was implemented on p processors. In the block case, the grid mesh was divided into 2 × 2, 4 × 4, 8 × 8, and 16 × 16 blocks of equal size when PSOR was implemented on 4, 16, 64, and 256 processors, respectively.
Table 1 shows that PSOR on either a strip partition or a block partition has the same asymptotic rate of convergence as the corresponding sequential SOR method. Due to the different computing formulas of PSOR for different groups of processors, the relative residual
‖f − Au^{(1000)}‖₂ / ‖f‖₂
and the relative error
‖u − u^{(1000)}‖₂ / ‖u‖₂
would be a little different. Here ‖·‖₂ is the L² norm, u^{(1000)} is the 1000th iterate of PSOR, and u is the exact solution on the grid mesh. From the table we also see that the performance of PSOR on the strip partition is better than that of PSOR on the block partition. Hence, we only consider the PSOR method on a strip partition in the remainder of the paper.
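The two quantities above are straightforward to compute; in NumPy notation (our sketch):

```python
import numpy as np

def relative_residual(A, u, f):
    # ||f - A u||_2 / ||f||_2, the relative residual of an iterate u.
    return np.linalg.norm(f - A @ u) / np.linalg.norm(f)

def relative_error(u, u_exact):
    # ||u_exact - u||_2 / ||u_exact||_2, measured against the grid solution.
    return np.linalg.norm(u_exact - u) / np.linalg.norm(u_exact)
```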
We next considered the performance of PSOR on different parallel machines. We implemented the PSOR method using a relative-residual tolerance, which led to the following stopping criterion (28) for the PSOR iteration.
Table 1
The parallel performance of PSOR on either a block partition or a strip partition for solving the 5-point stencil of the Poisson equation with h = 1/513.
[Columns: Processors; Total Time (strips, blocks); Comm. Time (strips, blocks); residuals (strips, blocks). Table data not recovered from the extraction.]
Fig. 8. The parallel performance of the PSOR method for solving (1) with f(x, y) = 1 on a KSR1. The floating point performance of the SOR program (i.e., p = 1) on the KSR1 is 6.58 Mflops.
Fig. 9. A parallel performance of the PSOR method for solving (1) with f(x, y) = 1 on the Intel Delta. The floating point performance of the SOR program (i.e., p = 1) on the Intel Delta is 6.09 Mflops.
[Both plots: Time in Seconds vs. Number of Processors; Total Time solid, Linear Time dotted, with Comp. Time, Comm. Time, and Other Time also shown.]
Fig. 10. A parallel performance of the PSOR method for solving (1) with f(x, y) = 1 on the Paragon L38. The floating point performance of the SOR program (i.e., p = 1) on the Paragon is 7.678 Mflops.
Fig. 11. A parallel performance of the PSOR method for solving (1) with f(x, y) = 1 on the SP2. The floating point performance of the SOR program (i.e., p = 1) on the SP2 is 40.63 Mflops.
[Both plots: Time in Seconds vs. Number of Processors; Total Time solid, Linear Time dotted, with Comp. Time, Comm. Time, and Other Time also shown.]
Fig. 12. A comparison of the performances of the PSOR method on the KSR1 and the Intel Delta.
Fig. 13. A comparison of the performances of the PSOR method on the Paragon L38 and the SP2.
[Both plots: Time in Seconds vs. Number of Processors; Total Time solid, Comm. Time dotted.]
As a result, a lot of computer time in checking the convergence of PSOR could be saved.
The test problem was the model problem (1) with f(x, y) = 1. The same codes in Pfortran were run on the KSR1 and the Intel Delta, while the same codes in MPI were run on the Paragon and the SP2. The results are reported in Fig. 8 to Fig. 11.
Fig. 8 to Fig. 11 provide the CPU time for the PSOR method on the KSR1, the Intel Delta, the Paragon, and the SP2, respectively, as a function of the number of processors. The total numbers of PSOR iterations satisfying the convergence criterion (28) for 1, 2, 4, 8, 16, 32, and 64 processors are 1027, 1026, 1025, 1023, 1018, 1008, and 942, respectively. From the figures we see that Comp. Time is almost the same as Linear Time, and both Comm. Time and Other Time are very small. These results demonstrate that
the PSOR method is an efficient parallel version of the SOR method using the optimal
relaxation parameter.
Fig. 12 presents a comparison of the performance of PSOR on the KSR1 and the Intel Delta. From this we see that the PSOR method has largely the same performance on these two machines of different architectures. We also compare the performance of PSOR on the Paragon and the SP2 in Fig. 13. The SP2 is the latest distributed memory machine from IBM. From the figure we see that it is more powerful in both floating-point operations and interprocessor message passing than the Paragon.
6. Comparison of PSOR with Multicolor SOR. In this section we present a
comparison of the parallel performance of the PSOR method with the multicolor SOR
method on the Paragon. The multicolor SOR method for solving the 5-point stencil (1)
is the Red-Black SOR method, which has been described in Section 2.
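For reference, a minimal (sequential) Red-Black SOR sweep for the 5-point stencil can be sketched as follows; this is our own illustration, not the strip-partitioned code timed below. Each color class can be updated in parallel because same-colored points are not coupled by the stencil.

```python
import numpy as np

def red_black_sor(u, f, h, omega, sweeps=1):
    # u and f are (m+2) x (m+2) arrays including boundary rows/columns;
    # interior points are colored by the parity of i + j.
    m = u.shape[0] - 2
    for _ in range(sweeps):
        for color in (0, 1):                       # red half-sweep, then black
            for i in range(1, m + 1):
                for j in range(1, m + 1):
                    if (i + j) % 2 == color:
                        gs = 0.25 * (u[i-1, j] + u[i+1, j] +
                                     u[i, j-1] + u[i, j+1] + h*h*f[i, j])
                        u[i, j] += omega * (gs - u[i, j])
    return u
```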
We also considered the 9-point approximation to the Poisson equation, with the discrete equations imposed on Ω_h and the boundary values prescribed on ∂Ω_h.
Red, Black, Green, Orange
Fig. 14. Four-color ordering for the 9-point stencil.
Fig. 15. A comparison of the parallel performance of PSOR with the Red/Black SOR method for the 5-point stencil.
Fig. 16. A comparison of the parallel performance of PSOR with the four-color SOR method for the 9-point stencil.
[Both plots, on the Paragon: Time in Seconds vs. Number of Processors; Total Time solid, Comm. Time dotted.]
The multicolor SOR method involves four colors for the 9-point stencil. There are many distinct four-color orderings, but we only considered one of them, which is illustrated in Fig. 14 with the ordering R/B/G/O. This ordering has been shown to be equivalent to the natural ordering in [2].
For simplicity, we fixed the total number of iterations at 1000 for both PSOR and the multicolor SOR. We also fixed the mesh size and the relaxation parameter for the numerical experiments. The results are reported in Tables 2 and 3, and Fig. 15 and Fig. 16 plot the data in Tables 2 and 3. The Residual entries in Tables 2 and 3 were computed by using ‖f − Au^{(1000)}‖₂. Note that the PSOR method on one processor reduces to the sequential SOR method. Since the PSOR iteration formula differs across different groups of processors, different residuals were obtained. In contrast, the multicolor SOR had the same residual for all groups of processors.
Fig. 15 and Fig. 16 clearly show that PSOR has a much better parallel performance than the multicolor SOR. Here the Red-Black and four-color SOR methods were programmed based on a strip partition, which is one of the efficient ways to program them on a MIMD computer. Due to this, they required two and four steps of interprocessor
Table 2
A comparison of the parallel performance of PSOR with the Red-Black SOR method for solving the 5-point stencil of the Poisson equation (on the Paragon). Here the L₂-norm of the residual of the Red-Black SOR was 2.57 × 10⁻⁵.
[Columns: Processors | Total Time (PSOR, Red-Black) | Comm. Time (PSOR, Red-Black) | Residual (PSOR). Table data not recovered from the extraction.]
Table 3
A comparison of the parallel performance of PSOR with the Four-Color SOR method for solving the 9-point stencil of the Poisson equation (on the Paragon). Here the L₂-norm of the residual of the Four-Color SOR was 4.88 × 10⁻⁶.
Processors | Total Time (PSOR, Four-Color) | Comm. Time (PSOR, Four-Color) | Residual (PSOR)
8          | 43.2, 78.5                    | 1.76, 6.5                     | 2.61 × 10⁻⁶
128        | 4.4, 11.2                     | 1.82, 6.52                    | 5.65 × 10⁻⁶
data communication per iteration, respectively. In contrast, each PSOR iteration only exchanges interprocessor data once for both the 5-point and 9-point stencils. Hence, PSOR spent much less CPU time on interprocessor data communication than the multicolor SOR. Moreover, our experiments indicated that PSOR also took much less CPU time in floating-point operations than the multicolor SOR in either the sequential case (i.e., the one-processor case) or the parallel case.
Acknowledgments. The author would like to thank his advisor, Professor L. Ridgway Scott, for valuable discussions and his continuous support. He is also grateful to Professor Tamar Schlick for her comments and support. Access to the Intel Delta and a Paragon L38 from Intel Corporation and an IBM POWERparallel System 9076 SP2 has been provided by the Center for Advanced Computing Research at Caltech and the Theory Center.
--R
A multi-color SOR method for parallel computation
Analysis of the SOR iteration for the 9-point Laplacian
A Parallel Dialect of Fortran
Coloring schemes for the SOR method on local memory parallel computers.
Parallel SOR Iterative Methods
Farhat: Thesis in Civil Engineering at Berkeley
Determination of stripe structures for finite element matrices
The SOR method on parallel computers.
Solution of partial differential equations on vector and parallel
method on a multiprocessor
Designing Efficient Algorithms for Parallel Computers
Purdue University
Parallel Linear Stationary Iterative Methods
Parallel U-Cycle Multigrid Method
University of Tennessee
Matrix Iterative Analysis
Multisplittings and Parallel Iterative Methods
Mechanics and Engineering
Parallel Multiplicative Smoother and Analysis
Iterative Solution of Large Linear System
--TR | SOR;JSOR;nonmigratory permutation;convergence analysis;PSOR;multicolor SOR;parallel computing |
328127 | Performance of Greedy Ordering Heuristics for Sparse Cholesky Factorization. | Greedy algorithms for ordering sparse matrices for Cholesky factorization can be based on different metrics. Minimum degree, a popular and effective greedy ordering scheme, minimizes the number of nonzero entries in the rank-1 update (degree) at each step of the factorization. Alternatively, minimum deficiency minimizes the number of nonzero entries introduced (deficiency) at each step of the factorization. In this paper we develop two new heuristics: modified minimum deficiency (MMDF) and modified multiple minimum degree (MMMD). The former uses a metric similar to deficiency while the latter uses a degree-like metric. Our experiments reveal that on the average, MMDF orderings result in 21% fewer operations to factor than minimum degree; MMMD orderings result in 15% fewer operations to factor than minimum degree. MMMD requires on the average 7--13% more time than minimum degree, while MMDF requires on the average 33--34% more time than minimum degree. | Introduction
. It is well known that ordering the rows and columns of a matrix
is a crucial step in the solution of sparse linear systems using Gaussian elimination. The
ordering can drastically affect the amount of fill introduced during factorization and hence
the cost of computing the factorization [7, 13]. When the matrix is symmetric and positive
definite, the ordering step is independent of the numerical values and can be performed
prior to numerical factorization. The ideal choice is an ordering that introduces the least
fill, but the problem of computing such an ordering is NP-complete [22]. Consequently,
almost all ordering algorithms are heuristic in nature. Examples include reverse Cuthill-McKee
[5, 6, 8], automatic nested dissection [9], and minimum degree [18].
A greedy ordering heuristic numbers columns successively by selecting at each step a
column with the optimal value of a metric. In the minimum degree algorithm of Tinney
and Walker [21] the metric is the number of nonzero entries (and hence operations) in the
update associated with a column in a right-looking, sparse Cholesky factorization.
The algorithm can be stated in terms of vertex eliminations in a graph representing the
matrix [18]; now the metric translates into the degree of a vertex. Efficient implementations
of minimum degree are due to George and Liu [11, 12]. The minimum degree algorithm
with multiple eliminations (MMD) due to Liu [16] has become the method of choice in
the last decade. Multiple independent vertices are eliminated at a single step in MMD
to reduce the ordering time. Recently, Amestoy, Davis, and Duff [1] have developed the
"approximate minimum degree" algorithm (AMD), which uses an approximation to the
degree to further reduce the ordering time. Berman and Schnitger [4] have analytically
shown that the minimum degree algorithm can, in some rare cases, produce a poor ordering.
However, experience has shown that the minimum degree algorithm and its variants are
Work was supported in part by the Defense Advanced Research Projects Agency under contracts
DAAL03-91-C-0047, ERD9501310, and Xerox-MP002315, and by the Applied Mathematical Sciences Re-search
Program, Office of Energy Research, U.S. Department of Energy under contract DE-AC05-96OR22464
with Lockheed Martin Energy Research Corp., and by the National Science Foundation under grants NSF-
ASC-94-11394 and NSF-CDA-9529459.
y Computer Science and Mathematics Division, Oak Ridge National Laboratory, P. O. Box 2008, Oak
Ridge, TN 37831-6367 (ngeg@ornl.gov).
z 107 Ayres Hall, Department of Computer Science The University of Tennessee, Knoxville, TN 37996-
1301 (padma@cs.utk.edu).
effective heuristics for generating fill-reducing orderings. In fact, only some very recent
separator-based schemes [3, 14, 15] have outperformed MMD for certain classes of sparse
matrices. Some of these new schemes are hybrids that use the minimum degree algorithm
to order some of the columns.
A greedy ordering heuristic that was also proposed by Tinney and Walker [21], but
has largely been ignored, is the minimum deficiency (or minimum fill) algorithm. The
minimum deficiency algorithm minimizes the number of fill entries introduced at each step
of sparse Cholesky factorization (or deficiency in graph terminology). Although the metrics
look similar, the minimum deficiency and minimum degree algorithms are different. For
example, the deficiency could well be zero even when the degree is not. There are two reasons
why the minimum deficiency algorithm has not become as popular as the minimum degree
algorithm [7]. First, the minimum deficiency algorithm is typically much more expensive
than the minimum degree algorithm. Second, it has been believed that the quality of
minimum deficiency orderings is not much better than that of minimum degree orderings [7].
Results by Rothberg [19] (and also by us [17]) demonstrate that minimum deficiency leads
to significantly better orderings than minimum degree. However, current implementations
of the minimum deficiency algorithm require substantially more time than MMD.
In this paper, we develop two greedy heuristics that are less expensive to compute than
minimum deficiency, but compute better orderings than minimum degree on average. The
heuristics are variants of minimum deficiency and minimum degree. In Section 2, we provide
background material and introduce some special notation to help describe our heuristics. In
Section 3 we develop our "modified minimum deficiency"(MMDF) and "modified multiple
minimum degree" (MMMD) heuristics. We also show that the two heuristics can be implemented
using the update mechanism in the "approximate degree" scheme of Amestoy, Davis,
and Duff [1]. In Section 4 we provide empirical results on the performance of MMDF and
MMMD. Section 5 contains some concluding remarks. The remaining part of this section
describes recent related work.
Related work. Rothberg has investigated metrics for greedy ordering schemes based on
approximations to the deficiency [19]. His work and our work [17] were done independently
of each other. 1 Rothberg [19]:
- shows that the minimum deficiency algorithm is significantly superior to MMD in terms of the number of operations required to compute the Cholesky factor,
- develops three "approximate minimum fill" (AMF) heuristics based on approximations to the deficiency, and
- concludes that heuristic AMF1 is the best among the three; on the average, AMF1 orderings require 14% fewer operations to factor than MMD orderings.
In our earlier report [17], we:
- establish that many of the techniques used in efficient implementations of the minimum degree algorithm (namely, indistinguishable vertices, mass elimination, and outmatching) also apply to the minimum deficiency algorithm,
- corroborate Rothberg's empirical results establishing the superior performance of the minimum deficiency metric,
- develop our "modified minimum deficiency" (MMDF) and "modified multiple minimum degree" (MMMD) heuristics, and
- show that MMDF (MMMD) orderings require 17% (15%) fewer operations to factor than MMD (on the average).
It is difficult to compare the results in [17] and [19] directly because the test suites used
1 Raghavan and Rothberg presented their results independently at the Second SIAM Conference on
Sparse Matrices in 1996.
in the two papers are substantially different. The aggregate measures reported in the two
papers are also different. Moreover they were based on performance data obtained from
different sets of initial numberings.
More recently, in a revision of [19], Rothberg and Eisenstat have developed two new
metrics for greedy ordering schemes [20]. Rothberg and Eisenstat [20]:
- develop two heuristics, "approximate minimum mean fill" (AMMF) and "average minimum increase in neighbor degree" (AMIND), and
- show that AMMF orderings require 22% (median) to 25% (geometric mean) fewer operations to factor than MMD orderings; AMIND orderings require 20% (median) to 21% (geometric mean) fewer operations to factor than MMD orderings.
This paper is a shorter version of [17]. The test suite in this paper is substantially
different from that in the original paper. In an attempt to compare the performance of our
heuristics with that of the AMF1 (called AMF in [20]), AMMF, and AMIND heuristics in [19]
and [20], we now use nearly the same test suite as in those two papers. Four of the matrices
in [19] and [20] are proprietary, and therefore are not available to us. To report performance
relative to MMD, we had earlier used the "median of ratios" over 7 initial orderings (6
random orderings and the ordering in which the matrix was given to us). In this paper, we
use the "ratio of medians" over 11 random initial orderings (as in [19, 20]). As we will see
later in the paper, our MMDF and MMMD heuristics produce better orderings than MMD.
The MMDF and MMMD orderings are very competitive with those produced by AMF1,
AMMF, and AMIND. What we see as interesting is the development of five different metrics
that can produce orderings that are significantly better than those produced by MMD. We
had commented in our earlier report [17] that there could well be other relatively inexpensive
greedy strategies that outperform the ones known at that time. The performance of newer
schemes AMMF and AMIND [20] supports our prediction; AMMF seems to be slightly
better than our MMDF. As we discuss in Section 5, we still believe that there may well be
other greedy metrics that perform better than the five developed so far.
2. Implementing Greedy Ordering Heuristics. The efficient implementation of
greedy ordering schemes is based on a compact realization of the graph-model of Cholesky
factorization [18]. In this section, we provide a brief description of elimination graphs and
quotient graphs, and introduce an example, together with some notation used to describe
our greedy heuristics. We also describe minimum degree and minimum deficiency schemes
using quotient graphs.
Throughout, we use terminology common in sparse matrix factorization. The reader is
referred to the book by George and Liu [13] for details.
Elimination graphs and quotient graphs. Sparse Cholesky factorization can be
modeled using elimination graphs [18]. Let G denote an elimination graph. At the beginning,
G is initialized to G 0 , the graph of a sparse symmetric positive definite matrix [13]. At each
step a vertex and its incident edges are removed from G. If x is the vertex removed, edges
are added to G so that the neighbors of x become a clique. Thus cliques are formed as the
elimination proceeds.
A quotient graph [10, 13] is a compact representation of an elimination graph. It
requires no more space than that for G 0 [13]. Unlike the elimination graph, vertices are not
explicitly removed and neither are cliques formed explicitly. Instead vertices are grouped in
"supervertices" and marked as "eliminated" or "uneliminated."
Let G denote the current elimination graph. Let S be the set of vertices that have been
eliminated. Consider the subgraph induced by S in G 0 . This subgraph will contain one or
more connected components (which are also called domains). In the quotient graph, the
vertices in each connected component are coalesced into an eliminated supervertex . Note
that the cliques created by the elimination process in G are easy to identify in a quotient
graph. Each such clique contains all (uneliminated) neighbors of an eliminated supervertex.
It is well known that as the elimination proceeds, some (uneliminated) vertices will
become indistinguishable from each other; that is, they share essentially the same adjacency
structure in the current elimination graph G. Now each set of indistinguishable vertices is
coalesced into an uneliminated supervertex in the quotient graph. Observe that all vertices
in an uneliminated supervertex have the same degree (or deficiency) and hence can be
"mass-eliminated" when the degree (or deficiency) becomes minimum [13, 17]. Furthermore,
vertices in an uneliminated supervertex form a clique.
Fig. 1. An example of a quotient graph; X is the most recently eliminated supervertex. Supervertices enclosed in a curve form a clique; other partial cliques used in MMDF are shown using dotted curves.
Thus a quotient graph can be viewed as a graph containing two kinds of supervertices:
uneliminated supervertices and eliminated supervertices. Each uneliminated supervertex
is a clique of indistinguishable vertices of the corresponding elimination graph G. Each
eliminated supervertex is a subset of the vertices that have been eliminated from the original
graph G 0 . Vertices in the set of uneliminated supervertices adjacent to the same eliminated
supervertex in the quotient graph form a clique in the elimination graph. For simplicity, we
will say that these uneliminated supervertices form a "clique" in the quotient graph. Two
uneliminated supervertices are said to be "neighbors" in the quotient graph when there is an
edge between them or they are adjacent to the same eliminated supervertex in the quotient
graph. Thus vertices belonging to one uneliminated supervertex are adjacent to those of the
other supervertex in the corresponding elimination graph.
An example and some notation. Figure 1 contains an example of a quotient graph.
Eliminated supervertices are represented by rectangles and uneliminated supervertices are
represented by circles. Assume that an uneliminated supervertex has been selected according
to the greedy criterion, and the quotient graph has been transformed. This gives a new
eliminated supervertex in the quotient graph. Denote the new eliminated supervertex by X;
in the remaining part of this paper X will be referred to as the "most recently eliminated
supervertex." Using our convention, both Z 1 and Z 2 are neighbors of Y 1 . Note that the
uneliminated supervertices Y (enclosed by a curve), adjacent to the eliminated su-
pervertex X, form a clique. Two other cliques are fY 1 g.
Observe that the three cliques are not disjoint. Uneliminated supervertices that are enclosed
by a dotted curve (such as Z 2 , Z 3 , Z 4 , and Z 5 ) form what we call a "partial" clique; these
"partial" cliques will be used to describe our heuristics in the next section.
If V is a supervertex (either uneliminated or eliminated) in the quotient graph, we define N_1(V) as the set of uneliminated supervertices that are neighbors of V. We use N_2(V) to denote the set of uneliminated supervertices that are neighbors of those in N_1(V). We use deg(V) to denote the degree of an uneliminated supervertex V, namely, the sum of |V| − 1 and the total number of vertices in all supervertices in N_1(V).
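In code, these quantities require only the supervertex sizes and the adjacency lists of the quotient graph. A minimal sketch (ours, with a hypothetical data layout):

```python
def degree(V, size, N1):
    # deg(V): the |V| - 1 other vertices merged into V, plus all
    # vertices in the supervertices of N1(V).
    return (size[V] - 1) + sum(size[W] for W in N1[V])

def external_degree(V, size, N1):
    # edeg(V) = deg(V) - (|V| - 1), the external degree of [16].
    return sum(size[W] for W in N1[V])
```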
Minimum degree and deficiency schemes. Recall that a greedy heuristic needs a
metric d(v) for selecting the next supervertex to eliminate. Examples of d() are the degree
(in minimum degree) and the deficiency (in minimum deficiency). In terms of elimination
graphs, a greedy heuristic has the following structure: select a vertex that minimizes d(),
eliminate it from the current elimination graph, form the next elimination graph, and update
the value of the metric for each vertex affected by the elimination. A greedy scheme can also
be described in terms of quotient graphs: select an uneliminated supervertex that minimizes
d(), create a new quotient graph, and update the value of the metric for each uneliminated
supervertex affected by the elimination.
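The elimination-graph form of a greedy heuristic is compact enough to state directly in code. The sketch below is ours (quadratic-time, without supervertices or quotient graphs) and accepts either the degree or the deficiency as the metric d():

```python
def greedy_order(adj, metric):
    # adj maps each vertex to the set of its current neighbors;
    # metric(v, adj) scores v in the current elimination graph.
    order, remaining = [], set(adj)
    while remaining:
        v = min(remaining, key=lambda w: metric(w, adj))
        nbrs = adj[v] & remaining
        for x in nbrs:                      # make the neighbors a clique
            adj[x] |= nbrs - {x}
            adj[x].discard(v)               # remove the eliminated vertex
        remaining.discard(v)
        order.append(v)
    return order

def degree_metric(v, adj):
    return len(adj[v])                      # minimum degree

def deficiency_metric(v, adj):              # minimum deficiency (fill)
    nbrs = list(adj[v])
    return sum(1 for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))
               if nbrs[j] not in adj[nbrs[i]])
```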
In the minimum deficiency heuristic, updating the deficiency after one step of elimination
may be significantly more time consuming than updating the degree in the minimum
degree algorithm. Consider the example in Figure 1 where X is the most recently eliminated
supervertex. With minimum degree (MMD and AMD) only the uneliminated supervertices
in N 1 (X) need a degree update. However, with minimum deficiency, we need to update the
deficiency of not only supervertices in N 1 (X), but also some of the supervertices belonging
to N 2 (X). Any supervertex in N 2 (X) that is a neighbor of two or more supervertices in
would need a deficiency update. With respect to Figure 1, we would have to update
the deficiency of Z 1 since it is a neighbor of both Y 1 and Ym . Similarly, we would have to
update the deficiency of each of Z_5, Z_6, . . . , Z_k (each such supervertex is a neighbor of both Y_1 and another supervertex in N_1(X)).
Rothberg showed that the true minimum deficiency algorithm (true local fill in [19])
produces significantly better orderings than MMD. We obtained similar results in [17].
However, our implementation of the minimum deficiency algorithm was on the average
slower than MMD by two orders of magnitude [17]. Let X be the most recently eliminated
supervertex. Using the deficiency as the metric but restricting updates to uneliminated
supervertices in V 1 (X) (as in MMD) leads to orderings that are inferior to true minimum
deficiency but still significantly better than MMD. This was observed by Rothberg [19] and
later corroborated by Ng and Raghavan [17]. However, even such a restricted scheme is
more than 40 times slower than MMD [17]. In the next section we describe two relatively
inexpensive but effective heuristics based on modifications to the deficiency and degree.
3. Modified Minimum Deficiency and Minimum Degree Heuristics. We now
describe two heuristics based on approximations to the deficiency and the degree. Both
metrics can be implemented using either the update mechanism in MMD or the faster
scheme in AMD.
Our first heuristic "modified minimum deficiency" (MMDF) is based on a deficiency-like
metric. Consider the example in Figure 1 and assume X is the most recently eliminated
supervertex. We update the values of the metric d() of uneliminated vertices in N_1(X) just as in MMD. Consider updating d(Y_1) in Figure 1. An upper bound δ on the deficiency of Y_1 can be obtained in terms of the degree of Y_1. The true deficiency of Y_1 is obtained by subtracting from the upper bound the number of edges that are present before Y_1 is eliminated. Identifying all such edges requires examining the uneliminated supervertices in N_1(Y_1) and N_2(Y_1). However, some of these edges can be identified easily because in the quotient graph representation, uneliminated supervertices connected to a common eliminated supervertex form a clique. Using notation introduced earlier, N_1(Y_1) is the set of uneliminated neighbors of Y_1. The elements of N_1(Y_1) can be grouped into a set of disjoint "partial" cliques K. The obvious member of K is {Y_2, . . . , Y_m}, consisting of the uneliminated neighbors of the eliminated supervertex X. The other partial cliques depend on the order in which the neighbors of Y_1 are examined. Without loss of generality, assume {Z_1} forms the second clique. Likewise let {Z_2, Z_3, Z_4, Z_5} be the next partial clique we examine. Finally, {Z_6, . . . , Z_k} is the fourth disjoint partial clique.
The metric d(Y_1) is defined as d(Y_1) = δ − C − ct, where δ is an upper bound on the deficiency, C is the sum of contributions from partial cliques, and ct is the correction term. At initialization, d() is set to the upper bound δ. We define δ, C, and ct below.
δ: The upper bound δ is based on the external degree [16]: δ = edeg(Y_1)(edeg(Y_1) − 1)/2. This upper bound favors larger supervertices. If two supervertices have the same degree, the larger supervertex will have a smaller upper bound on deficiency since its external degree is smaller.
C: Let K be the set of disjoint partial cliques as described above. We define C = Σ_{V ∈ K} |V|(|V| − 1)/2, where V is a partial clique in K; the size |V| of V is the total number of vertices in all uneliminated supervertices that constitute V.
ct: The correction term ct takes into account contributions missed because (1) partial cliques in K are forced to be disjoint, and (2) cliques such as {Z_1, Y_m} are not detected because we do not examine N_2(Y_1). Our heuristic value of ct is ct = |Y_1| edeg(Y_1)/2. The rationale for the choice of ct is as follows. Assume that each supervertex in N_1(Y_1) is connected to one other supervertex (in N_2(Y_1)) such that the associated contribution has been missed. Assume further that the size of Y_1 is representative of the sizes of supervertices in N_1(Y_1); then the contributions that have been missed equal (1/2) Σ_{V ∈ N_1(Y_1)} |V| · |Y_1|, which simplifies to ct = |Y_1| edeg(Y_1)/2.
We would like to emphasize that MMDF is heuristic. We see the correction term as an
approximation to edges missed because we restrict our attention to partial cliques that are
disjoint. In our experiments we found that small multiples of the correction term behaved
just as well if not better.
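Under the reconstruction of δ, C, and ct given above (the extraction lost the exact displayed formulas, so the three expressions should be read as our interpretation), the MMDF score of a supervertex can be sketched as:

```python
def mmdf_metric(Y, size, N1, cliques):
    # d(Y) = delta - C - ct; `cliques` is the set K of disjoint partial
    # cliques covering N1(Y), each given as a list of supervertices.
    edeg = sum(size[W] for W in N1[Y])          # external degree edeg(Y)
    delta = edeg * (edeg - 1) // 2              # upper bound on the deficiency
    C = 0
    for Kq in cliques:
        s = sum(size[W] for W in Kq)            # size of the partial clique
        C += s * (s - 1) // 2
    ct = size[Y] * edeg // 2                    # heuristic correction term
    return delta - C - ct
```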
Our second heuristic "modified multiple minimum degree" (MMMD) attempts to use
a metric that is bounded by a small multiple of the degree. As indicated by the name it
is a close variant of the minimum degree algorithm with multiple eliminations. Consider
Figure
1 and once again assume X is the most recently eliminated supervertex. For the supervertex Y_1, MMMD uses the metric d(Y_1) = edeg(Y_1) + U, where U is the size of the largest partial clique in the set K (described above); more precisely, U = max_{V ∈ K} |V|. At initialization we simply use d(Y_1) = 2 edeg(Y_1). Note that MMMD differs from MMD only in the definition of the metric. The metric in MMMD tries to take into account contributions from the largest clique that contains Y_1.
The disjoint partial cliques in the set K of Y 1 are exactly those used implicitly in Liu's
MMD code to compute edeg(Y 1 ). Hence the update cost of MMDF and MMMD is similar
to that of Liu's MMD.
Approximate MMDF and MMMD. We now briefly outline how "approximate"
versions of the two schemes can be implemented using the faster update mechanism in the
AMD scheme of Amestoy, Davis, and Duff [1].
Consider "approximate" MMDF. Consider once again Figure 1 and the metric for Y 1 ,
an uneliminated supervertex adjacent to X, the most recently eliminated supervertex. The
upper bound is now calculated using the approximate external degree of AMD. The correction
term can also be easily calculated in terms of this approximate external degree and
the size of supervertex Y 1 . The main difference is in how K is constructed, and hence the
term C. Now the set K corresponds to the cliques used in AMD to compute an approximation
to the degree. AMD uses the sizes of certain cliques of supervertices in the set N_1(Y_1) ∪ {Y_1}. With respect to the example in Figure 1, the cliques used in AMD are {Y_1, . . . , Y_m}, {Z_1}, {Z_2, . . . , Z_5}, and {Z_6, . . . , Z_k}. The first clique is the one formed by the elimination of X; the remaining cliques have no overlap with this clique.
However, the remaining cliques may have supervertices in common. MMDF based on MMD
forces the partial cliques to be disjoint. On the other hand, approximate-MMDF relaxes
this restriction, i.e., it uses the cliques in AMD, and these cliques may have common uneliminated supervertices.
the partial cliques) is computed using the clique sizes used in AMD (for the approximation
to the degree). Approximate-MMMD is similar; it also uses the cliques in AMD.
Relation to other deficiency-like schemes. We would like to note that MMDF
is similar to AMF3, proposed by Rothberg [19]; it differs mainly in the way in which the
partial cliques are constructed, as well as in the definition of the correction term. AMF
("approximate minimum fill"), AMMF ("approximate mean minimum fill") and AMIND
("approximate mean increase in neighbor degree") are three other heuristics developed by
Rothberg and Eisenstat [19, 20]. The deficiency-like metrics in AMF, AMMF, and AMIND
use only edges in the most recently formed clique while the metric in MMDF takes into
account edges in as many cliques as we can "easily identify." AMIND also uses a term
which is similar to our correction term in MMDF. MMMD is similar to AMF in that it uses
only the size of a single clique but it differs in the sense that it uses a degree-like metric.
4. Performance of MMDF and MMMD. We now report on the performance of
MMDF and MMMD. We use a set of 36 test matrices in our empirical study. Our test suite
is a subset of the one used by Rothberg and Eisenstat [19, 20]; their test suite contains four
other matrices that are proprietary and hence are not available to us. Our MMD code is
Liu's Fortran implementation converted to C. Our MMDF and MMMD heuristics are built
using the MMD code. MMDF differs from MMD in the metric update as well as in the use of heaps to store and retrieve the metric. Furthermore, unlike MMD, MMDF does not allow
"multiple eliminations." MMMD is nearly identical to MMD and differs only in the metric
update portion. All our experiments were performed on a Sun Ultra Sparc-2 workstation.
The quality of greedy orderings can vary depending on the initial numbering. For
each test matrix, we use 11 different random initial numberings for MMD, MMDF, and
MMMD. We consider two quantities for the quality of ordering: the number of nonzeros in
the Cholesky factor, and the number of floating-point operations required to compute the factor. We also report actual execution times for MMD, MMDF, and MMMD.
The characteristics of the test matrices and the quality of MMD orderings are reported in
Table
1. We report mean and median values over 11 initial random numberings for MMD
in
Table
1.
Table
2 shows the performance of MMDF and MMMD relative to that of MMD. The
relative measure is computed as the ratio of the medians over 11 initial random numberings.
We also present the geometric mean and the median over all test matrices in the last two
lines of the table. The execution time of MMMD matches that of MMD, while MMDF
Table 1
Performance of MMD; |L| and operations are mean and median values over 11 initial orderings.
Problem  | rank   | |A|    | mean (Time, |L|, Operations) | median (Time, |L|, Operations)
bcsstk36 | 23052  | 1166.2 | 1.56, 2760, 609              | 1.56, 2761, 609
crystk01 | 4875   | 320.7  | 0.46, 1082, 337              | 0.46, 1083, 338
crystk03 | 24696  | 1775.8 | 3.92, 13943, 13050           | 3.92, 13944, 13051
gearbox  | 153746 | 9234.1 | 40.52, 52972, 57327          | 40.52, 52973, 57328
msc10848 | 10848  | 1240.6 | 1.07, 2028, 576              | 1.07, 2028, 576
pwt      | 36519  | 326.1  | 2.51, 1768, 224              | 2.51, 1768, 225
struct2  | 73752  | 3670.9 | 7.74, 9810, 3817             | 7.74, 9810, 3817
struct3  | 53570  | 1227.3 | 6.36, 5309, 1215             | 6.36, 5309, 1216
struct4  | 4350   | 242.1  | 2.07, 2248, 1756             | 2.07, 2248, 1756
troll    | 213453 | 1198.5 | 42.37, 61171, 153228         | 42.37, 61171, 153228
Table 2
Performance of MMDF and MMMD relative to MMD. For each problem we report the ratio of median values over 11 initial random orderings.
Problem  | Ordering time (MMDF, MMMD) | |L| (MMDF, MMMD) | Operations (MMDF, MMMD)
bcsstk36 | 1.68, 1.47 | 1.01, 1.00 | 0.98, 0.96
crystk01 | 1.33, 1.22 | 0.91, 0.89 | 0.80, 0.77
crystk02 | 1.23, 1.04 | 0.86, 0.83 | 0.72, 0.68
crystk03 | 1.25, 1.04 | 0.89, 0.80 | 0.79, 0.64
flap     | 1.50, 1.06 | 0.93, 0.91 | 0.81, 0.78
gearbox  | 1.58, 1.13 | 0.91, 1.00 | 0.77, 1.17
pwt      | 1.48, 1.03 | 0.94, 0.97 | 0.85, 0.92
struct1  | 1.72, 1.08 | 0.95, 0.96 | 0.85, 0.90
struct2  | 1.80, 1.06 | 0.97, 0.98 | 0.94, 0.93
struct3  | 1.28, 1.05 | 0.95, 0.96 | 0.87, 0.90
troll    | 1.13, 1.04 | 0.80, 0.85 | 0.65, 0.73
g-mean   | 1.33, 1.13 | 0.91, 0.92 | 0.80, 0.84
median   | 1.34, 1.07 | 0.92, 0.95 | 0.80, 0.86
requires on average an overhead of 34% over MMD. Our experiments indicate that avoiding
the use of heaps in MMDF (as suggested by a referee) will reduce about a third of this
overhead.
Results for variants of MMDF and MMMD are summarized in Table 3. The approximate
versions of MMDF and MMMD were based on our implementation of AMD
and did not include features such as "aggressive absorption." The approximate version of
MMDF/MMMD performs equally well; the geometric mean and the median are the same as
those for MMDF/MMMD. For both MMDF and MMMD (and their approximate versions),
adding Ashcraft's initial compression step [2] improved the performance slightly (1% on the
average).
Table 3
Summary of performance of MMDF and MMMD variants relative to MMD. The geometric mean and median over all problems in the test suite are based on the ratio of median values over 11 initial random orderings for each problem.
Method | |L| (g-mean, median) | Operations (g-mean, median)
Without initial compression
With initial compression
[Table data not recovered from the extraction.]
5. Conclusions. We have developed two new greedy heuristics: "modified minimum deficiency" (MMDF) and "modified multiple minimum degree" (MMMD). Both these schemes produce orderings that are better than MMD orderings. The first scheme, MMDF, produces orderings that require approximately 21% fewer floating-point operations for factorization than MMD, while the second scheme, MMMD, generates orderings that incur 15% fewer operations for factorization than MMD. MMDF uses a deficiency-like metric, i.e., a metric whose value is a quadratic function of the degree. The execution time of MMDF is approximately 1.3 times that of MMD. On the other hand, MMMD uses a degree-like metric, which is bounded above by twice the value of the degree. MMMD is the same as MMD but for the difference in the choice of the metric. Consequently, the ordering time of MMMD is very similar to that of MMD. Furthermore, there is no change in the quality of the orderings when MMDF (MMMD) is implemented using the "approximate degree" [1] framework.
For completeness, Table 4 summarizes the performance of our schemes, as well as
those in [20]. It appears that the performance of MMMD and "approximate minimum
fill" (AMF1 [19] and AMF [20]) are similar. Likewise, MMDF and "approximate mean
increase in neighbor degree" (AMIND [20]) produce orderings of similar quality. "Approximate mean minimum fill" (AMMF [20]) appears to be slightly better than MMDF. Relative
to MMD, AMMF orderings require 25% (geometric mean) to 22% (median) fewer operations
for factorization, while MMDF (with compression) orderings require 21% (geometric mean
and median) fewer operations for factorization.
Table 4
Summary of operation counts to factor for MMDF, MMMD, AMF, AMMF, and AMIND relative to MMD (with initial compression).
Measure | Ng-Raghavan (MMDF, MMMD) | Rothberg-Eisenstat (AMF1, AMMF, AMIND)
g-mean  | .79, .84                 | .85, .75, .79
median  | .79, .85                 | .85, .78, .80
Our work is an attempt to understand factors affecting the performance of greedy ordering
heuristics. We tried several metrics that are close to those in MMDF and MMMD.
Many of these had average operation counts for factorization similar to those reported for
MMDF and MMMD, while others varied substantially. We also experimented with a variant
of MMDF that did update the metric for "neighbors of neighbors" as in true minimum
deficiency. Surprisingly, the operation counts were on the average higher by 3-4% for this
variant. The performance of true minimum deficiency shows that deficiency is a better metric
than the degree. However, we surmise that the improved performance of our heuristics
is from the complicated interplay of the metric and the greedy process, and not necessarily
from accurately modeling the true deficiency. We conjecture that there could well be
other relatively inexpensive greedy strategies that significantly outperform the ones known
so far.
--R
An approximate minimum degree ordering algorithm
Compressed graphs and the minimum degree algorithm
Robust orderings of sparse matrices using multisection
On the performance of the minimum degree ordering for Gaussian elimination
Several strategies for reducing bandwidth of matrices
Reducing the bandwidth of sparse symmetric matrices
Direct Methods for Sparse Matrices
Computer Implementation of the Finite Element Method
An automatic nested dissection algorithm for irregular finite element problems
Fast and effective algorithms for graph partitioning and sparse matrix ordering
Improving the runtime and quality of nested dissection ordering
Modification of the minimum degree algorithm by multiple elimination
Performance of greedy ordering heuristics for sparse Cholesky factorization
A graph-theoretic study of the numerical solution of sparse positive definite systems of linear equations
Ordering sparse matrices using approximate minimum local fill.
Node selection strategies for bottom-up sparse matrix ordering
Direct solution of sparse network equations by optimally ordered triangular factorization
Computing the minimum fill-in is NP-complete
--TR
--CTR
Abdou Guermouche , Jean-Yves L'Excellent , Gil Utard, Impact of reordering on the memory of a multifrontal solver, Parallel Computing, v.29 n.9, p.1191-1218, September
Patrick R. Amestoy , Iain S. Duff , Stphane Pralet , Christof Vmel, Adapting a parallel sparse direct solver to architectures with clusters of SMPs, Parallel Computing, v.29 n.11-12, p.1645-1668, November/December
Timothy A. Davis , John R. Gilbert , Stefan I. Larimore , Esmond G. Ng, A column approximate minimum degree ordering algorithm, ACM Transactions on Mathematical Software (TOMS), v.30 n.3, p.353-376, September 2004
Timothy A. Davis, A column pre-ordering strategy for the unsymmetric-pattern multifrontal method, ACM Transactions on Mathematical Software (TOMS), v.30 n.2, p.165-195, June 2004 | minimum degree;sparse matrix ordering;greedy heuristics;minimum deficiency |
328138 | Block Stationary Methods for Nonsymmetric Cyclically Reduced Systems Arising from Three-Dimensional Elliptic Equations. | We consider a three-dimensional convection-diffusion model problem and examine systems of equations arising from performing one step of cyclic reduction on an equally spaced mesh, discretized using the seven-point operator. We present two ordering strategies and analyze block splittings of the resulting matrices. If the matrices are consistently ordered relative to a given partitioning, Young's analysis for the block Gauss--Seidel and block SOR methods can be applied. We compare partitionings for which this property holds with ones where the matrices do not have Property A yet still give rise to an efficient solution process. Bounds on convergence rates are derived and the work involved in solving the systems is estimated. | Introduction
. Consider the three-dimensional (3D) convection-di#usion equation
with constant coefficients,
−Δu + σu_x + τu_y + μu_z = f(x, y, z),   (1.1)
on the unit cube Ω = (0, 1) × (0, 1) × (0, 1), subject to Dirichlet-type boundary conditions. We focus on applying seven-point finite difference discretizations, for example centered differences to the diffusive terms, and centered differences or first-order upwind approximations to the convective terms. Let us define n and h so that n³ is the number of unknowns and h = 1/(n + 1) is the mesh size, and let F denote the corresponding difference operator, after scaling by h², so that for a gridpoint u_{i,j,k} not next to the boundary we have the seven-point formula (1.2) for F u_{i,j,k}, with center coefficient a and the six neighbor coefficients b, c, d, e, f, g. If we denote the mesh Reynolds numbers by
β = (σ/2)h,  γ = (τ/2)h,  δ = (μ/2)h,
then the values of the components of the computational molecule take one set of values if centered difference approximations of the first derivatives are used, and another set if backward first-order accurate schemes are used.
# Received by the editors March 3, 1997; accepted for publication (in revised form) by Z. Strakoš February 27, 1998; published electronically July 9, 1999.
http://www.siam.org/journals/simax/20-4/31771.html
# Gates Building, Stanford University, Stanford, CA 94305 (greif@sccm.stanford.edu).
# Department of Computer Science, University of British Columbia, Vancouver, BC, V6T 1Z4, Canada (varah@cs.ubc.ca).
(a) lexicographic ordering (b) red/black ordering
Fig. 1.1. Sparsity patterns of the matrices corresponding to two possible orderings of the unknowns.
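To make the discretization concrete, the sketch below (ours) assembles the seven-point centered-difference operator, scaled by h², as a Kronecker sum; the mesh Reynolds numbers enter through the skewed one-dimensional tridiagonal factors. The assignment of the molecule letters b, . . . , g to directions in (1.2) is not fixed by the surviving text, so the code avoids them.

```python
import numpy as np
import scipy.sparse as sp

def seven_point_matrix(n, sigma, tau, mu):
    # Centered 7-point stencil for -Laplace(u) + sigma u_x + tau u_y + mu u_z
    # on the unit cube, lexicographic ordering l = i + n*j + n^2*k.
    h = 1.0 / (n + 1)
    beta, gamma, delta = sigma*h/2, tau*h/2, mu*h/2
    I = sp.identity(n)
    def tri(lo, hi):   # tridiag(lo, 2, hi): the 1-D factor in one direction
        return sp.diags([lo*np.ones(n-1), 2*np.ones(n), hi*np.ones(n-1)],
                        [-1, 0, 1])
    Tx = tri(-(1 + beta),  -(1 - beta))
    Ty = tri(-(1 + gamma), -(1 - gamma))
    Tz = tri(-(1 + delta), -(1 - delta))
    return (sp.kron(sp.kron(I, I), Tx) +
            sp.kron(sp.kron(I, Ty), I) +
            sp.kron(sp.kron(Tz, I), I)).tocsr()
```

The center coefficient of this stencil is 6, and each pair of opposite neighbors carries coefficients −(1 ± mesh Reynolds number), in line with the description above.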
The sparsity pattern of the underlying matrix depends on the ordering of the
unknowns. In Fig. 1.1 the sparsity patterns associated with two possible ordering
strategies are illustrated. The natural lexicographic ordering in (a) is one where the
unknowns are numbered rowwise and then planewise. The red/black ordering in (b)
means we color the gridpoints using two colors, in a checkerboard fashion, and then
number all the points that correspond to one of the colors first.
As is evident from Fig. 1.1(b), if we split the matrix into four blocks of the same
size, we can see that the two diagonal blocks are diagonal matrices. This means that
the matrix has Property A [24]. A cheap and relatively simple process of elimination
of all the points that correspond to one color (say, red) leads to a smaller system of
equations, whose associated matrix is the Schur complement of the original matrix,
and is still fairly sparse. This procedure amounts to performing one step of cyclic
reduction. Notice that in general both the original and the reduced matrices are
nonsymmetric.
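One step of cyclic reduction is just a Schur complement with a diagonal pivot block, so it is short to express with sparse matrices. The sketch below is ours; `seven_point_matrix` is the hypothetical helper from the previous block, and the result agrees with the reduced operator of (1.4) up to the scaling by a used there.

```python
import numpy as np
import scipy.sparse as sp

def red_mask(n):
    # Red/black coloring of the n^3 grid in the lexicographic ordering
    # l = i + n*j + n^2*k: point (i, j, k) is red when i + j + k is even.
    l = np.arange(n**3)
    i, j, k = l % n, (l // n) % n, l // (n * n)
    return (i + j + k) % 2 == 0

def cyclic_reduction_step(A, red):
    # Eliminate the red points; A[red][:, red] is diagonal under the
    # red/black coloring, so the Schur complement is cheap and stays sparse.
    black = ~red
    Arr = A[red][:, red]
    Dinv = sp.diags(1.0 / Arr.diagonal())
    S = A[black][:, black] - A[black][:, red] @ Dinv @ A[red][:, black]
    return S.tocsr()   # the (generally nonsymmetric) reduced matrix

# Usage: S = cyclic_reduction_step(seven_point_matrix(n, s, t, m), red_mask(n))
```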
The cyclic reduction step can be repeated until a small system of equations is
obtained, which can then be solved directly. This procedure is called complete cyclic
reduction. It has been studied in several papers, mainly for symmetric systems arising
from two-dimensional (2D) self-adjoint elliptic problems. A general overview of the
algorithm and a list of references can be found in [12]. Early papers that present and
analyze the algorithm are those of Hockney [17], Buneman [2], and Buzbee, Golub,
and Nielson [4]. Buzbee et al. [3] use cyclic reduction for solving the Poisson equation
on irregular regions; Concus and Golub [6] discuss 2D nonseparable cases. Application of cyclic reduction to matrices with arbitrary dimensions is done by Sweet [20], [21], who also presents a fast O(n²) algorithm and discusses its stability and efficiency.
One step of cyclic reduction for symmetric positive definite systems is analyzed
by Hageman and Varga in [16] and later by Hageman, Luk, and Young [15], where it
is shown that the reduced solver generally converges faster than the unreduced solver.
In [1], Axelsson and Gustafsson use cyclic reduction in conjunction with the conjugate
gradient method. Elman and Golub have conducted an extensive investigation for 2D
elliptic non-self-adjoint problems [8], [9], [10] and have shown that one step of cyclic
reduction leads to systems with several valuable properties, such as symmetrizability
for a large set of the underlying PDE coefficients, which is effectively used to derive
bounds on the convergence rates of iterative solvers, and fast convergence.
Preliminary analysis for the non-self-adjoint 3D model problem (1.1) has been
done by the authors in [14], where one step of 3D cyclic reduction has been described
in detail, and a block Jacobi solver has been analyzed, which is based on a certain
block splitting (referred to as 1D splitting throughout this paper), in conjunction with
what we called a two-plane ordering strategy.
The computational molecule of the reduced operator consists of 19 points, located
on 5 parallel planes. Let R denote the reduced di#erence operator, after scaling by
ah 2 . Then for an interior gridpoint, u i,j,k , we have
R u
-2cf
The following results hold for any ordering strategy. See [14] for the proofs, which
have been obtained by using the techniques of Elman and Golub [8], [9].
Theorem 1.1. The reduced matrix can be symmetrized by a real diagonal similarity
transformation if and only if the products bcde, befg, and cdfg are positive.
Theorem 1.2. If be, cd, fg > 0, then both the reduced matrix and the symmetrized
reduced matrix are diagonally dominant M-matrices.
In this paper our purpose is to extend the analysis initiated in [14] and examine
block stationary methods as solvers for the reduced system. In section 2 we present the
ordering strategies that are examined. In section 3 two block splittings are presented,
and bounds on convergence rates are derived. In section 4 we analyze the reduced
system in the context of consistently ordered matrices. In section 5 the amount of
computational work involved in solving the linear systems is estimated, a comparison
of the reduced system with the unreduced system is conducted, and some numerical
results which validate our analysis and illustrate the fast convergence of the reduced
system are given. Finally, in section 6 we conclude.
2. Orderings for the reduced system. We consider two ordering strategies
for the reduced grid. The two-plane ordering has been described in detail in [14]. It
corresponds to ordering the unknowns by gathering blocks of 2n gridpoints from two
horizontal lines and two adjacent planes. This ordering strategy is depicted in Fig.
2.1(a). In the figure, the numbers are the indices of the gridpoints, which appear as
the index variable in (2.1) and (2.15).
The connection between the index of a gridpoint and its coordinate values (i, j, k),
whose physical coordinates are (ih, jh, kh), is given below. The term fix is borrowed from MATLAB and
means rounding to the nearest integer toward zero.
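In other words (a one-line Python equivalent, for illustration):

```python
import math

# MATLAB-style fix: rounding toward zero. For nonnegative arguments it agrees
# with floor, but fix(-1.5) = -1 whereas floor(-1.5) = -2.
def fix(x):
    return math.trunc(x)
```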
The x, y, and z coordinates of a gridpoint are expressed in terms of its index in
(2.1a)–(2.1c), using the fix operation.

(a) two-plane      (b) two-line
Fig. 2.1. Two suggested ordering strategies for the reduced grid (in the figure the unreduced grid
is of size 4 × 4 × 4).
See [14] for specification of the matrix entries.
An alternative to the two-plane ordering is a straightforward generalization to
three dimensions of the two-line ordering used by Elman and Golub in [9]. It is illustrated
in Fig. 2.1(b). The reduced matrix for this ordering strategy is block pentadiagonal.
Each block S i,j is (n²/2) × (n²/2) and is a combination of n/2 uncoupled matrices,
each of size n × n.
The diagonal matrices {S j,j} are themselves block tridiagonal. Each submatrix is
of size n × n; the diagonal blocks are tridiagonal, and the value along their main
diagonal involves a² for gridpoints not next to the boundary. See [14] for specification
of the main diagonal's values associated with gridpoints next to the boundary.
For the superdiagonal and the subdiagonal blocks of the matrices S j,j we have
the following irregular tridiagonal structure, which depends on whether j is even or
odd: the superdiagonal matrices are irregular tridiagonal matrices with nonzero entries
−2de, −2ce, and −e², whose placement alternates according to whether j is odd or
even, and the subdiagonal matrices have the analogous structure, with entries
involving −2bd.
The superdiagonal and the subdiagonal blocks of S, namely S j,j+1 and S j,j−1, are block tridiagonal.
(a) two-plane      (b) two-line
Fig. 2.2. Sparsity patterns of the reduced matrices associated with the two ordering strategies
(the matrices correspond to 6 × 6 × 6 grids). Each square corresponds to an n² × n² block (n²
gridpoints form two coupled planes in the reduced grid).
Finally, the matrices S j,j−2 and S j,j+2 are diagonal.
The connection between the gridpoint's index and its coordinate values is given
by (2.15a)–(2.15c), analogous to (2.1).
The sparsity patterns of the matrices corresponding to two-plane ordering and
two-line ordering are depicted in Fig. 2.2.
(a) 1D splitting (b) 2D splitting
Fig. 3.1. Sparsity patterns of the block diagonal matrices associated with the block Jacobi
splitting, for the two suggested block splittings, using the two-plane ordering strategy.
3. Block splittings and bounds on convergence rate. For the two ordering
strategies presented in section 2 the matrices can be expressed as block tridiagonal,
of the form S = (S i,j), where S is an (n³/2) × (n³/2) matrix. In the case of two-plane
ordering, each block S i,j is of size n² × n² and is block tridiagonal with respect to
2n × 2n blocks. In the case of two-line ordering, each block S i,j is of size
(n²/2) × (n²/2) and is block tridiagonal with respect to n × n blocks.
In solving the reduced system using a stationary method, various splittings are
possible. We consider two obvious ones, based on dimension. We use the term 1D
splitting for a splitting which is based on partitioning the matrix into O(n) blocks
(2n × 2n blocks for the two-plane ordering and n × n blocks for the two-line ordering).
A 2D splitting is one which is based on partitioning the matrix into O(n²) blocks
(n² × n² blocks for the two-plane ordering and (n²/2) × (n²/2) blocks for the two-line
ordering). Notice that the 1D splitting for both ordering strategies is essentially
associated with blocks of gridpoints that are x-oriented. However, the 2D splitting
for the two-line ordering corresponds to x-y oriented planes of gridpoints, whereas
for the two-plane ordering it corresponds to x-z oriented planes of gridpoints. (These
observations can be deduced by referring to Fig. 2.1.) Different orientations can be
obtained by simply reordering the unknowns so that the roles of x, y, and z are
interchanged.
The sparsity patterns of the block diagonal parts of the splittings associated with
the block Jacobi scheme are depicted in Fig. 3.1.
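A generic sketch of block Jacobi under such a splitting is shown below; the matrix, the uniform block size, and the dense per-block solves are illustrative assumptions (the paper's blocks are 2n × 2n or n² × n², and in practice the diagonal blocks would be factored once).

```python
import numpy as np

# Block Jacobi for S x = f with splitting S = D - C, where D is the block
# diagonal part of S for a chosen uniform block size. One sweep computes
# x_new = x + D^{-1}(f - S x), i.e., D x_new = C x + f.
def block_jacobi(S, f, block, x0, iters):
    n = S.shape[0]
    x = x0.copy()
    blocks = [(lo, min(lo + block, n)) for lo in range(0, n, block)]
    for _ in range(iters):
        r = f - S @ x                      # residual with the old iterate
        for lo, hi in blocks:
            x[lo:hi] += np.linalg.solve(S[lo:hi, lo:hi], r[lo:hi])
    return x
```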
We now compare the orderings. We have the following useful result.
Theorem 3.1. If be, cd, fg > 0, then for the 1D splitting the spectral radius of the
Jacobi iteration matrix associated with two-plane ordering is smaller than the spectral
radius of the iteration matrix associated with the two-line ordering.
Proof. By Theorem 1.2 the matrices are M-matrices. Each ordering strategy
produces a matrix which is merely a symmetric permutation of a matrix associated
with the other ordering. Suppose S_P = M_P − N_P is a 1D splitting of the two-plane
ordering matrix and S_L = M_L − N_L is a 1D splitting for the two-line ordering.
There exists a permutation matrix P such that P^T S_L P = S_P. Consider the
splitting P^T S_L P = P^T M_L P − P^T N_L P. It is straightforward to show by examining
the matrix entries that P^T N_L P ≥ N_P. The latter are both nonnegative matrices;
therefore by [23, Thm. 3.15] it follows that 0 < ρ(M_P^{-1} N_P) < ρ(M_L^{-1} N_L) < 1.
The same result applies to 2D splitting, provided that the orientation of the planes
of gridpoints is identical for both ordering strategies. The proof for this is identical
to the proof of Theorem 3.1.
The results indicated in Theorem 3.1 can be observed in Fig. 3.2. It is interesting
to observe that the superiority of the two-plane ordering carries over to the case
be, cd, fg < 0, which corresponds to the region of mesh Reynolds numbers larger
than 1 (for which the PDE is considered convection-dominated). We remark, however,
that the amount of computational work per iteration is somewhat higher for the
system which corresponds to two-plane ordering. In Fig. 3.2 a few cross sections of
mesh Reynolds numbers are examined. For example, graph (a) corresponds to flow
with the same velocity in x, y, and z directions. Graph (b) corresponds to flow only
in the x and y directions, and no convection in the z direction, and so on. (See (1.3) for
the definitions of the mesh Reynolds numbers β, γ, and δ.)
We now derive bounds on convergence rates. Below we shall attach the subscripts
1 and 2 to matrices associated with the 1D splitting and 2D splitting, respectively.
Since two-plane ordering gives rise to a more efficient solution procedure than two-line
ordering, we focus on it.
Denote the two splittings for the block Jacobi scheme by S = D_1 − C_1 and S = D_2 − C_2.
In [14] we have shown that if be, cd, and fg have the same sign then a real diagonal
nonsingular symmetrizer can be found, and thus (since the symmetrizer is diagonal)
the sparsity patterns of the original nonsymmetric matrix and the symmetrized matrix
are identical. Let us attach the hat sign to a matrix to denote application of the
similarity transformation that symmetrizes it. That is, for a given matrix X and a
diagonal symmetrizer Q, the matrix Q^{-1}XQ is denoted by X̂.
The matrices D̂_1^{-1}Ĉ_1 and D̂_2^{-1}Ĉ_2 are similar to the original iteration matrices
D_1^{-1}C_1 and D_2^{-1}C_2, respectively, and thus have the same spectral radii. Following
Elman and Golub's strategy [8], [9], the symmetric matrix can be handled more easily
as far as computing the spectral radius is concerned, since we can use the bound
ρ(D̂^{-1}Ĉ) = ρ(D̂^{-1/2} Ĉ D̂^{-1/2}) ≤ ρ(Ĉ)/λ_min(D̂).
The results presented below are for the case be, cd, fg > 0, using two-plane
ordering. These conditions are equivalent to |β|, |γ|, |δ| < 1 if centered differences
are used to discretize the convective terms. No restriction on the magnitude of the
mesh Reynolds numbers is imposed if upwind differences are used. For these values
tight bounds for the spectral radius of the iteration matrix can be obtained.
For D̂_1 the minimal eigenvalue has been found in [14, Thm. 3.8], and the relevant
part of this theorem is quoted below.
Proposition 3.2. The minimal eigenvalue of D̂_1 is the quantity φ given in (3.3).
A lower bound involving the difference D̂_2 − D̂_1 is given by the following proposition.
Fig. 3.2. Spectral radii of iteration matrices versus mesh Reynolds numbers for the block Jacobi
scheme, using 1D splitting and centered difference discretization. The broken lines correspond to
two-plane ordering. The solid lines correspond to two-line ordering.
Proposition 3.3. The minimal eigenvalue of D̂_2 − D̂_1 is bounded from below by
−χ, where χ is the expression given in (3.4).
Table 3.1
Comparison between the computed spectral radius and the bound, for the 2D splitting.

             Upwind                        Centered
n       ρ       bound     ratio       ρ       bound     ratio
The proof for this part follows from [14, Lem. 3.10], where it is shown that the
spectral radius of D̂_2 − D̂_1 is bounded by χ.
Combining Propositions 3.2 and 3.3, and applying Rayleigh quotients to the matrices
involved, we obtain the following lemma.
Lemma 3.4. The minimal eigenvalue of D̂_2 is bounded from below by φ − χ, where
φ and χ are the expressions given in (3.3) and (3.4).
The bound for Ĉ_2 can be obtained by combining [14, Lems. 3.11–3.13], as follows.
Lemma 3.5. The spectral radius of the matrix Ĉ_2 is bounded by the quantity ψ
given in (3.5).
Finally, Lemmas 3.4 and 3.5 lead to the following theorem.
Theorem 3.6. The spectral radii of the iteration matrices D_1^{-1}C_1 and D_2^{-1}C_2
are bounded by (ψ + χ)/φ and ψ/(φ − χ), respectively, where φ, χ, and ψ are defined in (3.3),
(3.4), and (3.5), respectively.
Corollary 3.7. If be, cd, fg > 0 then the block Jacobi iteration converges for
both the 1D and 2D splittings.
Proof. For this we can use Varga's result on M-matrices [23, Thm. 3.13]. Alter-
natively, the Taylor expansions (3.6) and (3.7) of the bounds given in Theorem 3.6
have the form 1 − ch² + O(h⁴), with positive constants c that depend on the PDE
coefficients, and thus for h sufficiently small the bounds are smaller than 1.
In Table 3.1 we give some indication of the quality of the bound for the 2D
splitting. Results with a similar level of accuracy have been obtained and presented
in [14] for the 1D splitting. As can be observed, the bounds are tight and become
tighter as n increases, which suggests that they are asymptotic to the spectral radii.
We now discuss other stationary methods, namely, Gauss-Seidel and SOR. Relative
to a given partitioning, if the reduced matrix is consistently ordered, then it
is straightforward to apply Young's analysis, and the bounds in Theorem 3.6 can be
used for estimating the rate of convergence of the Gauss-Seidel and SOR schemes.
The reader is referred to [19, Defs. 4.3 and 4.4] for definitions of Property A and consistent
ordering. As stated in [19], a matrix that is consistently ordered has Property
A; conversely, a matrix with Property A can be permuted so that it is consistently
ordered. We mentioned in the introduction that the matrix of the unreduced system
has Property A. For the reduced system, we have the following observations.
Proposition 3.8. The reduced matrix associated with two-line ordering, SL , does
not have Property A relative to 1D or 2D partitionings.
Proof. Let S i,j denote the (i, j)th n × n block of S_L, and let Q be an (n²/2) × (n²/2)
matrix whose entries satisfy q_{i,j} = 1 if S i,j is nonzero and q_{i,j} = 0 otherwise. Similarly,
let T be an n × n matrix whose entries satisfy t_{i,j} = 1 if the (i, j)th (n²/2) × (n²/2) block submatrix of
S_L is nonzero and t_{i,j} = 0 otherwise. T is a pentadiagonal matrix and thus
does not have Property A, which settles the 2D case. Since T can be referred to as a partitioning of Q,
Q inherits the pentadiagonal block structure and also does not have Property A, which settles the 1D case.
Proposition 3.9. The reduced matrix associated with two-plane ordering, SP ,
does not have Property A relative to 1D partitioning.
Proof. Let S i,j denote the (i, j)th 2n × 2n block of S_P, and let Q be an (n²/4) × (n²/4)
matrix whose entries satisfy q_{i,j} = 1 if S i,j is nonzero and q_{i,j} = 0 otherwise. It is
straightforward to see that the nonzero pattern of Q is identical to that of the matrix
associated with using a nine-point operator for a 2D grid. Since the latter does not
have Property A relative to partitioning into 1 × 1 matrices, the result follows.
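Relative to 1 × 1 blocks, Property A is equivalent to two-colorability of the adjacency graph of the off-diagonal nonzeros, which suggests the following simple test (an illustrative sketch, not part of the paper's machinery):

```python
import numpy as np

# Test Property A (relative to 1 x 1 blocks) by attempting a 2-coloring of
# the graph whose edges connect unknowns coupled by off-diagonal nonzeros.
def has_property_A(A, tol=0.0):
    n = A.shape[0]
    color = [-1] * n
    for s in range(n):
        if color[s] != -1:
            continue
        color[s], stack = 0, [s]
        while stack:
            i = stack.pop()
            for j in range(n):
                if j != i and (abs(A[i, j]) > tol or abs(A[j, i]) > tol):
                    if color[j] == -1:
                        color[j] = 1 - color[i]
                        stack.append(j)
                    elif color[j] == color[i]:
                        return False      # odd cycle: no 2-coloring exists
    return True
```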
On the other hand, we have the following proposition.
Proposition 3.10. The reduced matrix associated with two-plane ordering, SP ,
has Property A and, moreover, is consistently ordered relative to 2D partitioning.
Proof. The matrix is block tridiagonal relative to this partitioning [24].
For the SOR scheme we have the following result, which is completely analogous
to Elman and Golub's result for the 2D problem [9, Thm. 4].
Theorem 3.11. Let L_ω denote the block SOR operator associated with 2D splitting
and using two-plane ordering. If either be, cd, fg > 0 or be, cd, fg < 0, then the choice
ω* = 2/(1 + √(1 − ρ²(D_2^{-1}C_2))) minimizes ρ(L_ω) with respect to ω, and ρ(L_{ω*}) = ω* − 1.
The proof of this theorem is essentially identical to the proof of Elman and Golub
in [9, Thm. 4] and follows from Young [24, Sects. 5.2 and 14.3]. The algebraic
details on how to pick the signs of the diagonal symmetrizer so that the symmetrized
block diagonal part of the splitting is a diagonally dominant M-matrix are omitted.
That ρ(D_2^{-1}C_2) < 1 is known by Corollary 3.7. The reduced matrix is consistently
ordered by Proposition 3.10.
A way to approximately determine an optimal relaxation parameter for the case
be, cd, fg > 0 is to replace ρ(D_2^{-1}C_2) by the bound for it (given in Theorem 3.6) in
the expression for ω* in Theorem 3.11. If the bound for the block Jacobi scheme is
tight, then the estimate of ω* is fairly accurate.
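A short sketch of this estimation procedure (the numerical value of the bound is a placeholder):

```python
from math import sqrt

# Young's formula for the optimal SOR parameter of a consistently ordered
# matrix, evaluated at a bound rho_J on the block Jacobi spectral radius.
def omega_star(rho_J):
    return 2.0 / (1.0 + sqrt(1.0 - rho_J ** 2))

rho_J = 0.9                 # placeholder: the bound from Theorem 3.6
w = omega_star(rho_J)
print(w, w - 1.0)           # estimated omega*, and rho(L_omega*) = omega* - 1
```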
Proposition 3.12. Suppose be, cd, fg > 0. For the system associated with 2D
splitting and for h sufficiently small, the choice ω̃ = 2/(1 + √(1 − ρ̃²)), where ρ̃ is the
bound of Theorem 3.6 for the 2D splitting, approximately minimizes ρ(L_ω). The
spectral radius of the iteration matrix is then approximately ω̃ − 1.
The Taylor expansion (3.9) of the estimate for the optimal relaxation parameter has
the form 2 − c₀h + O(h²), where c₀ > 0 depends on the PDE coefficients.
Fig. 4.1. The sparsity pattern of the matrix C d .
From (3.9) it follows that the estimated asymptotic rate of convergence of the block
SOR scheme is approximately the second term in (3.9) (with the negative sign re-
moved) and is thus O(h).
4. Near-Property A for 1D splitting of the two-plane matrix. Although
the matrix associated with two-plane ordering does not have Property A relative to
the 1D partitioning, some interesting observations can be made. As before, let {S i,j}
denote the n² × n² blocks of the reduced matrix. Each block S i,j is a block tridiagonal
matrix relative to 2n × 2n blocks. We attach superscripts to mark how far a block
diagonal is from the main block diagonal. See [14] for specification of the entries of
these matrices. As in section 3 (with a slight change in notation), let D − C be the
1D splitting of the matrix, and define the residual part C_d so that S_P = D − (C + C_d).
The matrix D − C has Property A, but S_P does not. Let
us examine the matrix that prevents SP from having Property A, namely, C d . It is
an extremely sparse matrix, and the magnitude of the nonzero values in this matrix
is bounded by 2 if be, cd, fg > 0. The nonzero pattern of C d is depicted in Fig. 4.1.
We wish to estimate how far the reduced matrix SP is from having block Property
A, relative to the 1D partitioning. Let us denote the upper part and the lower part of
C_d by U_d and L_d, respectively, and let Û and L̂ be the upper part and lower part of
Ĉ, respectively. Then the spectral radius of the block Gauss-Seidel matrix satisfies
an inequality whose terms are norms of Û, L̂, U_d, and L_d. The norms associated with
C are significantly larger than the other norms in the inequality, which means that
the spectral radius of the Gauss-Seidel iteration matrix associated with the two-plane
ordering can be estimated by replacing the two-plane matrix by D − C, which does
have Property A and thus is easier to analyze. Alternatively, the
following observation has been obtained by numerical experiments:
(a) ρ_GS versus ρ_J² (centered)      (b) ρ_GS versus ρ_J²
Fig. 4.2. "Near Property A" for the 1D splitting.
Young's analysis can be applied directly to both D − C and D − C_d (both have
Property A), and thus an approximate relationship between the eigenvalues of the
block Jacobi iteration matrix and the eigenvalues of the block Gauss-Seidel iteration
matrix can be obtained.
For be, cd, fg > 0 we have observed that the spectral radius of the block Jacobi
iteration matrix satisfies ρ_J² ≈ ρ_GS.
The first two graphs in Fig. 4.2 illustrate this phenomenon numerically. The broken
lines in graphs (a) and (b) correspond to the square of the spectral radius of the
iteration matrix associated with block Jacobi, for a 256 × 256 matrix. The solid lines
correspond to the spectral radius of the block Gauss-Seidel iteration matrix. As can
be seen, the curves are almost indistinguishable. This phenomenon becomes more
dramatic as the systems become larger.
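The relationship is easy to reproduce on any small consistently ordered example; the following sketch uses a 1D convection-diffusion tridiagonal matrix (an assumption made purely for illustration):

```python
import numpy as np

# Numerical check of rho_GS = rho_J^2 on a consistently ordered example.
n = 50
A = np.diag(2.0 * np.ones(n)) + np.diag(-1.2 * np.ones(n - 1), -1) \
    + np.diag(-0.8 * np.ones(n - 1), 1)
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)
rho = lambda M: max(abs(np.linalg.eigvals(M)))
rho_J  = rho(np.linalg.solve(D, L + U))          # Jacobi iteration matrix
rho_GS = rho(np.linalg.solve(D - L, U))          # Gauss-Seidel iteration matrix
print(rho_J ** 2, rho_GS)                        # nearly equal
```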
Some analysis can be done using Varga's work on extensions of the theory of p-
cyclic matrices [22], [23, Sect. 4.4]. (In this paper we are concerned only with p = 2.)
Recall [23, Def. 4.2], which defines a set S of matrices as follows. The square matrix
B belongs to S if it satisfies the following properties:
1. B has zero diagonal entries.
2. B is irreducible and convergent, i.e., 0 < ρ(B) < 1.
3. B is symmetric.
If be, cd, fg > 0, the reduced matrix SP is a diagonally dominant M-matrix
which can be symmetrized, and D̂^{1/2} is well defined. Define B = I − D̂^{-1/2} Ŝ D̂^{-1/2}.
Applying block Jacobi to the original reduced system is analogous to applying
point Jacobi to the system associated with D̂^{-1/2} Ŝ D̂^{-1/2}, in the sense that the spectra
of the iteration matrices associated with both systems are identical. The iteration
matrix associated with this system is B = D̂^{-1/2} Ĉ D̂^{-1/2}. Showing that the matrix B
belongs to the set S defined above is easy and is omitted. Let L be the lower part of B.
Define M_B(σ) = σL + L^T and m_B(σ) = ρ(M_B(σ)). Let
h_B(σ) = m_B(σ) / (ρ(B) σ^{1/2}).
Then we have [23, Thm. 4.7] (with a slight modification so as to match the
terminology used in this paper), as follows.
Theorem 4.1. Let B ∈ S. Then h_B(σ) ≡ 1 for all σ > 0 if and only if B is consistently
ordered.
In some sense h_B(ln σ) measures the departure of the matrix B from having block
Property A. For matrices that are not consistently ordered, the following result applies
[23, Thm. 4.6].
Theorem 4.2. If B ∈ S, then either h_B(σ) ≡ 1 for all real σ > 0, or h_B(σ) is strictly
increasing for σ ≥ 1. Moreover, for any σ > 0, h_B(σ) = h_B(1/σ).
Figure 4.2(c) demonstrates how close the function h_B is to 1 for the reduced
matrix when 1D partitioning is used and provides another way to illustrate the near-
Property A of the matrix. In the figure, the function h_B is computed for a symmetrized
block Jacobi 256 × 256 matrix.
We can now analyze the Gauss-Seidel and SOR schemes. Recall [23, Thm. 4.8]
(slightly modified), as follows.
Theorem 4.3. Let L_{B,ω} denote the SOR iteration matrix. If B ∈ S, then the
Gauss-Seidel iteration matrix, which corresponds to the case ω = 1, satisfies
ρ²(B) ≤ ρ(L_{B,1}) ≤ ρ(B)/(2 − ρ(B)), with equality possible only if B is consistently ordered.
This is a sharpened form of the Stein-Rosenberg theorem [23]. Applying this
theorem to our reduced matrix, we have the following theorem.
Theorem 4.4. If the bound for the block Jacobi iteration matrix tends to the
actual spectral radius as h # 0, then the spectral radius of the block Gauss-Seidel
iteration matrix coincides with the square of the bound for the spectral radius of the
block Jacobi iteration matrix up to O(h²) terms.
Proof. Since the iteration matrix B has the same spectral radius as D^{-1}C, where
S_P = D − C, we can use the bound for the 1D iteration matrix, which was presented
in Theorem 3.6. For simplicity of notation, denote this bound by μ.
Fig. 4.3. Spectral radius of the SOR iteration matrix versus the relaxation parameter. The
uppermost curve corresponds to 1D splitting for the unreduced system, and then we have, in order,
2D splitting for the unreduced system, 1D splitting for the reduced system, and 2D splitting for the
reduced system.
Clearly, since μ has a Taylor expansion of the form 1 − ch² + O(h⁴), it follows
that μ/(2 − μ) and μ² have the same Taylor expansion up to O(h²) terms, of the form
1 − 2ch², with c expressed in terms of the PDE coefficients,
and the same holds for ρ²(B). It has been shown that the bound for ρ(B) is extremely tight as
h → 0, so we can replace the spectral radii by the bounds for the spectral radii
in Theorem 4.3 to obtain the desired result.
The actual meaning of this result is that for systems of equations that are large
enough, the matrix nearly has Property A relative to 1D partitioning, at least as far
as the convergence properties of the block Gauss-Seidel scheme are concerned. Since
the solution process for small mesh Reynolds numbers is more efficient for the 1D
splitting, compared to the 2D splitting, as we shall see in section 5, it was our aim to
overcome the difficulty of not being able to apply Young's analysis directly.
For the block SOR scheme, the upper bound for the spectral radius given in
[23, Thm. 4.9] is not tight. However, it is numerically evident that
the bound for the Jacobi iteration matrix can be effectively used to estimate the
optimal SOR parameter. In Fig. 4.3 we can observe that the behavior for the 1D
splitting is qualitatively identical to the behavior of two-cyclic consistently ordered
matrices. Here we present results for centered difference discretization of the problem
with mesh Reynolds numbers equal to 0.5. The reduced matrix is 256 × 256. In the figure we also present
the behavior of the SOR iteration matrix of the unreduced system.
5. Computational work and numerical experiments. Having done some
analysis, in this section we examine which of the 1D and 2D solvers is more efficient
overall and show that the reduced system is superior to the unreduced system.
5.1. Aspects of computational work. If be, cd, fg > 0, then by [23, Thm.
3.15] or by (3.6) and (3.7), it is evident that the spectral radius of the iteration matrix
associated with the 2D splitting is smaller than that of the 1D iteration matrix.
However, inverting D 1 involves less computational work than inverting D 2 . We now
compare these two solution procedures.
We begin with the block Jacobi scheme. Asymptotically, there is a fixed ratio
of 1.8 between the rates of convergence of the two splittings (see (3.6) and (3.7)). In
rough terms, this number characterizes the ratio between the numbers of iterations until
convergence for the two solvers.
As far as the computational work per iteration is concerned, if D_1 = L_1U_1 and
D_2 = L_2U_2 are the LU decompositions of the matrices of the systems that are to be
solved in each iteration, we can assume that the number of operations per iteration is
approximately the number of nonzeros in L_i + U_i plus the number of nonzeros in the
other part of the splitting. In order to avoid costly fill-in using Gaussian elimination
for D_2 (whose band is sparse), we use instead a technique of inner-outer iterations.
Let k_1 and k_2 denote the number of iterations for the schemes associated with
the 1D splitting and the 2D splitting, respectively. Let us also define cost functions
as follows: c_1(n) and c_2(n) represent the overall number of floating point operations
for each of the solvers, and c_in(n) represents the cost of the inner solve. Then c_1(n)
and c_2(n) are given by (5.1) in terms of k_1, k_2, c_in(n), and the numbers of nonzeros
of the matrices involved. The term nz(X) stands for the number of nonzeros of a
matrix X, and S stands for the reduced matrix.
Proposition 5.1. For n large enough, the scheme associated with the 2D splitting
is cheaper than the one associated with the 1D splitting only if c_in(n) < 15n³.
Proof. If n is large enough we can use the relation k_1 ≈ 1.8 k_2 and refer only to the
leading power of n in the expressions for c_1(n) and c_2(n). So doing, it follows that
c_2(n) < c_1(n) only if c_in(n) < 15n³,
and the result stated in the proposition readily follows.
What is left now is to examine the amount of work involved in solving the inner
system of equations. A natural choice of a splitting for this system is D_2 = D_1 − (D_1 − D_2).
It is straightforward to show the following by Propositions 3.2 and 3.3.
Proposition 5.2. If block Jacobi based on the splitting D_2 = D_1 − (D_1 − D_2)
is used, then the spectral radius of the inner iteration matrix, namely, I − D_1^{-1}D_2, is
bounded by χ/φ, where φ and χ
are defined in (3.3) and (3.4).
For considering methods that are faster than block Jacobi for the inner system,
we have the following useful result.
Proposition 5.3. The inner matrix is block consistently ordered relative to 1D
partitioning.
Proof. The inner matrix is block tridiagonal relative to this partitioning.
We are now ready to prove the main result of this subsection.
Proposition 5.4. If be, cd, fg > 0, then if 1D splitting is used in solving the
inner system, the cost of solving it is higher than 15n³ floating point operations, for
block Jacobi as well as block Gauss-Seidel and block SOR, and thus, for n large enough
and the methods considered in this paper, the 1D solver is faster than the 2D solver.
Proof. The Taylor expansion of the bound in Proposition 5.2 has the constant
leading term 4/9. For h small enough, we can simply examine the leading term: the
bound is approximately 4/9 if block Jacobi is used, and since by Proposition 5.3 the
matrix is consistently ordered, Young's analysis shows that the spectral radius is
approximately 16/81 if block Gauss-Seidel is used and approximately 0.055 if block
SOR with the optimal relaxation parameter is used. For both of these schemes each
iteration costs about 7n³ floating point operations. Since reducing the initial error by
a factor of 10^m takes roughly m/(−log₁₀ ρ) iterations, where ρ is the spectral radius
of the associated iteration matrix, it follows that even for the block SOR scheme with
the optimal relaxation parameter, which is the fastest scheme considered here, after
two iterations the error is reduced only by a factor of approximately 10^{2.5}, which is
obviously far from satisfactory. Thus the iteration count is larger than 2, and the cost
of the inner solve is larger than 15n³ floating point operations.
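The arithmetic in this proof can be checked directly; the values below are the leading terms quoted above, not exact spectral radii.

```python
from math import log10, sqrt

rho_j = 4.0 / 9.0                        # leading term of the inner Jacobi bound
rho_gs = rho_j ** 2                      # 16/81, by Young's analysis
omega = 2.0 / (1.0 + sqrt(1.0 - rho_j ** 2))
rho_sor = omega - 1.0                    # approximately 0.055
digits_per_iter = -log10(rho_sor)        # decimal digits gained per iteration
print(rho_gs, rho_sor, 2 * digits_per_iter)  # two SOR iterations: ~2.5 digits
```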
We remark that an inexact inner solve can also be considered (see, for example,
Elman and Golub's paper on inexact Uzawa algorithms [11]), but this is beyond the
scope of this work.
It is our conclusion that the solver associated with the 1D splitting is more efficient
than the one associated with the 2D splitting if upwind differences are used or if
centered differences with mesh Reynolds numbers smaller than 1 in magnitude are
used.
5.2. Comparison with the unreduced system. One step of cyclic reduction
results in a more complicated difference operator compared to the original, unreduced
system, and a grid which is more difficult to handle as far as ordering of the unknowns
is concerned. Moreover, the unreduced matrix is block consistently ordered relative
to both 1D and 2D splittings (we refer to the straightforward one-line and one-plane
partitionings as the basis for 1D and 2D splittings in case of the unreduced system)
and thus Young's analysis can be easily applied. One could ask, therefore, what the
advantages of using cyclic reduction are. In this subsection we illustrate the superiority
of the reduced system over the unreduced system.
We start with the block Jacobi scheme. For the unreduced system we shall refer
to natural lexicographic ordering of the unknowns, so that the lines of gridpoints are
x-oriented and the planes are x-y oriented. We start by quoting the following result,
given in [14, Sects. 2 and 4].
Lemma 5.5. The spectral radius of the block Jacobi scheme associated with the
1D splitting for the unreduced system is given by (5.4). Its Taylor expansion about
h = 0, denoted by (5.5), has the form 1 − ch² + O(h⁴).
In [14] we have shown that the spectrum of the iteration matrix of the unreduced
system can be found by a sequence of diagonalizations and permutations that form
a similarity transformation of the matrix into a matrix whose associated iteration
matrix is easy to analyze, as far as its spectrum is concerned. The reader is referred
to the proof of [14, Thm. 2.1] for full details. For the 2D splitting a similar procedure
can be applied. The technique we have used is similar to the one presented in [14] and
the algebraic details are omitted.
Lemma 5.6. The spectral radius of the block Jacobi iteration matrix associated
with 2D splitting is given by a closed-form expression analogous to (5.4), and its
Taylor expansion about h = 0 has the analogous form.
The same type of analysis that has been done in the previous section, comparing the
1D splitting to the 2D splitting for the reduced system, is possible for the unreduced
system. Below we sketch the main details: Suppose inner-outer iterations are used in
solving the scheme associated with the 2D splitting. Denote, again, the splitting for
the inner system as D_2 = D_1 − (D_1 − D_2) (D_1 and D_2 are now different from the ones
defined in section 3). Then we have the following proposition.
Proposition 5.7. Consider the unreduced system. Suppose be, cd, fg > 0, n is
sufficiently large, and 1D splitting is used in solving the inner system. Then for the
stationary methods considered in this paper, the 1D solver is faster than the 2D solver.
Proof. The ratio between the asymptotic rates of convergence of the 1D solver
and the 2D solver is 2. The number of nonzeros of the whole matrix is approximately
7n³, the number of nonzeros of D_1 is approximately 3n³, and the number of nonzeros
7n 3 , the number of nonzeros of D 1 is approximately 3n 3 , and the number of nonzeros
of D 2 is approximately 5n 3 . Since the spectral radii for the two splittings are available,
we can find the spectral radius for the iteration matrix of the inner system. Its Taylor
expansion is given by 1
cost functions
analogous to the ones defined in section 3 for the reduced system, and using the same
line of argument, we have
c in (n)
and from this it follows that only if c in (n) < 12n 3 the 2D solver is more e#cient.
However, as in Proposition 5.4, this means at most two iterations of the inner solve
can be performed, which is not enough for the required accuracy.
Since the 1D splitting for both the reduced and the unreduced systems gives rise
to a more efficient solve, we compare these two systems, focusing on this splitting.
See also [14, Sect. 4]. The LU decomposition for the solution of the system in each
iteration is done once and for all (see [12] for operation count) and its cost is negligible
in comparison with the amount of work done in the iterative process.
Each iteration in the reduced system costs about 10n³ floating point operations,
whereas each iteration for the unreduced system costs approximately 7n³ floating point
operations. Hence, the amount of computational work per iteration is
cheaper for the unreduced system by a factor of about 10/7. However, using the
asymptotic formulas (3.6) and (5.5), it is evident that the number of iterations required
for the unreduced system is larger than that required for the reduced system, and in
(a) ρ_GS, centered      (b) ρ_GS, upwind
Fig. 5.1. Comparison between the spectral radii of the Gauss-Seidel iteration matrices of the
reduced and unreduced systems. The uppermost curve corresponds to 1D splitting for the unreduced
system, and then we have, in order, 2D splitting for the unreduced system, 1D splitting for the
reduced system, and 2D splitting for the reduced system.
the worst case, the ratio between the work required for solving the reduced system
versus the unreduced system is roughly (10/7) × (27/40), which is 27/28 and is still
smaller than 1; thus the reduced solver is more efficient. If the convective terms are
nonzero, then this ratio becomes smaller, and in practice we have observed substantial
savings, as is illustrated in the test problem discussed in section 5.3.
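As a quick check of the quoted worst-case figure:

```python
from fractions import Fraction

# (work-per-iteration ratio) x (iteration-count ratio), reduced vs. unreduced
ratio = Fraction(10, 7) * Fraction(27, 40)
print(ratio, float(ratio))    # 27/28, about 0.964 < 1
```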
Moving from comparing the block Jacobi scheme for both the reduced and the
unreduced systems to comparing Gauss-Seidel and SOR is straightforward if Young's
analysis can be used. In section 4 we showed that even though the reduced matrix is
not consistently ordered relative to 1D partitioning, it is nearly consistently ordered.
In general, convergence analysis for the Jacobi scheme does not always indicate the behavior
of the Gauss-Seidel and SOR schemes. Nevertheless, for two-cyclic consistently
ordered matrices (or matrices that are nearly so) the strong connections between the
spectra of the Jacobi iteration matrix and the Gauss-Seidel and SOR iteration matrices
[24] allow us to conclude that once the superiority of the reduced system over the
unreduced system has been shown for Jacobi, this superiority is carried over to the
other stationary schemes. Indeed, our numerical experiments verify this observation,
as is illustrated in section 5.3.
In Fig. 5.1 the superiority of the reduced system over the unreduced system for the
Gauss-Seidel scheme is illustrated numerically. The graphs were created for a small
512-point grid. It is interesting to notice that the reduced 1D Gauss-Seidel iteration
matrix is well behaved (i.e., its spectral radius is significantly smaller than 1), even
for the convection-dominated case, when centered di#erences are used. Convergence
does not occur when the block Jacobi scheme with the same values of mesh Reynolds
numbers is used. We have no bounds on convergence rates for this range of mesh
Reynolds numbers and thus cannot explain this phenomenon analytically.
The superiority of the reduced system is evident also for the SOR scheme (see Fig.
4.3). Notice that for the SOR scheme it is difficult to determine the optimal relaxation
parameter when be, cd, and fg are negative.
Table 5.1
Comparison between iteration counts for the reduced and unreduced systems, for different values
of mesh Reynolds numbers. N/C marks no convergence after 2,000 iterations.

                        Reduced                      Unreduced
System             10     20    100   1000      10     20    100   1000
J,  centered      393    173     53    N/C    1030    444    N/C    N/C
GS, centered      188     77     14    322     492    198    N/C    N/C
J,  upwind         36     …
GS, upwind        219     …

We end this subsection with a remark regarding the case of convection-dominated
equations. Our convergence analysis does not cover the case of mesh Reynolds numbers
that are greater than 1 in magnitude in conjunction with centered difference
discretization. Since the numerical solution might be oscillatory when a centered difference
scheme is used [18], analysis for this case is of less interest. Nevertheless,
Fourier analysis based on Chan and Elman's technique [5], which shows that when
one of the mesh Reynolds numbers tends to infinity the scheme still converges, is presented
in [13].
5.3. Test problem. Consider (1.1), where the right-hand side is such that the
solution for the continuous problem is u(x, y, z) = sin(πx) sin(πy) sin(πz) and the
domain is the unit cube. The Dirichlet boundary conditions in this case are zero. The
performance of the solvers for this specific problem well represents the performance
for other test problems that we have examined.
We have taken the zero vector as our initial guess and have used a fixed tolerance
on the relative residual norm ||r_i|| / ||r_0|| as a stopping criterion (here r_i denotes the
residual at the ith iterate). The
program stopped if the stopping criterion was not satisfied after 2,000 iterations. Our
numerical experiments were executed on an SGI Origin 2000, which has four parallel
195 MHz processors, 512 MB RAM, and 4 MB cache. The program was written in
MATLAB 5.
In the experiments that are presented, the 1D solver is used. In Table 5.1, the
grid is of size 32 × 32 × 32. The matrix of the underlying system of equations is of
size 32,768 × 32,768. In the table, iteration counts for the Jacobi scheme and the
Gauss-Seidel scheme are presented for four values of the PDE coefficients and for two
discretization schemes.
The PDE coefficients referred to in Table 5.1 are specified in (1.1). For the values
of these coefficients in the table, namely 10, 20, 100, and 1,000, the corresponding
values of the mesh Reynolds numbers are 0.1515, 0.3030, 1.515, and 15.15. Notice that
the last two are larger than 1, and so for these values we have no analytical way of
knowing the optimal relaxation parameter and the experiments for these values were
not performed.
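These values are reproduced by the relation Re = ch/2 with h = 1/(n + 1) and n = 32 (stated here as an assumption consistent with the quoted numbers):

```python
n = 32
h = 1.0 / (n + 1)
for c in (10, 20, 100, 1000):
    print(c, c * h / 2)       # 0.1515..., 0.3030..., 1.515..., 15.15...
```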
The following observations can be made.
1. Overall the reduced solver is substantially faster than the unreduced solver.
There are cases where the reduced solver converges whereas the unreduced
solver does not. We remark that in all cases that were examined, the CPU
time for the reduced solver was less (much less in most cases) than the CPU
time for the unreduced system.
2. For coefficient value 20, convergence is faster than for coefficient value 10. This
illustrates a phenomenon, which is supported by the analysis and holds also for the
two-dimensional case [9], that for small-enough mesh Reynolds numbers, the
"more nonsymmetric" systems converge faster than the "close to symmet-
ric" ones (close in the sense of PDE coe#cients close to zero).
3. The upwind difference scheme converges more slowly than the centered difference
scheme when the mesh Reynolds numbers are small in magnitude, but
convergence is extremely fast for large mesh Reynolds numbers. This applies
to both the reduced and the unreduced systems and follows from the fact that
as the PDE coe#cients grow larger, the underlying matrix is more diagonally
dominant when upwind schemes are used.
6. Concluding remarks. We have presented ordering strategies for a cyclically
reduced matrix arising from discretizing a 3D model problem with constant
coefficients. We have derived bounds on convergence rates for block stationary schemes
associated with what we called 1D splitting or 2D splitting. We have compared the
amount of work involved in solving the system with the suggested splittings. In
general, the 1D splitting gives rise to more efficient solvers. Since the matrices associated
with this splitting are not consistently ordered, we have analyzed their departure from
block Property A and have shown that, in fact, these matrices are nearly block consistently
ordered. We have shown, both analytically and numerically, that one step of
cyclic reduction results in a system which is easier to solve, compared to the original,
unreduced system.
Acknowledgments. We would like to thank the referees for their helpful comments,
which substantially improved this manuscript.
REFERENCES

[1] On the use of preconditioned conjugate gradient methods for red-black ordered five-point difference schemes
[2] A compact non-iterative Poisson solver
[3] The direct solution of the discrete Poisson equation on irregular regions
[4] On direct methods for solving Poisson's equations
[5] Fourier analysis of iterative methods for elliptic problems
[6] Use of fast direct methods for the efficient numerical solution of nonseparable elliptic equations
[7] Point cyclic reductions for elliptic boundary-value problems I: The constant coefficient case
[8] Iterative methods for cyclically reduced non-self-adjoint linear systems
[9] Iterative methods for cyclically reduced non-self-adjoint linear systems II
[10] Line iterative methods for cyclically reduced discrete convection-diffusion problems
[11] Inexact and preconditioned Uzawa algorithms for saddle point problems
[12] Matrix Computations
[13] Analysis of Cyclic Reduction for the Numerical Solution of Three-Dimensional Convection-Diffusion Equations
[14] Iterative solution of cyclically reduced systems arising from discretization of the three-dimensional convection-diffusion equation
[15] On the equivalence of certain iterative acceleration methods
[16] Block iterative methods for cyclically reduced matrix equations
[17] A fast direct solution of Poisson's equation using Fourier analysis
[18] Numerical Solution of Convection-Diffusion Problems
[19] Iterative Methods for Sparse Linear Systems
[20] A generalized cyclic reduction algorithm
[21] A cyclic reduction algorithm for solving block tridiagonal systems of arbitrary dimension
[22] A generalization of the Young-Frankel successive over-relaxation scheme
[23] Matrix Iterative Analysis
[24] Iterative Solution of Large Linear Systems
329333 | Efficient support for interactive scanning operations in MPEG-based video-on-demand systems. | In this paper, we present an efficient approachfor supporting fast-scanning (FS) operations in MPEG-based video-on-demand (VOD) systems. This approach is based onstoring multiple, differently encoded versions of the samemovie at the server. A <i>normal version</i> is used for normalplayback, while several <i>scan versions</i> are used for FS. Eachscan version supports forward and backward FS at a givenspeedup. The server responds to an FS request by switching from the normal version to an appropriate scan version.Scanning versions are produced by encoding a sample of theraw frames using the same GOP pattern of the normal version. When a scanning version is decoded and played back atthe normal frame rate, it gives a perceptual motion speedup.By being able to control the traffic envelopes of the scanversions, our approach can be integrated into a previouslyproposed framework for distributing archived, MPEG-coded video streams. FS operations are supported using no or littleextra network bandwidth beyond what is already allocatedfor normal playback. Mechanisms for controlling the trafficenvelopes of the scan versions are presented. The actionstaken by the server and the client's decoder in response tovarious types of interactive requests are described in detail.The latency incurred in implementing various interactive requests is shown to be within an acceptable range. Stripingand disk-scheduling strategies for storing various versions atthe server are presented. Issues related to the implementationof our approach are discussed. | Introduction
The maturing of video compression technologies, magnetic storage subsystems, and broadband networking
has made video-on-demand (VOD) over computer networks more viable than ever. Major
carriers have tested small-scale VOD systems, and companies that provide related services and products
are emerging (cf. [18, 6]). To improve the marketability of VOD services and accelerate their
wide-scale deployment, these services must support user interactivity at affordable cost. At mini-
mum, an interactive VOD service must allow users to dynamically request basic VCR operations,
Part of this paper was presented at the ACM/SPIE Multimedia Computing and Networking Conference, 1998.
y Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ 85718. Tel. (520) 621-
8731. krunz@ece.arizona.edu. This work was partially supported by an NSF CAREER Award ANI-9733143 and
partially by a University of Arizona Faculty-Grant Award.
z Department of Computer Science, University of Maryland, College Park, MD 20742.
such as stop-resume, pause-resume, slow motion, jump (forward or backward), and fast scanning
(i.e., viewing the movie forward or backward at multiple times the normal playback rate).
The difficulty of supporting interactivity in a VOD system varies from one interactive function
to another. A stop, jump, or pause followed by resume are relatively easy to support, as they do not
require more bandwidth than what is required for normal playback. On the other hand, fast-scanning
involves displaying frames at several times the normal rate. Transporting and decoding frames
at multiple times the normal frame rate is prohibitively expensive and is infeasible with today's
hardware decoders. Backward FS is even more difficult to support in compression schemes that
involve motion interpolated frames, such as B frames in the MPEG scheme [17]. In the case of
MPEG, all the reference frames in a Group of Pictures (GOP) must be decoded before B frames of
that GOP can be played back in the reverse order.
Several approaches have been proposed to support interactivity in a VOD system. In [26] interactive
operations, including scanning, are implemented at the client side using prefetched frames.
The attractiveness of this approach lies in its transparency to both the network and the server. How-
ever, if a scanning operation lasts for an extended period of time, a significant portion of the movie
must be prefetched. In addition to the large buffer requirement, excessive prefetching necessitates
requesting the movie long before its commencement. Scanning operations can also be supported by
transmitting frames at multiple times the normal frame rate over a communications channel that is
different from the one used for normal playback [12, 28]. Since at a given point in time only a small
percentage of users are in the interactive mode, the "interactive channel" can be shared by several
users. However, in this case there is a small probability that a request for a FS operation will be
rejected (i.e., FS operations are guaranteed on a statistical basis). During the scanning operation,
video frames must be decoded at multiple times the normal decoding rate.
Interactivity has also been addressed in the context of batching [2, 1, 3, 11, 15, 24, 25, 34]. As an
example, in [2] the authors assume that the VOD server operates in a multicast environment, whereby
multiple instances of each movie are being simultaneously distributed. However, these instants have
different logical times. VCR operations are implemented by moving the user to a multicast group
with an appropriate logical time. Since the number of the different instances of the same movie is
limited, this scheme can only support "discontinuous" VCR functions. The set-top buffer needed to
support FS operations can be excessive if the number of multiple instances is small. Furthermore,
the decoder is still required to process frames at multiple times the normal frame rate to achieve a
FS effect.
FS functions can also be supported by dropping parts of the original compressed video stream
[9, 31, 7, 27]. Dropping aims at reducing both the transport and the decoding requirements of
FS without causing significant degradation in video quality. In MPEG-2, dropping is facilitated
by various modes of scalability (spatial, temporal, and SNR) [17]. Spatial scalability, for example,
provides the means to drop the less important data (the enhancement layer) and maintain the
essential data (the base layer). Typically, dropping is performed after compression, so it must be
done selectively to ensure that the dropped data will not result in significant degradation in video
quality. For example, if whole MPEG frames are to be dropped, then dropping must take into
account the dependency structure of the MPEG sequence. One possibility is to drop all B frames
of an MPEG stream and transmit anchor frames (I and P) [7]. When the transmitted frames are
played back at the normal playback rate, they give the visual perception of a FS. Another alternative
is to drop MPEG frames on a per-GOP basis [9]. Since a GOP interval corresponds to about half
of a second, this approach introduces discontinuities in the movie. A third approach is to skip a
trailing part of each GOP and transmit the first few frames of the GOP [27] (the transmitted frames
must be chosen in such a manner that they can be decoded independently of the skipped frames).
The transmitted frames are then played back at the normal frame rate. A good discussion of these
and other MPEG-related techniques is given in [31]. Instead of dropping frames after compression,
some researchers suggested supporting FS operations using separate copies of the movie that are
encoded at lower quality than the quality of the normal playback copy [30]. These "scan" copies
include only I and P frames (i.e., B frames are not used). This makes it easier to provide backward
fast-scan. The motion vectors of the predicted frames are encoded in such a manner so as to reduce
the artifacts when playing frames in the reverse order.
In this work, we introduce an efficient approach for supporting forward fast-scanning (FFS) and
backward fast-scanning (BFS) in a VOD system. Similar to [30], our approach is based on encoding
separate copies of the movie to be used for FS operations, with each copy being generated by skipping
raw video frames before compression. We refer to these versions as the scan versions, and to the
one used for normal playback as the normal version. Each scan version is used to provide both
BFS and FFS at a given speedup. Scan versions are encoded in a way such that when played back
at the normal frame rate, they give the perception of a faster video in the forward or backward
direction. In contrast to the approach in [30], the scan versions are encoded using the same GOP
of the normal version (B frames are included). The encoding of a scan version is performed in
a manner that enforces a particular time-varying traffic envelope for that version. This form of
rate-controlled compression results in variable picture quality during FS. By making the envelopes
of the scan versions identical or sufficiently close to the envelope of the normal version, FS can be
integrated into a previously proposed framework for the distribution of archived, MPEG-coded video
streams [21, 22]. By generating scan versions that exhibit similar envelopes to the normal version,
FS operations can be made transparent to the underlying network, and they can be supported with
little or no extra bandwidth and at the same decoding rate of normal playback.
The paper is organized as follows. In Section 2 we briefly describe our previously proposed frame-work
for video distribution based on time-varying traffic envelopes. In Section 3 the preprocessing
steps required to support FS operations are presented. Section 4 provides detailed description of
how FS-related interactivity is supported. Signalling between the client and the server is discussed
in Section 5. In Section 6 we discuss disk scheduling that is needed for our proposed FS approach.
Implementation issues are briefly discussed in Section 7. In Section 8 we compare our scheme against
other FS schemes. Finally, Section 9 summarizes the paper and points to open research issues.
Envelope-Based Video Scheduling and Multiplexing
In this section, we give an overview of our previously proposed framework for the distribution of
MPEG-coded video streams. Details can be found in [21, 22, 33]. In this framework, a video
distribution network consists of several fixed-capacity dedicated "bandwidth pipes" that extend from
the server to remote head-end (HE) switches over a public network (Figure 1). These bandwidth pipes
can be, for example, ATM virtual paths (VPs) onto which several video connections are multiplexed.
Clients request videos on demand by sending their requests to the server via one of the HE switches.
Video streams are transported at a constant frame rate, but with per-stream bandwidth that is
video streams
video streams Switch
Video Server
Clients
Clients
Head-End Switch
Public Network
Figure
1: Video distribution network with two HE switches.
significantly less than the source peak rate. Since the frame transmission rate is the same as the
playback rate, prefetching is not needed at the client set-top box. Bandwidth gain is achieved through
statistical multiplexing of MPEG streams that are described by deterministic, time-dependent traffic
envelopes. An envelope here constitutes a time-varying upper bound on the bit rate. It is intended
to capture the periodic structure of an MPEG stream (in terms of the repetition of the GOPs). The
simplest form of our traffic envelope is called the global envelope, and is described as follows: For
the ith MPEG stream, s i , the global envelope is a periodic function (in time) that is parameterized
by the 5-tuple
I (i)
, where I (i)
max is the largest frame of s i (typically, an
I frame), P (i)
max is the largest P or B frame of s i (typically, a P frame), and B (i)
max is the largest B
frame of s i . By construction, I (i)
. The remaining two parameters characterize the
GOP pattern of the ith stream: N (i) is the length of a GOP (I-to-I frame distance) and M (i) is the
P-to-P frame distance. An example of the global envelope is shown in Figure 2.
Based on the global-envelope model, MPEG streams can be appropriately scheduled for multi-
Time (in frame periods)
Bit Rate
global traffic envelope
window-based envelope
Figure
2: Example of global and window-based traffic envelopes (N
plexing at the server. Consider n MPEG video streams, s are destined to the same HE
switch. Let b i (t) be the traffic envelope of s i , whose starting time is given by t i . Let e
N be the least
common multiple of fN (1) ; N g. We define the phase of s i by u
N , with u 1= 0.
In the special case when N describes the frame lag of a GOP of s i relative to the
closest GOP of s 1 . The temporal relationships between the GOPs of the n streams are completely
specified by which is referred to as the arrangement. Let b tot (t) be the traffic
envelope resulting from the superposition of the n streams; b tot
is periodic with period e
N . The peak rate of b tot (t) is given by
b tot
By allocating nC(u; n) of bandwidth to the aggregate traffic, each stream is guaranteed a constant-
frame-rate delivery on an end-to-end basis. For most values of u, C(u; n) (which is referred to as the
per-stream allocated bandwidth (PSAB)) is smaller than the source peak rate. The allocated band-width
nC(u; n) is updated dynamically upon the addition of a new video stream or the termination
of an ongoing one. Stream scheduling is performed only for new video requests, and is done at the
expense of delaying the service of a new request by no more than e
frame periods.
An optimal scheduling policy is one that produces the best arrangement, u , where
u2U
n) (2)
and U is the set of all possible arrangements of n streams. In [21] optimal and suboptimal scheduling
policies were proposed for homogeneous and heterogeneous multiplexed streams that are characterized
by global envelopes. These policies resulted in PSAB of about 40-60% of the source peak
rate (the actual value depends on the envelope). Bandwidth gain can be further improved using
window-based traffic envelopes [33]. In this case, a pre-recorded MPEG sequence is divided into
several segments that have the same number of frames. The time it takes to transmit one segment
is called a window. As in the global-envelope model, five parameters are used to characterize the
window-based envelope. However, in this case the values of the first three parameters (i.e., the
maximum frame sizes) are computed for each segment of the movie (see Figure 2). Several efficient
scheduling schemes were devised under window-based envelopes. Clearly, the smaller the window
size the smaller the amount of allocated bandwidth, but the higher the computational complexity
of updating the allocated bandwidth. For reasonable window sizes, the PSAB is about 15-30% of
the source peak rate (depending on the envelope parameters and the window size). This bandwidth
gain is comparable to the one achieved through video smoothing (e.g., [29, 26, 23, 13, 16]), with the
advantages of (1) not requiring any buffer in the set-top box, (2) not depending on network delay
variation, and (3) having a very small startup delay.
The framework in [21, 22] was originally designed for playback-only VOD, so it did not support
client interactivity. Interactive operations other than FS can be easily integrated into that framework,
and therefore will not be addressed further. In this paper, we focus on FS interactive operations and
their integration into our VOD framework.
3 Preprocessing of Video Movies
3.1 Scan Versions
To support FS operations, the server maintains multiple, differently encoded versions of each movie.
One version, which is referred to as the normal version, is used for normal-speed playback. The other
versions, which are referred to as the scan versions, are used for fast-scanning. Each scan version
is used to support both FFS and BFS at a given speedup. The server switches between the various
versions depending on the requested interactive operation (only one version is transmitted at a given
instant of time). For a given speedup factor s (s - 2), the corresponding scan version is obtained by
encoding a subset of the raw (i.e., uncompressed) frames of the original movie at a sampling rate of
1-to-s. We refer to this sampling rate as the skip factor. Scan versions are encoded using the same
GOP pattern of the normal version, and are transported at the normal frame rate. As a result, it is
easy to show that every raw frame that is encoded as an I frame in a scan version is also encoded as
an I frame in the normal version (i.e., I frames of the scan versions constitute a subset of I frames
of the normal version).
Accordingly, I max in the global envelope of a scan version is less than or equal to I max in the
global envelope of the normal version. This is not the case for P max and B max . Both P and B
frame types involve motion compensation (prediction or interpolation), which exploits the similarities
between consecutive frames to reduce the frame size. Frame skipping increases the differences between
successive images, resulting in larger P and B frames. The impact of frame skipping on the maximum
and average frame sizes is illustrated in Figure 3 for the Race clip. Before skipping, this clip consists of
1000 frames with frame dimensions of 320×240 pixels. For each skip factor, encoding was performed
using an MPEG-2 software encoder with M = 3. The quantization values were set to 8,
10, and 25, for I, P, and B frames, respectively. Part (b) of the figure shows that the average size of
I frames is almost unaffected by frame skipping. In contrast, the average sizes of P and B frames
tend to increase with the skip factor.
Figure 3: Frame sizes of a scan version versus the skip factor (values are normalized with respect to their counterparts in the normal version). (a) Maximum frame sizes. (b) Average frame sizes.
3.2 Controlling the Envelopes of the Scan Versions
As indicated in Figure 3(a), encoding a sample of the raw frames may result in higher values of
To generate scan versions with comparable envelopes to that of the normal version,
the encoding of P and B frames of a scan version must be rate controlled. A common approach to
control the size of an MPEG frame is to vary the quantization factor on a per-frame basis. This
results in variable video quality during FS operations (however, the quality is still constant during
normal playback).
Without loss of generality, we consider the case when the envelopes are global. Extension to
window-based envelopes is straightforward. To bound the sizes of P and B frames of a scan version,
the encoder uses two predefined upper thresholds, T_P^(u) = P_max + S_P^(u) and T_B^(u) = B_max + S_B^(u), where P_max and B_max are for the normal version, and S_P^(u) and S_B^(u) are nonnegative constants. A P frame in a scan version is encoded such that its size is no greater than T_P^(u), and a B frame such that its size is no greater than T_B^(u). If S_P^(u) or S_B^(u) is positive, the envelope of a scan version is allowed to exceed the envelope of the normal version by no more than a fixed amount. In the case of window-based envelopes, T_P^(u) and T_B^(u) vary from one window to another, depending on the variations in P_max and B_max.
After a raw frame of a scan version has been encoded as a P or a B frame, the encoding algorithm
checks whether the size of the compressed frame is below the associated upper threshold. If it is not,
then the quantization factor for the corresponding frame type is increased by one and the raw frame
is re-encoded. This procedure is repeated until the size of the compressed frame is smaller than the
corresponding upper threshold.
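In outline, the re-encoding loop looks like the following sketch, where encode_frame stands in for a real MPEG encoder invocation and the function name is ours.

def rate_controlled_encode(raw_frame, start_q, threshold, encode_frame):
    """Re-encode raw_frame until its compressed size fits under threshold."""
    q = start_q
    data = encode_frame(raw_frame, q)
    while len(data) > threshold:
        q += 1                      # coarser quantization -> smaller frame
        data = encode_frame(raw_frame, q)
    return data, q                  # q seeds the next frame of this type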
Two different approaches can be used to initialize the quantization value when a new P or B
frame is to be encoded. In the first approach (or algorithm), when a frame is to be encoded for the
first time, the encoder starts with the last quantization value that was used in the encoding of the
previous frame of the same type. The main problem with this approach is that the quantization
value might be kept unnecessarily high following the encoding of a very large frame, resulting in an
unnecessarily low quality during FS. While it is important to produce scan versions with comparable
envelopes to that of the normal version, there is little incentive in reducing these envelopes below
the envelope of the normal version.
In the second approach, the encoding algorithm tries to track the nominal quantization value,
which was used in encoding the same type of frame in the normal version. Consider the encoding
of a P frame (similar discussion applies to B frames). In the first encoding attempt, the encoder
checks the final quantization value that was used to encode the previous P frame. If that value is
equal to the nominal quantization value for P frames, then it is taken as the initial quantization
value for the current frame. If on the other hand, the last quantization value of the previous P frame
is larger than the nominal value, then the quantization value for the current frame is initialized to
the last quantization value minus one. After the first encoding attempt, if the resulting frame size is
within the upper bound, the encoder proceeds to the next frame. Otherwise, the quantization value
is incremented and the same raw frame is re-encoded, as in the first approach. The advantage of
the second approach is that it tries to produce a FS effect with the same constant quality of normal
playback, but when this is not possible it minimizes the fluctuation in video quality during FS.
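The initialization rule of the second approach can be summarized by a small helper. This is our paraphrase of the rule above; the (unstated) case of a previous quantizer below the nominal value is mapped to the nominal value.

def initial_quantizer(prev_final_q, nominal_q):
    """Initial quantizer for a new P (or B) frame, second approach."""
    if prev_final_q <= nominal_q:
        return nominal_q            # previous frame ended at the nominal value
    return prev_final_q - 1         # decay back toward the nominal value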
Figure 4 depicts the variations in the quantization values for P and B frames when S_P^(u) = 0.05 P_max and S_B^(u) = 0.05 B_max. In these experiments, the nominal quantization values for P and B frames are 10
and 25, respectively. Note that the quantization factors for type-P and type-B frames are plotted
versus the index of every frame in the scan version (including the indices of I frames).
In the second encoding approach, video quality during FS varies smoothly around the nominal
quality at the expense of an increase in the number of encoding attempts. Since encoding in VOD
is done off-line, the encoding time may be less of an issue than video quality.
Figure 4: Variations in the quantization values during the encoding of a scan version. (a) P frames. (b) B frames.
The two approaches can be contrasted with respect to video quality using the peak signal-to-noise
ratio (PSNR). We use the PSNR of the Y-component of a decoded frame. The PSNR is obtained
by comparing the original raw frame with its decoded version, with encoding done using one of the two algorithms. Figure 5 depicts the resulting PSNR values for the Race movie with S_P^(u) = 0.05 P_max and S_B^(u) = 0.05 B_max. Both approaches achieve acceptable quality (the PSNR is sufficiently large). The
average PSNR value for the 200 frames is 36.9 dB for the first algorithm and 37.5 dB for the second,
i.e., the average quality is slightly better under the second algorithm. The absolute values of the
PSNR do not convey the advantage of the second encoding approach. For this purpose, we compute
the PSNR values for the 200 frames when encoding is done without any constraints (i.e., no upper
bounds are imposed), and use these values as a reference. For each frame, we compute the difference
between its reference PSNR value and the PSNR value resulting from each of the two rate-control
encoding algorithms. These differences are plotted in Figure 6 for a segment of the scan version. In
this figure, a large value indicates a large deviation from the reference PSNR, and thus lower quality.
Clearly, the second algorithm achieves better quality than the first approach, but at the expense of
more encoding attempts.
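For reference, the PSNR of the Y component can be computed as in the following sketch (a standard definition, assuming 8-bit luminance samples; not code from the paper).

import numpy as np

def psnr_y(raw_y, decoded_y, peak=255.0):
    """PSNR between two 2-D luminance arrays of the same shape, in dB."""
    mse = np.mean((raw_y.astype(np.float64) - decoded_y.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak * peak / mse)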
3.3 Storage Overhead
Generating separate copies for FS operations comes at the expense of extra storage at the server. To
evaluate the storage overhead, we take into account the following considerations: (1) BFS and FFS
with the same speedup can be supported using one scan version, (2) I frames of the scan versions need
not be stored, since they are part of I frames of the normal version, and (3) the number of encoded
frames in a scan version is inversely proportional to the skip factor. Given these considerations, the
storage overhead of the scan versions can be computed as follows.
Figure 5: PSNR for encoded frames in a scan version.
Figure 6: Difference in PSNR between constrained and unconstrained encoding.
Without loss of generality, we consider the case of global traffic envelopes. Let f be the number of frames in the normal version.
Of these frames, f/N are I frames, f(1/M - 1/N) are P frames, and f(1 - 1/M) are B frames. The storage requirement of the normal version is given by

W_normal = (f/N) I_avg + f(1/M - 1/N) P_avg + f(1 - 1/M) B_avg

where I_avg, P_avg, and B_avg are the average frame sizes of I, P, and B frames in the normal version. Let P_avg^(s) and B_avg^(s) be the average sizes of P and B frames in a scan version with skip factor s. Since the I frames of a scan version are shared with the normal version and the scan version contains f/s encoded frames, the storage requirement of this scan version is given by

W_scan(s) = (f/s)(1/M - 1/N) P_avg^(s) + (f/s)(1 - 1/M) B_avg^(s)

For n scan versions with skip factors s_1, ..., s_n, the relative increase in the storage requirement is given by

(W_scan(s_1) + ... + W_scan(s_n)) / W_normal
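The storage formulas above translate directly into code; the sketch below uses our own function names and expects measured average frame sizes as inputs.

def w_normal(f, N, M, i_avg, p_avg, b_avg):
    """Storage of the normal version."""
    return (f / N) * i_avg + f * (1 / M - 1 / N) * p_avg + f * (1 - 1 / M) * b_avg

def w_scan(f, N, M, s, p_avg_s, b_avg_s):
    """Storage of one scan version (its I frames are shared, so not counted)."""
    return (f / s) * ((1 / M - 1 / N) * p_avg_s + (1 - 1 / M) * b_avg_s)

def relative_increase(f, N, M, skips, p_avgs, b_avgs, i_avg, p_avg, b_avg):
    total = sum(w_scan(f, N, M, s, p, b)
                for s, p, b in zip(skips, p_avgs, b_avgs))
    return total / w_normal(f, N, M, i_avg, p_avg, b_avg)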
Numerical examples that show the relative increase in the storage requirement are given in Figures 7 and 8 for the Race clip. Figure 7 depicts the relative storage overhead of a scan version as a function of s under different upper thresholds. The upper threshold has a negligible impact on the storage overhead. For s ≥ 4, the storage overhead of a scan version is no more than 25% of the storage requirement of the normal version. Figure 8 shows the increase in storage as a function of the GOP length (N), with the skip factor held fixed. The storage overhead increases slowly with N.
Figure 7: Relative increase in storage as a function of s.
Figure 8: Relative increase in storage as a function of N.
4 Switching Between Normal and Scan Versions
In this section, we describe how switching between versions is used to support various FS-related
operations. The notation that we use to specify a frame consists of a letter for the frame type and
a number that indicates the logical time of that frame (i.e., the time relative to the events in the
movie). This convention applies to all versions. Thus, B16 in a scan version is a B frame that is
obtained by encoding the 16th raw frame of the original movie. This B frame is not necessarily the
16th frame in the temporal order of that scan version (for example, if is the 8th
frame in the temporal order of the scan version).
4.1 Operation During Normal Playback
Because of the interpolative nature of B compression, the decoding of a B frame depends on two
reference frames (I or P ), both of which must be transmitted and decoded before the B frame is
decoded. One of these reference frames comes after the B frame in the temporal order. To enable
continuous playback at the receiver, MPEG frames are transmitted over the network according to
their decoding order. Thus, the transmission (and decoding) order of an MPEG sequence is different
from its temporal (playback) order. An example of the temporal and transmission orders of a normal
version is shown in Figure 9. In order to decode frames B2 and B3, both I1 and P4 must be first
transmitted and decoded. The process of transmitting, decoding, and displaying frames proceeds as
follows. Starting at time 0 (one time unit is taken as one frame period), the server transmits
frames according to their transmission order. Ignoring network delays for the time being, the decoder
receives and decodes frame I1 during the time interval [0; 1). It maintains an uncompressed copy of
this frame to be used in decoding subsequent frames. During the interval [1; 2), frame P4 is received,
decoded, and stored in the frame buffer (note that P4 is decoded with reference to the uncompressed
version of I1). Playback starts with I1, which is displayed in the interval [2; 3), two time units after
it was received. During the same interval, B2 is received and decoded. During the interval [3; 4), B2
is displayed, and B3 is received and decoded. During the interval [4; 5), B3 is displayed, and I7 is
received, decoded, and stored in one of the two frame buffers (at this point the decoder discards I1).
In the subsequent interval, P4 (which has already been received and decoded) is displayed, and B5
is received and decoded using the uncompressed P4 and I7 frames that are in the frame buffer. And
so on. In the above discussion, we have assumed that a frame is received and decoded in one time
unit.
Figure 9: Temporal and transmission orders of a normal version. (a) Temporal order. (b) Transmission order.
The decoder maintains a two-frame buffer, which contains the two most recently decoded reference
frames. An incoming P frame is decoded with reference to the most recent of these two, while an
incoming B frame is decoded with reference to both of them.
4.2 Switching From Normal Playback to FFS
Interactive FS operations are implemented at the server by switching between the normal version and
one or more scan versions. Switching from one version to another is performed on-line, in response
to a client request. In this section, we describe how switching is used to implement FFS.
Similar to the situation during normal playback, the frames of the scan version have a different
transmission order than their temporal order (in the case of FFS, the temporal order of a scan version
is the same as its playback order). Figure 10 depicts the temporal and transmission orders of a scan
version with s = 2. Two successive I frames in a scan version differ in their logical times by sN
frame periods (the logical time is the time relative to the events in the movie).
Figure 10: Temporal and transmission orders of a scan version. (a) Temporal order. (b) Transmission order.
To maintain the GOP periodicity, switching from a normal to a scan version must take place
at an I frame. Furthermore, to enable correct decoding of all P and B frames of both normal and
scan versions, this I frame must be common to both versions. When the FFS request arrives at the
server, the server continues to send frames from the normal version up to (and excluding) the first
P frame that follows a common I frame. From that point and on, the server switches to the scan
version. The example in Figure 11 illustrates the idea. In this example, we use the normal and scan
versions of Figures 9 and 10, respectively. A FFS request arrives at the server after P16 of the normal
version has just been transmitted. In this case, the server continues to send frames from the normal
version up to (and including) B24. This essentially corresponds to continuing to play back frames
from the normal version until the next common I frame (I25). After that, the server switches to the
scan version, starting with P31, B27, B29, etc. Frame P31 is decoded using I25, which is common
to both versions. This example gives the worst-case latency, in which the receiver continues normal playback for sN frame periods from the time the FFS request is issued. Assuming that each GOP of the normal version corresponds to half of a second, the worst-case latency is s/2 seconds (the average latency is s/4 seconds). In requesting FFS, the client is trying to advance fast in the movie. Thus,
extending normal playback by few seconds prior to initiating FFS is acceptable. Note that there is
no disruption in the playback during the transition from normal to FFS. The switching operation is
transparent to the decoder.
Figure 11: Switching from normal playback to FFS.
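Under the stated GOP assumptions, the switch point is simply the next common I frame, which the server could locate as in this sketch (zero-based raw-frame indexing is our own convention; the figures use one-based numbering).

def next_common_i_frame(current, N, s):
    """First raw-frame index >= current that is an I frame in both versions.

    With 1-to-s sampling and a shared GOP length N, common I frames occur
    every s*N raw frames.
    """
    period = s * N
    return ((current + period - 1) // period) * period

# e.g., with N = 6 and s = 2, a request at raw frame 16 switches at frame 24
assert next_common_i_frame(16, 6, 2) == 24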
4.3 Switching From FFS to Normal Playback
We present two different approaches for switching from FFS to normal playback. The first approach
is very similar to the one used to switch from normal playback to FFS. Upon receiving a request for
normal playback, the server continues to send frames from the scan version until the next common
I frame (it also sends the B frames of the scan version that precedes this I frame in the temporal
order, but comes after it in the transmission order). After that, the server switches to the normal
version. This approach is illustrated in the example in Figure 12 (N = 6, M = 3, and s = 2). In this example, the movie is in FFS mode when the client requests normal playback during the display of frame
B23. At that point, the server has just transmitted P31. Ideally, the receiver should proceed by
playing back frames with indices 24, 25, 26, etc. To decode B24 the decoder needs P22 of the normal
version. However, if the server transmits P22, this frame will eventually be played back, causing
undesirable artifacts. Neither can the server start from P28 of the normal version, since the decoder
will incorrectly decode this frame with reference to P31 of the scan version. To ensure that switching
to normal playback is done with all received frames being decoded properly, with no artifacts, and
without any modifications to the normal operation of the decoder, the server must continue to send
frames from the scan version until the next common I frame (I37). After that, it switches to the
normal version, starting with P40, which can be decoded properly using I37. This means that FFS
will be extended beyond the point at which normal playback was requested. In the worst case, normal playback is resumed at a logical time that is s/2 seconds from the logical time of the FFS-to-normal request (since N frames of the scan version correspond to s/2 seconds worth of video). However, it takes a maximum of only 1/2 second of real time to reach the appropriate switching point.
Figure 12: First approach for switching from FFS to normal playback.
Received & Decoded: I25 B21 B23 P31 B27 B29 I37 B33 B35 P40 B38 B39 I43
Displayed: B17 P19 B21 B23 I25 B27 B29 P31 B33 B35 I37 B38 B39
The above approach has the advantage of being transparent to the decoder (i.e., the decoder need
not know that a switch from a scan version to the normal version has occurred). However, it has
the disadvantage that normal playback resumes at a later point than requested, with a worst-case
difference of s=2 seconds of movie time. To reduce this extra FFS, we introduce a second switching
approach, in which normal playback resumes from the subsequent I frame of the normal version.
This I frame is not necessarily common to both scan and normal versions. We will describe this
approach with reference to the example in Figure 13 (N = 6, M = 3, and s = 2). As shown in the figure, the client requests resume-playback while B27 of the scan version is being displayed. Normal
playback is resumed starting from the subsequent I frame of the normal version whose logical time is
closest to the logical time of the resume-playback request. In our example, this frame is I31. During
the switching process, the client continues to display frames from the scan version (i.e., extended
FFS) until the frame with the closest logical time to I31. In our example, this frame is P31. Frame
P31 is never displayed, but is only used to decode B27 and B29 of the scan version. After receiving
and displaying I31 of the normal version, the movie pauses at that frame until P34 of the normal
version has been received and decoded (since two reference frames are needed to stream the playback
process). During the pause period, the decoder ignores any frames sent by the server (in Figure 13
such frames are represented by 'X' for `don't care'). Of course, a mechanism is needed to inform the
decoder when to start accepting and decoding incoming frames. Such a mechanism will be described
in a later section. Note that this switching approach is not transparent to the decoder.
It can be shown that the logical time of the last displayed frame from the scan version is no farther
than s frame periods from the logical time of the subsequent I frame of the normal version. Thus,
in the transition from the scan version to the normal version, the speedup of the motion picture is
somewhere between normal playback and FFS (in our example, B29 of the scan version was followed
by I31 of the normal version, and the transition appears as a continuation of FFS).
In the second FFS-to-normal switching approach, normal playback resumes at a logical time that is no farther than 1/2 second (N frame periods) from the logical time of the FFS-to-normal request. This is compared to s/2 seconds in the first approach. The maximum time that a client has to wait before normal playback is resumed is N/s frame periods of extended FFS plus M frame periods of pause. This amounts to 1/(2s) + M/2N seconds, which is less than one second.
Figure 13: Second approach for switching from FFS to normal playback.
Received & Decoded: P31 B27 B29 I31 X X P34 B32 B33 I37 B35
Displayed: B23 I25 B27 B29 I31 I31 I31 I31 B32 B33 P34
4.4 Switching From Normal Playback to BFS
Instead of generating a distinct scan version for BFS, we use one scan version for both FFS and BFS
that have the same speedup. In this case, the dependency structure of an MPEG sequence must be
taken into account when transmitting frames during BFS and when decoding and displaying these
frames at the client side. We first consider switching from normal playback to backward playback, which is a special case of BFS with s = 1.
4.4.1 Normal Playback to Backward Playback
To implement backward playback (BPB), the server uses a different transmission order than the one
used during (forward) normal playback. After receiving a BPB request, the server initiates the BPB
operation starting from the subsequent reference frame (I or P) of the current GOP. Before initiating
BPB, the decoder must receive and decode the reference frames of the current and previous GOPs.
Consider the situation in Figure 14. The temporal order of the underlying MPEG sequence is shown
in Part (a) of the figure. Suppose that a BPB is issued during the playback of B36. The subsequent
reference frame that follows B36 in the temporal order is I37. Thus, the client continues normal
playback until the display of I37. Meanwhile, the server continues sending the frames of the normal
version that are needed to maintain normal playback until (and including) I37. After that, the server
starts sending the reference frames of the current and previous GOPs. A maximum of 2N/M reference
frames need to be decoded and stored in the frame buffer before BPB is initiated. In our example,
the client had already decoded and stored I37 when the BPB was issued. Hence, before initiating
BPB the decoder must receive and decode I28, P31, and P34 of the current GOP as well as I19 and
P22 of the previous GOP. While these frames are being transmitted and decoded, the movie pauses
for a maximum duration of two GOPs (one second). To minimize the pause duration, the following
guidelines are followed whenever possible: (1) reference frames of the present GOP are sent before
reference frames of the previous GOP, and (2) reference frames of a given GOP are sent according
to their decoding order. However, ensuring the GOP periodicity of the transmitted sequence is more important than satisfying these two guidelines. Thus, these guidelines can be violated if necessary. For example, in Figure 14(b) frames I28, P31, and P34 must be sent in this order according to the first guideline. But the first available slot to send an I frame while maintaining the GOP periodicity is time slot # 42, whereas the server can send a P frame during slot # 39. Thus, the server sends P31 during slot # 39 and I28 during slot # 42, violating the first guideline.
In the process of building up the reference frames at the decoder, the server need not send any B frames in between (otherwise, these B frames are ignored at the decoder). The resulting "empty" slots in the transmission sequence are indicated by 'X's (for 'don't care') in Figure 14(b). Once
the required reference frames are received and decoded, BPB can be initiated. Note that some P
frames are decoded M periods after they are received, so they must be temporarily stored in their
compressed format. After all the reference frames of the previous GOP have been received and decoded, B frames can be received, decoded, and displayed in the backward direction.
Clearly, the management of the frame buffer at the decoder must be modified to support BPB.
Instead of storing two reference frames, the decoder must store a maximum of 2N/M uncompressed
frames. A mechanism is needed to signal to the decoder that the transmitted reference frames are
for BPB. Such a mechanism will be described in Section 5. Once the decoder receives an indication
that BPB has been requested, it modifies its management of the frame buffer to accommodate up to 2N/M uncompressed frames. Figure 15 depicts the change in the content of the frame buffer in our
example. Note that the BPB request was issued during the decoding of P40, which under normal
playback replaces P34. Thus, by the time the decoder starts modifying its buffer management, the
uncompressed P34 (or parts of it) has already been discarded, and P34 must be retransmitted.
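The modified buffer management can be sketched as follows. The eviction rule (discard the reference frame with the largest logical time, as in Figure 15) and the class structure are our own rendering of the description above.

class BpbReferenceBuffer:
    """Decoder-side buffer of uncompressed reference frames for BPB."""

    def __init__(self, N, M):
        self.capacity = -(-2 * N // M)     # ceil(2N/M) reference frames
        self.frames = {}                   # logical time -> decoded frame

    def add(self, logical_time, decoded):
        if len(self.frames) >= self.capacity:
            # during BPB, the newest (largest-time) frame is no longer needed
            self.frames.pop(max(self.frames))
        self.frames[logical_time] = decoded

    def get(self, logical_time):
        return self.frames[logical_time]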
4.4.2 Normal Playback to BFS
In addition to reversing the playback direction, a BFS request also involves switching from the normal
to the scan version. In this section, we present two different approaches to supporting BFS.
First Approach
In this approach, BFS is initiated at a common I frame. Following a BFS request, normal playback
continues until the display of a common I frame. Thus, the server continues sending frames from
Figure 14: Switching from normal playback to backward playback. (a) Temporal order of the normal version. (b) Received, decoded, and displayed frames.
Time Uncompressed frames in buffer
36 P40, I37
37 I37 (P40 is discarded)
42 I28, I37
48 P34, P31, I28, I37
54 P22, I19, P34, P31, I28, I37
57 P25, P22, I19, P34, P31, I28
Figure 15: Content of the frame buffer during the transition from normal playback to BPB.
the normal version for a short period following the receipt of a BFS request. After that, the server
switches to the scan version. As in the case of normal-to-BPB, the server must first send all or
most of the reference frames of the current and previous GOPs of the scan version (for a maximum
of 2N/M reference frames). Consider the example in Figure 16, in which the scan version has the following parameters: N = 9, M = 3, and s = 2. A BFS request is issued during the playback
of frame B72 of the normal version. Normal playback continues until the display of I73; the first
common I frame. The server receives the BFS request during the transmission of P76. Thus, all
the frames that precede I73 in the playback order have already been sent. The server switches to
the scan version and starts sending the reference frames of the current and previous GOPs. In this
example, frames I37, I55, P61, and P67 must be received and decoded before BFS is initiated (in
general, BFS cannot be initiated before all frames of the current GOP and one or more frames of
the previous GOP are decoded). These frames are transmitted following the same guidelines that
are used to support BPB. Thus, the transmission order is P61, I55, P67, and I37, while the decoding
order is I55, P61, P67, and I37. As in the case of BPB, no B frames need to be transmitted during
the buildup of the reference frames. Also, some P frames are decoded a few frame periods after they
are received, so they must be temporarily stored in their compressed format. During the buildup
of the reference frames, the movie pauses at frame 73. Once frame I37 is received and decoded,
the transmission, decoding, and reverse playback of the scan version can be streamed, and BFS is
initiated. Part (c) of Figure 16 depicts the change in the content of the frame buffer.
In the previous example, the BFS request was issued just before the display of a common I
frame, which is a rather best-case scenario. The worst-case scenario occurs when BFS is requested
just after the display of a common I frame. In this case, normal playback continues for an additional
sN frame periods (s=2 seconds) until the next common I frame is encountered. This extra normal
playback is followed by a pause period that lasts for no more that two GOPs (one second), during
which reference frames are being accumulated in the decoder.
Second Approach
An alternative approach is to initiate BFS from the "closest" reference frame (I or P) of the scan
version. When a BFS request is issued, the movie pauses immediately at the currently displayed
frame (which could be of any type). The client identifies the reference frame of the scan version
that has the closest logical time to (but no larger than) the logical time of the currently displayed
frame. BFS is initiated from this reference frame. When the server receives the BFS request, it starts
sending the reference frames of the last two GOPs up to (and including) the designated reference
frame. Thereafter, the process is similar to the one used in the first BFS approach.
Figure 16: First approach for switching from normal playback to BFS. (a) Temporal order of the scan version. (b) Received, decoded, and played frames. (c) Content of the frame buffer at the decoder:
Time   Uncompressed frames in buffer
71     I73, P70
72     P76, I73
73     I73 (P76 is discarded)
78     I55, I73
84     P67, P61, I55, I73
87     I37, P67, P61, I55, I73
90     P43, I37, P67, P61, I55 (I73 is discarded)
93     P49, P43, I37, P61, I55 (P67 is discarded)
As an example, consider the situation in Figure 17. Since the BFS request is issued during the playback of B80, the movie pauses at that frame. The reference frame of the scan version that is
closest (in logical time) to the current logical time is P79. Thus, BFS will be initiated from P79. But
before that, the decoder must receive P79, P61, I73, P67, P43, I55, and P49 (in this order); and must
decode I73, P79, I55, P61 (in this order). Once this is done, the process of transmitting, decoding,
and displaying frames for the purpose of BFS can be streamed, similar to the first approach. The
maximum duration of the pause period is given by 2N + M frame periods (one second plus M/2N of a second), which is independent of s. This is slightly higher than the worst-case pause period in
the first approach, but there is no extra normal playback as in the first approach.
Figure 17: Second approach for switching from normal playback to BFS.
4.5 Switching From BFS to Normal Playback
The easiest way to resume normal playback following BFS is to initiate the normal playback from
an I frame that is common to both the normal and the scan versions (this is analogous to the
first approach for switching from normal playback to BFS). Thus, when the client requests normal
playback, the movie remains in the BFS mode until a common I frame is encountered. At worst,
normal playback is resumed at a logical point that is s/2 seconds (in movie time) from the logical time at which the resume was requested, but it takes only a maximum of 1/2 second to reach this common I frame (each GOP of the scan version corresponds to a sampled video segment of duration s/2 seconds, yet it takes only 1/2 second to play back this GOP). Upon receiving the resume-playback request, the server switches to the normal version, starting from the common I frame. Since two reference frames are needed in the frame buffer to stream the decoding process, the movie pauses at the common I frame for no more than M frame periods (M/2N of a second). This pause is needed to decode the P frame of the normal version that follows the common I frame. After that,
normal playback can be resumed. The switching process is illustrated in the example in Figure 18.
Figure 18: Switching from BFS to normal playback.

Requested Operation                 Switching Delay (in seconds)
Normal-to-FFS                       s/2
Normal-to-BFS (1st approach)        s/2 + 1
Normal-to-BFS (2nd approach)        1 + M/2N
FFS-to-normal (1st approach)        1/2
FFS-to-normal (2nd approach)        1/(2s) + M/2N
BPB-to-normal                       1/2 + M/2N
BFS-to-normal                       1/2 + M/2N
Table 1: Worst-case switching delay associated with various interactive operations.

Other forms of interactivity include switching between FFS and BFS without going through
normal playback. Also, if the VOD system supports multiple FS speedups, switching can take place
between two scan versions that have different speedups. These and other scenarios can be dealt with
using similar approaches to the ones we described (with slight modifications to fit the specifics of
each scenario). Due to space limitations, we do not elaborate further on these scenarios.
We define the latency of an interactive operation as the difference between its time of request
at the client side and its initiation time on the client's display device. This latency measures the
actual waiting time of the client. It consists of: (1) roundtrip propagation time (RTT) between
the client and the server, (2) processing delay at the server, and (3) "switching delay", which is
the delay caused by switching from one version to another (it includes the time needed to reach an
appropriate switching point and the time needed to build up the frame buffer in BFS operations).
The second component of the latency is relatively small, and can be ignored. The RTT depends
on the underlying network topology. In [28] the authors report one-way propagation delays of up to 50 milliseconds for a wide-area ATM network, and less than 10 milliseconds for ATM LAN
connections. Table 1 summarizes the worst-case switching delay for various types of requests. This
delay is measured in real time (not the logical time of the movie).
For typical values of N, M, and s, the worst-case switching delay associated with common interactive operations ranges from a fraction of a second to three seconds. This delay can be further reduced by using a smaller GOP length (N) or by reducing s.
However, reducing the skip factor will increase the storage requirement of the scan version (since
more frames are generated), while reducing the GOP length will increase the storage needed for the
normal and the scan versions and will potentially reduce the efficiency of the underlying envelope-
based scheduling mechanism. Tuning the above parameters requires careful consideration of the
involved tradeoffs.
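The Table 1 expressions are easy to evaluate; the sketch below uses illustrative parameter values chosen by us.

def switching_delays(N, M, s):
    """Worst-case switching delays (seconds) from Table 1."""
    return {
        "normal-to-FFS": s / 2,
        "FFS-to-normal (1st)": 1 / 2,
        "FFS-to-normal (2nd)": 1 / (2 * s) + M / (2 * N),
        "BPB-to-normal": 1 / 2 + M / (2 * N),
        "BFS-to-normal": 1 / 2 + M / (2 * N),
    }

for op, d in switching_delays(N=9, M=3, s=4).items():
    print(f"{op}: {d:.3f} s")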
5 Signalling
Signalling between the client and the server must be extended to allow the decoder to distinguish
between various versions. For this purpose, we use an in-band signalling mechanism based on the
user-data field in the header of an MPEG frame. Each frame carries in its header the value of the
skip factor and the playback direction (forward or backward). This information can be conveyed
using one byte in the user-data field. The most significant bit of this byte encodes the playback
direction while the other seven bits encode the skip factor (in fact, four bits are enough to represent the supported skip factors, so the remaining three bits can be used to convey other types of
information). For P and B frames of a given version, the skip factor is inserted in the frame header
during the encoding of that version. In contrast, for I frames the skip factor is inserted during
transmission since some of these frames are common between two or more versions. For all frames,
the playback direction is added on the fly during the transmission of the MPEG stream. This can be
done efficiently since user data in the frame header are byte aligned and are located at a fixed offset
from the beginning of the MPEG frame. The server can insert user-data bytes with minimal parsing
of the MPEG stream. Information about the GOP structure of a version is included in the sequence
header and in the header of the first GOP. This information can be used to allocate memory for the
frame buffer during the initial signalling phase.
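The one-byte signalling field described above can be packed and unpacked as in this sketch (bit layout as described; the function names are ours).

def pack_signal(backward, skip):
    """MSB = playback direction, low seven bits = skip factor."""
    assert 0 <= skip < 128
    return (0x80 if backward else 0x00) | skip

def unpack_signal(byte):
    return bool(byte & 0x80), byte & 0x7F

assert unpack_signal(pack_signal(True, 4)) == (True, 4)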
6 Disk Scheduling
Compressed videos are typically stored on disk in units of retrieval blocks, where each block consists
of one or more consecutive GOPs. Our stream switching scheme requires storing multiple versions
of the same MPEG movie. A straightforward approach is to store each version separately as a
self-contained MPEG stream. The main disadvantage of this approach is that it wastes some disk
space by separately maintaining the I frames of each scan version (although these frames are already
included in the normal version). If the cost of disk space is not a major issue, then this storage
approach is preferred for its simplicity. Otherwise, the duplicate I frames can be eliminated, and
a method for "intermixing" the versions of the same movie is needed. This can be accomplished if
the structure of the retrieval block is extended so that it can accommodate all the scan versions of a
movie. Let s_1, s_2, ..., s_k be the k skip factors supported by the system, with s_1 < s_2 < ... < s_k. Let s_lcm be the least common multiple of these skip factors. Each block consists of s_lcm GOPs from the normal version plus s_lcm/s_i GOPs from the ith scan version, for all i = 1, ..., k. This way, each block contains portions of the normal and scan versions that correspond to the same segment of the movie. Since the I frames of the scan versions are not duplicated, the resulting block consists of s_lcm N + (s_lcm/s_1 + ... + s_lcm/s_k)(N - 1) frames. Excessively large block
sizes can be avoided by appropriate choice of the skip factors so that their least common multiple
is small. Frames within a block are organized as follows. First, the I frames of the s lcm GOPs of
the normal version are put at the beginning of the block (no separate I frames are generated for the
scan versions). They are followed by P and B frames of the same version, then P and B frames of
the first scan version (preferably the one with the smaller skip factor), then P and B frames of the
next scan version, and so on until the frames of all scan versions are included. This structure allows
for efficient disk access since related data are stored consecutively on disk and no extra disk-head
movements are needed to access "out-of-stream" I frames.
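The block composition translates into a simple frame count, sketched below with our own function name; it assumes one I frame per GOP, as in Section 3.3.

from math import lcm

def block_frame_count(N, skips):
    """Frames per composite retrieval block for GOP length N."""
    s_lcm = lcm(*skips)
    normal_frames = s_lcm * N                              # full normal GOPs
    scan_frames = sum((s_lcm // s) * (N - 1) for s in skips)  # I frames shared
    return normal_frames + scan_frames

# e.g., N = 9 with skip factors 2 and 4: lcm = 4
assert block_frame_count(9, [2, 4]) == 4 * 9 + (2 + 1) * 8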
In the above scheme, the frames in a block are not ordered according to their transmission
order. So they have to be rearranged before being sent over the network. Such a rearrangement can
be efficiently achieved by allowing the envelope-scheduling module (which is responsible for sending
frames to the network) to have random access to buffered frames. In this way, any transmission order
can be achieved without data movement in memory. In order to be able to manipulate individual
frames, knowledge of the location of each frame within the retrieval block is necessary. This is
accomplished by associating a small directory of indices with each retrieval block. The directory can
be computed when the movie is initially stored on disk and can be maintained in a main memory
database since its size will be small.
During playback of the normal version, only the first part of the retrieval block (which includes
I , P and B frames of the normal version) is retrieved from disk, with no waste of I/O bandwidth.
When a scan version is to be retrieved, two alternatives exist. The first alternative is to read the
whole block and discard frames that do not belong to the target scan version. The other alternative is
to read only the frames of the target scan version in two reads: one for the I frames at the beginning
of the block and one for the P and B frames of the target scan version. The directory associated
with the block is used to locate the appropriate frames inside the retrieval block. The first approach
is simpler but wastes I/O bandwidth during FS periods. The second alternative requires two reads
per block but eliminates the waste in I/O bandwidth, especially when there are several scan versions
per movie.
Another issue is the placement of blocks within the disk subsystem. In a multi-disk system,
blocks are typically striped among different disks in order to maximize the disk throughput and
balance its I/O load. Examples of striping schemes can be found in [32, 5, 10]. A conventional block
placement approach such as the ones in [19] can be easily adapted to our framework. In particular, if
the retrieval block is composed of frames from all versions of the movie, then the resulting composite
stream is striped similar to a typical MPEG stream. On the other hand, if different versions of
the movie are stored independently, then each version can be placed on disk independently using,
for example, one of the algorithms in [19]. Finally, block retrieval during playback is performed
using algorithms that attempt to minimize disk head movements. We handle block retrieval with
the SCAN algorithm [14], which sorts the blocks to be retrieved by cylinder location. Blocks that
are at the outermost cylinders are serviced first as the head moves towards the innermost cylinders.
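A minimal rendering of this retrieval order, assuming cylinder numbers increase toward the innermost cylinder:

def scan_order(requests, outermost_first=True):
    """SCAN ordering of pending requests, given as (cylinder, block_id) pairs."""
    return sorted(requests, reverse=not outermost_first)

# service order: b2 (cyl 12), b5 (cyl 140), b7 (cyl 310)
print(scan_order([(310, 'b7'), (12, 'b2'), (140, 'b5')]))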
7 Implementation Issues
The feasibility of our stream scheduling and multiplexing approach was demonstrated in [20] using
a specific hardware setup. We now briefly discuss general implementation issues related to this
approach. Since our approach relies on time-varying envelopes, timing considerations are crucial to
its operation. For this purpose, two important modules must be implemented at the video server:
stream manager and envelope scheduler. Both modules coordinate their operation with the disk
scheduler that is used for prefetching video blocks.
7.1 Stream Manager
The main purpose of this module is to handle client requests for new movies as well as requests for
interactive operations. In its simplest form, the stream manager consists of a user-level process. This
process establishes a bandwidth pipe to the destination switch, and then waits indefinitely for client requests (e.g., 'listens' at a given port). When a request for a new video arrives at the server, the stream manager queries the envelope scheduling module about the admissibility of the new stream. It then provides information to the disk I/O subsystem on how to retrieve the movie data and place them in the buffers of the envelope scheduling module. In the case of a FS request, the stream manager is responsible for adding the stream-switching information that is needed by the client
decoder (e.g., speedup and direction of playback).
7.2 Envelope Scheduler
This module is responsible for envelope-based stream scheduling, multiplexing, and admission control.
Upon receiving a request for a new stream, the envelope scheduler computes the best phase for
scheduling this stream. For this purpose, it maintains a "bandwidth table" of dimension n × Ñ, where n is the number of ongoing streams. Each row describes the traffic envelope of one active stream, taking its relative phase into account. An additional row is needed to give the aggregate bandwidth in each of the Ñ successive time slots. From the bandwidth table and the envelope of the prospective stream, the envelope scheduler can easily determine the best phase for the new stream and the associated bandwidth. Similarly, it can check for the admissibility of the new stream. If the stream is found admissible, the envelope scheduler updates the bandwidth table by incorporating
the envelope of the new stream. An analogous procedure is used when an ongoing stream is to be
terminated.
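The following sketch shows one way the bandwidth table and the admission test could be organized; the class layout, the explicit link-capacity check, and all names are our own assumptions rather than the paper's implementation.

class BandwidthTable:
    """Aggregate envelope over Ñ (= n_slots) time slots plus admission test."""

    def __init__(self, n_slots, link_capacity):
        self.n_slots = n_slots
        self.capacity = link_capacity
        self.aggregate = [0.0] * n_slots

    def admit(self, envelope):
        """Find the phase minimizing the new peak; admit the stream if it fits."""
        best_ph, best_peak = None, None
        for ph in range(self.n_slots):
            peak = max(self.aggregate[t] + envelope[(t - ph) % self.n_slots]
                       for t in range(self.n_slots))
            if best_peak is None or peak < best_peak:
                best_ph, best_peak = ph, peak
        if best_peak > self.capacity:
            return None                       # reject the new stream
        for t in range(self.n_slots):
            self.aggregate[t] += envelope[(t - best_ph) % self.n_slots]
        return best_ph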
Once a request is accepted, data can be retrieved from the disk subsystem in units of blocks. The
block size depends on the underlying striping mechanism, but typically consists of several GOPs.
Ideally, we would like to retrieve data on a frame-by-frame basis, but this level of fine granularity is
not feasible in current disk systems. The envelope scheduler maintains a per-stream buffer space that
is used to temporarily store the retrieved data. Statistical multiplexing is implemented in software
using a high-priority process that executes periodically every 1/f seconds, where f is the frame rate (e.g., 30 frames/s for NTSC-formatted video). At the start of each period, this process reads one frame
from every per-stream buffer, and sends these frames over the network. Clearly, the timeliness of
this process impacts the effectiveness of our multiplexing approach. This timeliness can be easily
ensured in a real-time operating system (OS). For other OS's that do not support near-deterministic
execution of processing tasks, various approaches can be used to increase the priority of the process
performing the multiplexing task. For example, in some flavors of Unix it is possible to assign
a negative priority to this process, giving it higher priority in execution over all other user-level
processes. Another possibility is to implement this process as part of the OS kernel, which gives
the process a high priority and ensures its timeliness. Such an approach was used in the design of
the Stony Brook server [31], which was based on the FreeBSD 2.4 Unix OS. Yet, another approach
to performing the multiplexing process is to implement this process in the network device driver
[20]. In [20] the multiplexing process was implemented in the device driver of the ATM network
adaptor card (NIC). Communications between the stream manager (a user-level process) and the device driver were provided by an extended set of system calls derived from a standard UNIX system call.
8 Comparison with Other Schemes
In this section, we compare our FS approach to the following approaches: (1) multicast-based stream
switching [2], (2) contingency-channel-based [12], (3) the Stony Brook server [31], (4) prefetching
[26], (5) GOP-skipping [8], (6) partial-GOP-skipping [27], and (7) skipping B/P frames [7]. A brief
discussion of these approaches was given in Section 1. The comparison is performed with respect to the factors in the first column of Table 2. Because of the difficulty of quantifying certain factors and the lack of detailed information about certain FS approaches, we settle for a qualitative comparison in which a scheme is given one of three grades for each examined factor: (G)ood, (F)air, or (P)oor. The comparison is only meant to convey the tradeoffs provided by different schemes. We
now comment on the examined factors.
Video data are retrieved from disk in units of blocks, which are temporarily stored in the server's
main memory before being sent over the network. Therefore, the memory requirement at the server
depends on the block size. This, in turn, depends on the underlying disk scheduling approach. In our
scheme, there are different ways for storing scan versions on disk. When scan versions are intermixed
with the normal version so that each block is composed of GOPs from both, the block size will be
relatively large, resulting in large server-memory requirement. Other schemes will generally have
smaller block sizes than ours.
Client resources refer to the memory and CPU requirements that are needed to process and decode
a received frame. To provide backward FS operations, our scheme requires buffering a maximum of 2N/M reference frames. This is larger than in post-encoding frame-skipping schemes (which require
the buffering of two reference frames only), but lower than the prefetching approach in which a large
amount of video data must be prefetched into the client's set-top box. Also, the client processing
requirement in our scheme is lower than that of the contingency-channel scheme, in which the client
has to decode and display data at multiple times the normal playback rate.
Bit-rate control refers to the flexibility in trading off the visual quality during FS for a lower
bit rate. With regard to this factor, partial-dropping schemes perform poorly since only a limited
reduction in the bit-rate is achievable during FS. In contrast, stream-switching schemes (ours and
Stony Brook's) achieve good degree of bandwidth control since they both use pre-encoded scan
versions. Between the two, our scheme provides tighter control on the resulting bit rate. The bit
rate injected into the network can also be controlled, to some extent, in the prefetching approach.
Visual quality includes the quality of the displayed video during FS operations, the continuity
of this video (i.e., amount of disruption due to video gaps), and any artifacts caused by the delay
in the initiation of an interactive operation. These factors are hard to measure quantitatively. In
general, we expect stream-switching schemes to give better performance than single-stream schemes.
Compared to the Stony Brook approach [31], our scheme is expected to result in better visual
quality during backward FS periods (the latter scheme requires modifying the motion vectors of
the backward scan versions). Other frame-skipping schemes result in progressively worse quality,
as larger parts of the MPEG stream are skipped. Prefetching and contingency-channel approaches
result in very good visual quality at the expense of extra client memory and network bandwidth.
Performance guarantees refer to mathematically proven bounds on the response time of an interactive
operation. The response time is the duration from the instant the client issues a FS request
until FS is initiated at the client display. It includes both transport and processing delays. Only our
scheme is capable of providing such bounds.
Functionality refers to the flexibility in supporting FS requests (e.g., number of speedups, duration
of a FS period, allowable sequence of interactive operations, etc.). Our scheme and the Stony Brook
scheme achieve high functionality since they do not impose any limitation on the time and durations
of the FS operations. In the prefetching approach, the duration of a FS operation is limited by the
size of the memory of the set-top box (which typically holds a small portion of the video movie).
Post-encoding frame-skipping schemes provide a limited number of FS speedups. In the contingency-
channel approach, the interactive operation may be denied if many users are in the interactive mode
(i.e., interactivity is supported only on a statistical basis). In the multicast approach, short FS
periods are supported using the locally stored data. Extended FS requires switching to a different
multicast group (with a different logical playback time). In general, interactivity is more difficult to
support in the multicast approach.
In terms of the required network bandwidth for FS operations, our approach uses almost the same
amount of bandwidth that is needed for normal playback. The per-stream bandwidth during FS
operations is also small in the multicast and the contingency-channel approaches (its value depends
on the number of active sources). In the prefetching approach, if FS is supported locally, then no
extra network bandwidth is needed for FS operations. Similarly, no extra bandwidth is required in
the GOP-dropping approach. Partial dropping schemes are less efficient in terms of FS bandwidth
requirement (e.g., dropping B frames causes the average bit rate of an MPEG sequence to increase
drastically).
The storage requirement is relatively high for schemes that use multiple copies per movie (ours
and Stony Brook's). If no duplication of I frames is done, then the storage overhead in our scheme
is less than that of Stony Brook's. All single-copy schemes have lower storage requirements.
In terms of the complexity of disk scheduling, schemes that involve skipping parts of a GOP
require a relatively complicated disk scheduling subsystem that carefully places data on disk, so
that the disk load is balanced during both normal and scan periods. Stream switching schemes also
require slightly more sophisticated disk scheduling to support switching between different copies. In
the contingency-channel scheme, the need to retrieve frames at multiple times the normal playback
rate further complicates the disk scheduling subsystem. In general, disk scheduling for interactive
VOD is inherently sophisticated because of the unpredictable pattern of client interactivity.
Factor                  Ours  [2]  [12]  [31]  [26]  [8]  [27]  [7]
Server Memory            F     G    G     G     G     G    G     G
Client Resources         F     G    P     G     P     G    G     G
Bit-Rate Control
Visual Quality           G     P    G     G     G     P    P     P
Performance Guarantees   G
Network Bandwidth        G     G    G     F     G     G    F     P
Storage Requirement      P     G    G     P     G     G    G     G
Scheduling Complexity    F     G    P     F     G     G    G     G
Table 2: Comparison of different approaches to support FS operations.
9 Summary and Future Work
In this paper, we presented an approach for supporting interactive fast-scanning (FS) operations in
a VOD system. This approach is integrated into a previously proposed framework for distributing
archived, MPEG-coded video streams over a wide-area network. Scanning operations are supported
by generating multiple, differently encoded versions of each movie. In addition to a normal version
that is used for normal playback, several scan versions are maintained at the server. Each scan
version is obtained by encoding a sample of the raw frames, and is used to support both forward
and backward fast scanning at a given speedup. The server responds to a FS-related request by
switching from the currently transmitted version to another version. By proper encoding of the
scan versions, interactive scan operations can be supported with little or no extra bandwidth and
with the same decoding requirement of normal playback. This gain comes at the expense of small
storage overhead at the server and some variability in the quality of the motion picture during
the fast-scanning periods. Our scheme does not impose any restriction on the number, spacing, or
sequencing of interactive operations.
VOD clients should be given the flexibility to choose from a set of available VOD services that
offer different levels of interactivity. Billing would then be done based on the quality and flexibility
associated with the selected service. Our future work includes developing a multi-level QoS framework
for interactive VOD. Each level corresponds to a certain degree of interactivity, which could include
some limitations on the interactive functions (e.g., number of supported speedups, visual quality
during scanning, maximum duration of a scanning operation, etc.).
--CTR
Kostas E. Psannis , Yutaka Ishibashi, MPEG-4 interactive video streaming over wireless networks, Proceedings of the 9th WSEAS International Conference on Computers, p.1-7, July 14-16, 2005, Athens, Greece | video scheduling;MPEG;scanning operations;interactive video-on-demand |
329345 | Providing QoS guarantees for disk I/O. | In this paper, we address the problem of providing different levels of performance guarantees or quality of service for disk I/O. We classify disk requests into three categories based on the provided level of service. We propose an integrated scheme that provides different levels of performance guarantees in a single system. We propose and evaluate a mechanism for providing deterministic service for variable-bit-rate streams at the disk. We will show that, through proper admission control and bandwidth allocation, requests in different categories can be ensured of performance guarantees without getting impacted by requests in other categories. We evaluate the impact of scheduling policy decisions on the provided service. We also quantify the improvements in stream throughput possible by using statistical guarantees instead of deterministic guarantees in the context of the proposed approach. | Introduction
System level support of continuous media has been receiving wide attention. Continuous media
impose timing requirements on the retrieval and delivery of data unlike traditional data such as
text and images. Timely retrieval and delivery of data requires that the system and network pay
attention to notions of time and deadlines. Data retrieval is handled by the I/O system (File system,
disk drivers, disks etc.) and the delivery is handled by the network system (network software and
the network). In this paper, we will look at the data retrieval problem.
Different levels of service can be provided for continuous media. Deterministic service provides
guarantees that the required data will be retrieved in time. Statistical service provides statistical
guarantees about data retrieval, e.g., 99% of the requested blocks will be retrieved in time. Data
streams can be classified as Constant Bit Rate (CBR) or Variable Bit Rate (VBR) depending on
whether the stream requests the same amount of data in each interval.
A storage system will have to support requests with different performance requirements based
on the application needs. Continuous media applications may require deterministic performance
guarantees, i.e., a guarantee that a requested block will be available within a specified amount of time,
continuously during the application's execution. A request from an interactive game or a request
to change the sequence of frames in a continuous media application may require that the request
have a low response time, i.e., may require a latency guarantee. A regular file request may only
require best-effort service but may require that a certain number of requests be served in a given
time, i.e., may require a throughput guarantee. It may be desirable to provide both deterministic
service and statistical service to VBR streams in the same system. Deterministic service may be too
expensive on the system's resources. A user may request statistical service when a request for
deterministic service may be denied due to lack of resources. There is a clear need for supporting
multiple levels of performance guarantees within the storage system. Several interesting questions
need to be addressed when multiple levels of QOS need to be supported in the same system: (a)
how to allocate and balance resources for the different QOS levels, (b) how to control and limit
the usage of resources to allocated levels, (c) how to schedule different requests to meet the desired
performance goals, (d) how do system level parameters and design decisions affect the different types
of requests and (e) how to tradeoff performance goals for higher throughput (for example, how much
throughput gain can be had with statistical guarantees rather than deterministic guarantees)?
Providing deterministic service at the disk is complicated by the random service time costs
involved in disk transfers (because of the random seek and latency overheads). This problem has
been addressed effectively by suitable disk scheduling policies [1, 2, 3, 4, 5, 6]. These scheduling
policies group a number of requests into rounds or batches and service the requests in a round
using a disk seek optimizing policy such as SCAN. Then the service time for the entire round can
be bounded to provide guarantees. This strategy works well with CBR streams. An evaluation
of tradeoffs in a media-on-demand server can be found in [7]. However, with VBR streams, the
workload changes from round to round and hence such an approach will have to consider the
variations in load for providing guarantees.
This paper addresses the problem of providing different levels of service for different classes of
requests in a single system. This paper makes the following two significant contributions: (1) an
integrated scheme is presented for providing different levels of performance guarantees to different
classes of requests. (2) a method is presented for providing deterministic guarantees for VBR
streams that exploits statistical multiplexing of resources. The paper also presents an evaluation
of tradeoffs in providing deterministic and statistical guarantees.
Section 2 discusses our approach for providing different levels of QOS in a single system.
Section 2 also proposes a method for providing deterministic service for VBR streams that allows
exploitation of statistical multiplexing across many request streams. Section 3 discusses some of
the other related issues such as data layout. Section 4 presents a performance evaluation of these
schemes based on trace-driven simulations. Section 5 summarizes our results and points out future
directions.
2 Performance Guarantees
In this paper, we consider three different categories of requests. Periodic requests require service
at regular intervals of time. Periodic requests model the behavior of video playback where data
is retrieved at regular intervals of time. Periodic requests can be either CBR or VBR. Interactive
requests require quick response from the I/O system. Interactive requests can be used to model the
behavior of change-of-sequence requests in an interactive video playback application or the requests
in an interactive video game. These requests arrive at irregular intervals of time. Aperiodic requests
are regular file requests. In this paper, we consider (a) deterministic or statistical guarantees for
periodic requests, (b) best-effort low response times for interactive requests and (c) guaranteed
minimum level of service or bandwidth guarantees for aperiodic requests.
Our approach to providing QOS guarantees at the disk is shown in Fig. 1. Disk bandwidth is
allocated appropriately among the different types of requests.

Figure 1: Supporting multiple QOS levels. (Periodic, interactive, and aperiodic requests each pass through a class-specific admission controller/scheduler into a pool of requests to be scheduled; the disk scheduler draws the scheduled requests from this pool.)

Each category of requests employs an
admission controller to limit the disk utilization of these requests to their allocated level. To provide
throughput guarantees for aperiodic requests, we limit the allocated bandwidth for periodic and
interactive requests (< 100%) through admission control. Aperiodic requests utilize the remaining
disk bandwidth. Similar approaches have been independently proposed recently in [8, 9]. Both
these schemes employ a two-level scheduling approach as proposed here. The work in [8] shares
many of the motivations of our work. The scheduler in [9] doesn't support quick response to
interactive requests. Our work here also proposes a scheme for allowing statistical multiplexing of
VBR streams while providing deterministic guarantees for them.
The admission controllers employed for periodic requests and interactive requests depend on
the service provided for these requests. In the next section, we discuss how to provide deterministic
service for periodic requests. The proposed approach can be modified to implement statistical
guarantees for periodic requests as well. Interactive requests are treated as high-priority aperiodic
requests in our system. The scheduler and the admission controller are designed to provide low
response times for these requests. We use a leaky-bucket controller for interactive requests. The
leaky-bucket controller controls the burstiness of interactive requests (by allowing only a specified
number of requests in a given window of time) and thus limits the impact they may have on other
requests.
The admission controller for each class of requests controls the number of requests entering the
pool of requests and also the order in which the requests enter the pool. These controllers, besides
enforcing the bandwidth allocations, control the policy for scheduling the requests in that class of
service. This can be generalized to a larger number of request classes, each with its own admission
controller/scheduler. The disk level scheduler schedules requests from the request pool to meet the
performance criteria of individual requests.
In our system, it is assumed that the requests are identified by their service type at the scheduler.
The scheduler is designed such that it is independent of the bandwidth allocations. This is done
such that the bandwidth allocation parameters or the admission controllers can be changed without
modifying the scheduler.
We first describe the overall functioning of the disk scheduler and then describe how admission
control is implemented for each class of requests.
2.1 Scheduling for multiple QOS levels
Since the different classes of requests do not have strict priorities over each other, priority scheduling
is not feasible. Periodic requests have to be given priority over others if they are close to missing
deadlines. But, if there is sufficient slack time, interactive requests can have higher priority such that
they can receive lower latencies. Periodic requests are available at the beginning of the round and
interactive and aperiodic requests arrive asynchronously at the disk. If periodic requests are given
higher priority and served first, aperiodic and interactive requests will experience long response
times at the beginning of a round until periodic requests are served. Moreover, it may be possible
to better optimize seeks if all the available requests are considered at once.
The disk scheduler uses a round based scheme for scheduling the requests from the candidate
pool. Each admission controller schedules the requests in its class and releases them as candidate
requests at the beginning of the round. Each admission controller ensures that its class doesn't take
any more time than allocated in a round. The disk scheduler combines the requests and serves them
together to meet performance goals of individual requests. If all the requests arrive at the beginning
of a round, the disk scheduler will not have to worry about deadlines since the admission controllers
enforce the time constraints. However, aperiodic and interactive requests arrive asynchronously.
To schedule these requests as they arrive (without waiting for the beginning of next round), the
disk scheduler uses the notion of a subperiod. The disk scheduler considers the available slack time
of periodic requests and adjusts the schedule to incorporate any arriving interactive and aperiodic
requests each subperiod.
The aperiodic requests are queued into two separate queues. The first queue holds requests
based on the minimum throughput guarantee provided to these requests. Scheduling these requests
will not violate any time constraints since these requests are within the allocated bandwidth. The
second queue holds any other requests waiting to be served. The scheduler considers the requests
from the second queue after periodic requests and interactive requests are served such that these
requests can utilize the unused disk bandwidth.
The scheduler merges the periodic requests and aperiodic requests (from queue 1) into a SCAN
order at the beginning of a round. These requests are then grouped into a number of subgroups
based on their location on the disk surface. The scheduler serves a subgroup of requests at a time.
Later arriving aperiodic requests (of queue 1) are, if possible, merged into remaining subgroups.
The scheduler considers serving interactive requests only at the beginning of a subgroup i.e., the
disk SCAN order is not disturbed within a subgroup. To provide quick response times for interactive
requests, the SCAN order may be disturbed at the end of subgroups. When possible, the scheduler
groups a waiting interactive request into the closest subgroup and serves that group next to minimize
the seek overhead in serving these requests. Interactive requests are queued on a first-come first-serve
basis to limit the maximum response time of a single request. If sufficient slack time exists,
waiting interactive requests are first served before moving to the next subgroup of requests. The
response times for interactive requests are hence determined by the burstiness of the interactive
requests and the size of the subperiod. The size of the subperiod can be decreased if tighter
latency guarantees are required. Arranging requests into subgroups also allows the scheduler to
communicate to the device driver in an efficient manner while ensuring that the requests are not
reordered by the scheduling algorithm within the disk drive. A more formal description of the
scheduler is given in Fig. 2.
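A minimal Python sketch of the subgroup formation step follows (the fixed-size grouping and the request attribute cylinder are our own simplifications; the paper groups requests by their location on the disk surface):

    def build_subgroups(periodic, aperiodic1, group_size=4):
        """Merge the round's periodic and guaranteed aperiodic requests into
        SCAN order, then split them into subgroups served one at a time."""
        requests = sorted(periodic + aperiodic1, key=lambda r: r.cylinder)
        return [requests[i:i + group_size]
                for i in range(0, len(requests), group_size)]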
While (true) {
    Combine periodic and aperiodic1 requests into SCAN order;
    Break the requests into subgroups;
    Compute the service estimate for the above requests;
    while (not end of round) {
        Pick the first interactive request, if any;
        Can it be combined with one of the remaining subgroups?
        If (no) {
            while (estimated service time < slack time)
                Service waiting interactive requests;
            Serve the closest subgroup;
        }
        else Serve the merged subgroup;
        Adjust slack time;
        while (slack time > 0) {
            Combine aperiodic1 requests into existing subgroups;
            Adjust slack time;
        }
        if (all periodic requests served) {
            Continue serving interactive & aperiodic requests until end of round;
        }
    }
}

Figure 2: Semi-formal description of the scheduler.

Claim: If each group abides by the bandwidth allocation, i.e., the service time for group i is at
most T_i, then the combined schedule for all the requests takes at most the sum of the T_i, where
T_i is the time allocation for group i within a round.

Proof: We only need to consider seek times since other components of the service time don't
change due to merging of requests. (Actually, the rotational latencies could change, but since we are
using worst-case estimates, they don't impact the estimates.) Without loss of generality, consider
two groups of requests. Group 1 has requests a, b and Group 2 has requests c, d. There are two
possible cases: the groups overlap on the disk surface or not.

Case 1: Overlap. The requests are as shown in Fig. 3 on the disk surface.
Figure 3: Overlapping request groups. (Disk head at the left; requests in the order a, c, b, d on the disk surface; Group 1: a, b; Group 2: c, d.)

Figure 4: Nonoverlapping request groups. (Disk head at the left; requests in the order a, b, c, d on the disk surface; Group 1: a, b; Group 2: c, d.)
The two groups separately incur seek times S_1 = s_0a + s_ab and S_2 = s_0c + s_cd, where s_xy
denotes the seek time between requests x and y, and s_0x the seek time from the initial head
position to request x. When merged, the requests are served in the single SCAN order a, c, b, d
with seek time S_merged = s_0a + s_ac + s_cb + s_bd. We need to show that S_1 + S_2 >= S_merged.
Since seek time grows with seek distance, s_0c >= s_ac, s_ab >= s_cb and s_cd >= s_bd, so
the above is true.
Case 2: No Overlap. The requests are as shown in Fig. 4 on the disk surface. The two groups
separately incur seek times S_1 = s_0a + s_ab and S_2 = s_0c + s_cd. When merged, the requests
are served in the single SCAN order a, b, c, d with seek time S_merged = s_0a + s_ab + s_bc + s_cd.
We need to show that S_1 + S_2 >= S_merged, i.e., that s_0c >= s_bc,
which is clearly true since the head reaches c only after passing b.
If all the requests arrived at the beginning of the round, the above claim is sufficient to
prove that guarantees will be met if the individual groups observe the bandwidth allocations.
Interactive requests disturb the SCAN schedule and hence are assumed to require worst-case seek
time such that servicing these requests won't violate the bandwidth allocations of other requests.
However, aperiodic and interactive requests arrive asynchronously. Hence, the slack times need to
be considered so as to not violate the deterministic guarantees for VBR streams while scheduling
these late arriving requests.
2.2 Deterministic guarantees for VBR streams
Providing deterministic service for VBR streams is complicated by the following factors: (i) the load
of a stream on the system varies from one round to the next, (ii) scheduling the first block doesn't
guarantee that the following blocks of the stream can be scheduled. To ensure that all the blocks
required by a stream can be retrieved, we can compute the peak rate of the stream and reserve
enough disk bandwidth to satisfy the peak requirements of the stream. Resource allocation based
on the peak demands of the stream will underutilize the disk bandwidth since the peak demand is
observed only for short durations compared to the length of the stream. However,
when many streams are served in the system, the peaks do not necessarily overlap with each other
and it may be possible to serve more streams than is allowed by peak-rate allocation. Can
we exploit this statistical multiplexing to increase the deterministic service provided by the system?
We propose an approach that allows the system to exploit statistical multiplexing while providing
deterministic service.
Disk service is broken into fixed size time units called rounds or batches. Each round may span
seconds of time ([1]). In our approach, an application requiring service for a VBR stream
supplies the I/O system with a trace of its I/O demand. This data could be based on frame rate
i.e., given on a frame to frame basis or could be more closely tied to the I/O system. Specifying
the load on a frame basis is more flexible and the application doesn't have to be aware of how the
I/O system is organized (block size or round size). If the I/O system's block size is known and the
duration of each round is known, then the trace can be compacted by specifying the I/O load on
a round by round basis in terms of the blocks. For example, a frame by frame trace may look like
83,888, 9,960, 10,008, 27,044, ..., which indicates the number of bits of data needed to display each
frame. If the round size is say 2 frames i.e., 1/12th of a second, and the I/O system uses a block size
of 4KB, then the compacted trace would have ceil((83,888 + 9,960)/(4 * 1024 * 8)) = 3 blocks in the first entry.
The second entry would have ceil((10,008 + 27,044)/(4 * 1024 * 8)) = 2 blocks.
Hence, the equivalent compacted trace for the stream would be 3, 2, .... A 40,000 frame trace
of the movie "Silence of the Lambs" (24 frames/second) requires 203,285 bytes on a frame by frame
basis compared to a 3,333 byte description of the same movie when compacted with the knowledge
of the round size of 0.5 seconds and a block size of 8KB. It is assumed that this information is
available to the I/O system in either description and we will call this the demand trace. Compared
to the size of the movie file (about 1 GB for 90 minutes of MPEG-1 quality), the size of the demand
trace file is not very significant.
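The compaction step can be written down directly (a minimal sketch; the function name and the use of Python are ours):

    import math

    def compact_demand_trace(frame_bits, frames_per_round, block_bytes):
        """Collapse a frame-by-frame bit trace into blocks needed per round."""
        block_bits = block_bytes * 8
        trace = []
        for start in range(0, len(frame_bits), frames_per_round):
            round_bits = sum(frame_bits[start:start + frames_per_round])
            trace.append(math.ceil(round_bits / block_bits))
        return trace

    # The example from the text: 2 frames per round, 4KB blocks.
    print(compact_demand_trace([83888, 9960, 10008, 27044], 2, 4 * 1024))
    # -> [3, 2]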
The I/O system itself keeps track of the worst-case time committed for service in each round
at each of its disks in the form of a load trace. Before a stream is admitted, its demand trace is
combined with the load trace of the appropriate disks to see if the load on any one of the disks
exceeds the capacity (committed time greater than the length of the round). The load trace of
a system consists of load traces of all the disks over sufficient period of time. This requires the
knowledge of the placement of blocks of the requesting stream. This information can be obtained
from the storage volume manager.
A stream is admitted if its demand can be accommodated by the system. It is possible that
the stream cannot be supported in the round the request arrives. The stream scheduling policy will
look for a round in which this stream can be scheduled. We will assume that a stream will wait for
a maximum amount of time, given by latency target, for admittance. Let load[i][j] denote the load
on disk i in round j. Let the demand of a stream be given by demand[j] indicating the number
of blocks to be retrieved by that stream in round j. Then, a stream can be admitted if there
exists a k such that load[i][j + k] + serv time(demand[j]) <= round time, for all j, where i is the disk
storing data for round j, and k is the startup latency, with k <= latency target. If multiple disks may store
the data required by a stream in a round, the above check needs to be appropriately modified to
verify that these disks can support the retrieval of needed data. The function serv time() estimates
the worst-case service time required for retrieving a given number of blocks from a disk given the
current load of the disk. This function utilizes the current load of the disk (number of requests
and blocks) and the load of the arriving request to estimate the worst-case time required to serve
the new request along with the already scheduled requests. A similar check can be applied against
buffer resources when needed. The demand trace of the application may include the extra block
accesses needed for metadata.
Given a latency target L and the length of the demand trace d, the admission controller requires
at most Ld additions to determine if a stream can be admitted. In the worst case, for each starting
round, the admission controller finds that the very last block of the stream cannot be scheduled.
On an average, the admission controller requires less computation per stream. If necessary, latency
targets can be reduced to limit the time taken by the admission controller.
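The admission test above can be sketched as follows (a hedged illustration: serv_time and disk_of are caller-supplied stand-ins for the worst-case service estimator and the volume manager's placement lookup, and all names are ours):

    def admit(load, demand, disk_of, serv_time, round_time, latency_target):
        """Return the first feasible start round k <= latency_target, or None.

        load[i][j] -- worst-case time already committed on disk i in round j
                      (the load trace must extend len(demand) + latency_target
                      rounds into the future)
        demand[j]  -- blocks the new stream needs in its j-th round
        """
        for k in range(latency_target + 1):
            feasible = True
            for j, blocks in enumerate(demand):
                i = disk_of(j)  # disk holding the stream's data for round j
                if load[i][j + k] + serv_time(i, j + k, blocks) > round_time:
                    feasible = False
                    break
            if feasible:
                # Commit the stream: update the load trace.
                for j, blocks in enumerate(demand):
                    i = disk_of(j)
                    load[i][j + k] += serv_time(i, j + k, blocks)
                return k
        return None

This is the source of the worst-case bound of Ld additions: at most L candidate start rounds, each checked against up to d entries of the demand trace.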
The proposed approach allows the load across the disks to be "smoothed" across the different
streams being served by the system. Individual stream smoothing is considered in a number of
studies, for example in [10], to reduce the variations of demand of a single stream. These techniques
typically prefetch blocks ahead of time to optimize desired characteristics (reduce peak rate, reduce
demand variations, etc.) of an individual stream. It is possible to apply individual stream smoothing
techniques in addition to the proposed technique of smoothing demands over different streams. A
recent related study [11] showed that individual stream smoothing didn't offer significant additional
benefit when applied along with the proposed approach.
2.3 Latency and bandwidth guarantees
Latency guarantees are provided by the disk scheduler as explained earlier. When requests arrive
randomly, a burst of requests can possibly disturb the guarantees provided to the periodic requests.
To avoid this possibility, interactive requests are controlled by a leaky-bucket controller which
controls the burstiness by allowing only a certain maximum number of interactive requests served
in a given time window. For example, when interactive request service is limited to, say, 5 per
second, the leaky-bucket controller will ensure that no more than 5 requests are released within a
second to the scheduler irrespective of the request arrival behavior. Hence, an interactive request
can experience delay in the controller as well as at the scheduler for service. If sufficient bandwidth
is allocated for these requests, the waiting time at the controller will be limited to periods when
requests arrive in a burst. Interactive requests are scheduled in a FIFO order to limit the queuing
times of individual requests. We will use maximum response time as a performance measure for
these requests. Sophisticated admission controllers (that take request burstiness into account) [12]
can be employed if it is necessary to limit the waiting times of interactive requests at the admission
controllers.
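A minimal sketch of such a window-based leaky-bucket controller (the paper gives no code; the class and parameter names are ours):

    from collections import deque

    class LeakyBucket:
        """Release at most max_requests interactive requests per window seconds."""
        def __init__(self, max_requests, window):
            self.max_requests = max_requests
            self.window = window
            self.release_times = deque()

        def try_release(self, now):
            # Drop release timestamps that have fallen out of the window.
            while self.release_times and now - self.release_times[0] >= self.window:
                self.release_times.popleft()
            if len(self.release_times) < self.max_requests:
                self.release_times.append(now)
                return True   # pass the request on to the scheduler
            return False      # request waits at the controller

With max_requests=5 and window=1.0, this enforces the "5 per second" example above regardless of the arrival pattern.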
Aperiodic requests are provided bandwidth guarantees by restricting the periodic and interactive
requests to a certain fraction of the available bandwidth. The admission controllers for periodic and
interactive requests enforce the bandwidth allocations. Aperiodic requests utilize the remaining
I/O bandwidth. If periodic and interactive requests cannot utilize the allocated bandwidths,
aperiodic requests are allowed to utilize the available bandwidth to improve the response times
for aperiodic requests. Bandwidth guarantees are provided to aperiodic requests by ensuring that
a certain minimum number of requests is scheduled every round.
3 Other issues
3.1 Stream scheduling
Stream scheduling deals with the issue of scheduling an arriving VBR stream. By greedily scheduling
a stream as early as possible, we may impact the possibility of scheduling other streams in the
future. If scheduling a stream immediately after its arrival saturates the capacity of a disk, that
disk would be unavailable in that round for any other service and hence can affect schedulability
of other streams. This issue is explored by evaluating a number of scheduling strategies.
All the scheduling algorithms discussed below use a latency target as a parameter. A stream
is said to be unschedulable if it cannot be scheduled within a fixed time interval (specified by the
latency target) after the arrival of the request. If a stream arrives at time t, all the slots within the
time (t, t +L) are considered for scheduling a stream, where L is the latency target. However, the
order in which these slots are considered and the criterion for selection among the choices (if any)
is determined by the stream scheduling policy. In the policies described below, if the starting point
s for scheduling a stream is not t, after reaching t + L, the policy wraps around to t and explores
the options between t and s.
In greedy scheduling, a stream is scheduled as soon as it can be from the time the request arrives
at the system. In random start policy, a stream is scheduled greedily from a random starting point
within the latency target window. In last scheduled policy, a stream is scheduled greedily from the
scheduled point of the last stream. In fixed distance policy, a stream is scheduled greedily from a
fixed time away from the last scheduled stream's scheduled point. In the minimal load policy, a stream
is scheduled at a point that minimizes the maximum load on any disk in the system. In prime
hopping policy, instead of serially looking at the time slots from a starting point, slots a prime
distance away are considered. For example, if the request arrives at time 0, a random starting
point s is chosen. Then rounds s, s + p, s + 2p, ... are considered until the stream can be
scheduled. Since p is prime, all the rounds within the latency target window will be considered.
All the policies except the minimal load policy, in the worst case, require O(Ld) time, where L is
the latency target and d is the length of the demand trace. The minimal load policy, in the worst
case, requires O(Ld +LN) time, where the additional O(LN) time is needed for choosing the slot
that minimizes the maximal load on the N disks.
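The policies above differ mainly in the order in which candidate start slots inside the latency window are examined; a sketch of that ordering for three of them (the hop distance p and all names are our own illustration):

    import random

    def candidate_slots(t, L, policy, p=97):
        """Yield start rounds in the order a given policy examines them."""
        if policy == "greedy":
            return [t + i for i in range(L)]
        if policy == "random_start":
            s = random.randrange(L)           # random offset, then wrap around
            return [t + (s + i) % L for i in range(L)]
        if policy == "prime_hopping":
            s = random.randrange(L)           # hop a prime distance each step
            return [t + (s + i * p) % L for i in range(L)]
        raise ValueError(policy)

Strictly, the prime hop visits all L slots only when p does not divide L; choosing p prime makes this easy to ensure.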
Latency target impacts stream throughput in two ways. A larger target allows us to search
more slots to find a suitable starting point, and it allows the scheduled streams to be spread out
more from each other, thus allowing a future stream to find enough I/O bandwidth to be scheduled.
The stream scheduling problem can be considered in two ways. In the first, given an existing load
on the system, can an arriving stream be scheduled without disturbing the guarantees of already
scheduled streams? This is the problem we consider in this paper. The scheduling decisions are
made one at a time. Another interesting problem arises in capacity planning [13]: can the system
support the load of a given set of streams? This problem utilizes the information about all the
streams at once to answer the question of whether the system can support such a load (with the
required guarantees). It can be shown that the stream scheduling problem is closely related to the bin
packing problem which is known to be NP-hard [14].
3.2 Data layout
Data layout plays a significant role in the performance of disk access. It has been suggested by
many researchers that video data should be striped [15] across the disks for load balancing and to
improve throughput available for a single data stream [16]. Data for a VBR stream can be stored
in (i) Constant Data Length (CDL) units (ii) Constant Time Length (CTL) units [17, 18]. In CDL,
data is distributed in some fixed size units, say 64KB blocks. In CTL, data is distributed in some
constant time units, say 0.5 seconds of display time.
With CDL layout, when data is striped across the disks in the system, data distribution is
straightforward since each disk stores a block in turn. With CDL, data will be retrieved at varying
rates based on the current rate of the stream. When data rate is high, data is requested more often.
The variable rate of data retrieval makes it hard to combine such a policy with round-based seek
optimizing policies. To make it possible to combine CDL layout with such seek optimizing policies,
we consider data retrieval separately from the layout policy. Instead of retrieving one block at a
time, display content for a constant unit of time is requested from the I/O system at once. For
example, if a stream requires 2, and 3 blocks in two consecutive rounds, CTL layout will have these
blocks on two disks, say A and B, A in round 1 and B in round 2. CDL layout will have data
for round 1 on disks A and B and data for round 2 on disks C, D and A if there are 4 disks (A,
B, C and D) in the system. However, in both the data layouts, data required by the application
in a round is retrieved at once at the beginning of the previous round. It is noted that the data
required for display in a unit of time need not be a multiple of the I/O block size. Since the data
retrieval is constrained by the I/O block size of the system, the needed data is rounded up to the
next block. This is termed Block-constrained CTL data layout or BCTL in this paper. In BCTL,
data distribution is harder since the amount of data stored on each disk depends on the data rate in
that round and hence varies from disk to disk. Different data layouts are considered in this paper
to show that the proposed mechanisms can function well in either data layout.
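To illustrate, a sketch of which disks a round's retrieval touches under round-robin CDL striping (our own illustration; BCTL would instead place each round's data whole on the next disk in turn):

    def cdl_disks_per_round(blocks_per_round, num_disks):
        """Map each round to the disks holding its blocks under CDL striping."""
        placement, disk = [], 0
        for blocks in blocks_per_round:
            disks = [(disk + b) % num_disks for b in range(blocks)]
            placement.append(disks)
            disk = (disk + blocks) % num_disks
        return placement

    # The example from the text: 2 then 3 blocks on 4 disks (A=0, B=1, C=2, D=3).
    print(cdl_disks_per_round([2, 3], 4))   # -> [[0, 1], [2, 3, 0]]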
4 Performance Evaluation
4.1 Simulations
We evaluated a number of the above issues through trace-driven simulations. A system with 8
disks is simulated. Each disk is assumed to have the characteristics of a Seagate Barracuda
drive [19]. The disk drive characteristics are shown in Table 1.

Table 1. Disk characteristics.
Parameter            Value
Zero seek            ms
Avg. seek            ms
Max. seek            ms
Min. transfer rate   11.5 MB/s
Max. transfer rate   17.5 MB/s
Avg. latency         4.17 ms
Spindle speed        7200 RPM
Num. cylinders       3711

Each disk in the system maintains a load
table that depicts the load of that disk into the future. Data block size on the disk is varied from
32KB to 256KB. Data is striped across the eight disks in a round-robin order based on either CDL
or BCTL data layout policy. In simulations, it is assumed that the first block of each movie stream
is stored on a random disk.
Interactive requests and aperiodic requests are modeled by Poisson arrivals. Periodic requests are
based on real traces of VBR movies. Periodic request load is varied by requesting more streams to
be scheduled. Interactive requests always ask for 64KB of data and aperiodic request sizes are uniformly
distributed over (4KB, 128KB). The burstiness of interactive requests is controlled at each disk by
a leaky bucket controller that allowed a maximum of 12 interactive requests per second.
The admission controller for periodic streams employed the strategy discussed in section 2.2.
CDL and BCTL data layout strategies are considered. In BCTL, it is assumed that each stream
pays a latency penalty at one disk per round. In CDL, a stream pays at most one latency penalty at each disk
per round. For example, if the stream requires 1 block each from disks 1, 2 and 3, then that stream
pays a latency penalty at disks 1, 2 and 3 in that round. If the stream requires 10 blocks in a
round from the 8 disks in the system, it pays a latency penalty at each disk. This is based on the
assumption that the blocks retrieved in a round for a stream are stored contiguously on the disk.
If the stream requests are assumed to arrive randomly over time, more streams can be admitted.
However, to study the worst-case scenario, we assumed that all the requests arrive at once. The
simulator tries to schedule as many streams as possible until a stream cannot be scheduled. The
number of streams scheduled is the stream throughput. Four different video streams are considered
in our study as explained below.
Table 2. Characteristics of traces.
Name      Mean (KB/sec)   Std. Dev. (KB/sec)
Lambs     171.32          58.33
News      484.31          108.86
Asterix   523.79          124.50
The simulations are carried out in two different phases. In the first phase, we only considered
the VBR streams to study the effectiveness of the proposed scheduler for VBR streams. In the
second phase, we considered integrated service of three different types of requests.
4.1.1 Traces
MPEG traces from University of Wuerzburg [20] were used in this study. From the frame by frame
trace of the movie, we constructed several versions of the demand trace for each movie. For this
study, we used four separate MPEG traces. These traces are named Lambs (for a segment of the
movie "Silence of the lambs"), Term (for a movie segment of the movie "Terminator"), News (for
a news segment trace), Asterix (for a segment of Asterix cartoon). Each trace contained 40,000
samples at a frame rate of 24 frames per second (about 27 minutes in duration). A mixed workload
based on these traces is also constructed. During simulations, with equal probability, one out of
the four traces is selected for scheduling i.e., the workload consisted of a random mix of these four
traces, with each trace being selected with equal probability. Each trace has a different bit
rate and different mean and variance characteristics and these are shown in Table 2.
A block size of 32 KB, 64KB, 128KB or 256 KB and 0.5 seconds of round time are used to
convert the frame trace into a compact demand trace for each movie segment. With the choice of
four block sizes, we get four different compact demand traces for each stream. These four different
traces are used to study the impact of the block size on the results.
4.2 Results
First, we will show the results of serving VBR streams alone in the system. Then, we will present
results of the integrated service.
4.2.1 VBR streams
VBR admission control policy: Fig. 5 shows the impact of block size and data layout strategy
on the stream throughput of the four different data streams and the mixed workload. Similar
performance trends were observed across individual streams and mixed workloads. It is observed
that peak rate based allocation leads to significantly less throughput than the proposed approach.
The peak-rate based scheme utilizes peak demand of the stream over a round for determining
admissibility of that stream. The proposed approach achieves 130%-195% more stream throughput
than the peak rate allocation. This improvement is primarily achieved by exploiting the statistical
multiplexing of different streams. When requests arrive at the same time, flexible starting times
(through latency targets) allow the peaks in demand to be spread over time to improve the
throughput. We consider the mixed workload for further experiments.
As the block size is increased, a stream fetches fewer blocks in a round and hence CDL
tends to be more efficient at larger block sizes (due to smaller seek and rotational latency costs).
The stream throughput for CDL improves significantly for all the data streams as the block size is
increased from 32 KB to 256 KB. The stream throughput drops slowly for BCTL as the block size
is increased. This is due to effects of larger quantization of service allocation for a request. The
proposed approach improves the stream throughput compared to a peak-rate based scheme in both
the data layouts.
Fig. 6 shows the disk utilization by the video streams as a function of time. The figure shows
the load at one of the eight disks in the system with a mixed workload. Even though the average
utilization is 66%, the disk is nearly 100% busy for several hundred seconds between rounds 1800 and 2600.
If we allowed the video streams to occasionally utilize the full 100% I/O bandwidth of the system
while maintaining the average utilization below say 65%, the other requests could get starved for
service for long periods of time (in this case for 400 seconds). Hence, this is unacceptable in a
system that has to support multiple types of requests. This result shows the need for bandwidth
allocation among different classes of requests.
Figure 5: Impact of data layout and block size on VBR streams. (Number of streams vs. block size in KB for the Lambs, Term, News, and Asterix traces and the mixed workload, comparing peak-rate allocation with the proposed approach under BCTL and CDL.)
Figure 6: Disk utilization by VBR streams. (Utilization by video streams vs. round number, at one of the eight disks.)
Fig. 7 shows the impact of stream scheduling policies on the stream
throughput at various block sizes. The results in Fig. 7 are for a mixed workload and a latency
target of 300 rounds. Greedy policy achieves the least stream throughput in both data layout
schemes. It is observed that the minimal load policy achieves high stream throughput consistently
in BCTL data layout. However, minimal load policy doesn't perform as well with CDL data
layout. Prime hopping, fixed distance and random start achieve nearly the same stream throughput.
Minimal load policy achieves on an average 15% better stream throughput than these three policies
with BCTL layout. It is observed that stream scheduling policy has a significant impact on
performance. For example, minimal load policy improves performance by about 80% compared
to greedy policy at a block size of 32KB in BCTL data layout. This shows the importance of
studying the stream scheduling policies.
Fig. 8 shows the startup latencies achieved by different stream scheduling policies. Greedy
policy, by its nature, achieves the smallest startup latency. However, as observed earlier, it also
results in lower stream throughput. The policies based on randomness (random start, prime hopping
and fixed distance) achieve average startup latencies close to 150 rounds, which is half of the latency target
of 300 rounds considered for these results. Minimal load and Last scheduled achieve better latencies
than these policies for both the data layouts. More extensive results on stream scheduling can be
found in [21].
Figure 7: Impact of stream scheduling. (Stream throughput vs. block size in KB for the greedy, random start, last scheduled, fixed distance, prime hopping, and minimal load policies under BCTL and CDL.)
Statistical guarantees: Fig. 9 shows the impact of statistical guarantees on stream through-
put. Instead of requiring that every block of data be retrieved in time, we allowed a fraction of the
blocks for each stream to miss deadlines or not be provided service. This fraction is varied,
up to 5.0%, at various latency targets. In our scheme, the admission
controller decides which blocks of a stream get dropped i.e., the blocks to be dropped are determined
at the time of admission. Otherwise, it would be difficult to provide stream isolation i.e., a stream
requesting statistical guarantees can force another stream requesting deterministic service to drop
blocks at the time of retrieval. It is observed that as more blocks are allowed to be dropped, it
is possible to achieve more throughput compared to deterministic guarantees. Stream throughput
can be improved by up to 20% by allowing 5% of the blocks to be dropped. Dropping blocks is
more effective at lower latency targets than at higher latency targets. For example, dropping up
to 2% of the blocks improves the stream throughput by 14.5% at a latency target of 100 rounds
compared to an improvement of 6% at a latency target of 1000 rounds. Stream throughput can
also be improved by relaxing the latency targets.
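One way to realize the drop-at-admission rule described above is to give the deterministic check of Section 2.2 a per-stream drop budget (a hedged sketch; the paper does not specify how its controller picks which blocks to drop, and fits() abstracts the per-round load test):

    def admit_with_allowance(fits, demand, allowance):
        """Admission with a statistical allowance: up to a fraction `allowance`
        of the stream's blocks may be marked as dropped at admission time.
        fits(j) -> True if round j's blocks fit on their target disk."""
        budget = int(sum(demand) * allowance)   # e.g. allowance = 0.02 for 2%
        dropped_blocks, drops = 0, []
        for j, blocks in enumerate(demand):
            if not fits(j):
                dropped_blocks += blocks
                drops.append(j)                 # this round's blocks are dropped
                if dropped_blocks > budget:
                    return None                 # budget exceeded: try another start round
        return drops

A caller would run this for each candidate start round k, exactly as in the deterministic test.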
Fig. 9 also shows the tradeoffs possible between latency targets and the number of blocks allowed
to be dropped. At a latency target of 100 rounds, 152 streams can be provided deterministic service.
To achieve higher stream throughput, we can either increase the latency target or allow blocks to
be dropped.

Figure 8: Average startup latency. (Startup latency vs. block size in KB for the six stream scheduling policies under BCTL and CDL.)

Figure 9: Effect of statistical allowances. (Stream throughput and average disk utilization vs. statistical allowance in % at various latency targets.)

For example, when we increase the latency target to 1000 rounds, 179 streams could
be scheduled without dropping any blocks. However, to achieve the same throughput at a latency
target of 100 rounds, more than 2% of the blocks have to be dropped. Hence, desired throughput
can be achieved either by allowing larger latency targets or by allowing a fraction of the blocks to
be denied service.
Fig. 9 also shows the impact on the average disk utilization as a function of the statistical
allowances and latency targets. As higher statistical allowances are made, the disk utilizations are
improved as more streams are supported. As latency targets are increased from 100 rounds, average
disk utilizations first increase and then decrease to lower levels. As latency targets are increased, an
arriving stream finds more choices to find a suitable starting spot to utilize the available bandwidth.
However, as the latency targets are increased further, the streams are scheduled farther and farther
into the future and hence result in decreasing disk utilizations. Since we are considering admission of
requests only at time 0, the larger latency targets increase the time window over which utilizations
are being computed and as a result the average disk utilizations decrease. If we continue admitting
new requests (at times other than 0) as earlier requests leave the system, the disk utilizations will
continue improving with increased latency targets.
The usually considered statistical guarantee of 99% (i.e., dropping 1% of blocks) did not provide
significant improvements in stream throughput compared to deterministic guarantees. In our
measurements, the improvements were less than 6% for all the latency targets except 100 which
achieved an improvement of 11%. The primary reason for this is that the proposed technique
achieved significant statistical multiplexing of streams while providing deterministic guarantees.
4.2.2 Integrated service
Fig. 10 shows the average disk utilization when periodic requests are allocated 50% bandwidth,
aperiodic and interactive requests are each allocated 25% bandwidth. The number of periodic
requests streams was maintained at the maximum that the system can support. Aperiodic request
rate is varied while maintaining the interactive request rate at 50 requests/sec (request rates are
measured over the entire system of 8 disks). It is observed that the average utilization of the
periodic streams stays below 50%. Because of variations in demand over time, more periodic
streams could not be admitted. The utilizations of periodic and interactive requests are unaffected
by the aperiodic request rate. It is also observed that as we increase the aperiodic request rate,
aperiodic requests take up more and more bandwidth and eventually utilize more than the allocated
25% bandwidth.

Figure 10: Average disk utilizations across request categories. (Utilization of interactive, periodic, and total vs. aperiodic arrival rate per second.)

When periodic and interactive requests don't make use of the allocated bandwidth,
aperiodic requests make use of any available bandwidth (25% is the minimum available) and hence
can achieve more than the allocated 25% utilization of the disk. This shows that disks are not left
idle when aperiodic requests are waiting to be served.
Fig. 11 shows the average and maximum response times of aperiodic and interactive requests as
a function of the aperiodic request rate. The number of streams is kept at the maximum allowed by
the system and the interactive arrival rate is kept at 50 requests/sec. Interactive response times are
not considerably affected by the aperiodic arrival rate and the maximum interactive response time
stays relatively independent of the aperiodic arrival rate. It is also observed that the interactive
requests achieve considerably better response times than aperiodic requests (260 ms maximum
interactive response time compared to 1600 ms for aperiodic requests both at 50 reqs/sec). Both
average and maximum response times are better for interactive requests than for aperiodic requests
even at lower aperiodic arrival rates. We observed that the maximum interactive response times
are only dependent on the burstiness of arrival of interactive requests and the bandwidth allocated
to them. No periodic requests missed deadlines as the aperiodic request rate was varied.
Figure 11: Impact of aperiodic arrival rate on response times. (Average aperiodic, average interactive, and maximum interactive response times in ms vs. aperiodic arrival rate per second.)

Fig. 12 shows the response times of aperiodic requests and interactive requests (both at 25
requests/sec) as the number of requested streams in the system is varied from 5 to 100. With the
considered allocation of bandwidths, the system could support a maximum of 33 streams. Hence,
even when more streams are requested, the system admits only 33 streams. This shows
that the periodic request rate is contained to allow aperiodic requests and interactive requests
to achieve their performance goals. We observe that the maximum response times of interactive
requests are not considerably impacted by the number of requested streams in the system.
Fig. 13 shows a comparison of the proposed scheduling algorithm and a variant of SCAN
scheduling algorithm used in most of the current disks [22]. The figure shows the average and
maximum response times of interactive requests as the aperiodic request rate is varied. The
proposed method achieves better average and maximum response times compared to SCAN. As
the aperiodic arrival rate is increased, both the maximum and average response times of interactive
requests get impacted with SCAN scheduling policy. The proposed method isolates the request
categories and maintains the performance of the interactive requests independent of the aperiodic
arrival rate.
Figure 12: Impact of requested stream rate on response times. (Average aperiodic, average interactive, and maximum interactive response times in ms vs. requested stream rate.)
Figure 13: Comparison with SCAN. (Average and maximum interactive response times in ms vs. aperiodic arrival rate per second for SCAN and the proposed method.)
5 Conclusions and Future work
In this paper, we addressed the problem of providing different performance guarantees in a disk
system. The proposed approach uses admission controllers and an appropriate scheduler to achieve
the desired performance goals. We showed that through proper bandwidth allocation and scheduling,
it is possible to design a system such that one type of request does not impact the performance
of another type of request. We proposed a scheduling policy that allows seek optimization while
achieving the performance goals.
We also proposed a method for providing deterministic guarantees for VBR streams that
exploited statistical multiplexing of different streams. We showed that the proposed approach
provides 130%-195% more throughput than peak-rate allocation. We also evaluated the impact of
data layout on the performance. We showed that startup latency is an effective tradeoff parameter
for improving stream throughput. For the workloads considered, statistical allowances on top of
the proposed approach did not provide significant improvement in stream throughput.
In the work presented here, we used static bandwidth allocations to achieve performance goals.
We are currently investigating issues in dynamic allocation and adaptive performance guarantees.
We are also studying ways of describing an application load on the disks more concisely than a load
trace.
6 Acknowledgements
Reviewers' comments have greatly contributed to the improved presentation of the paper.
--R
I/O issues in a multimedia system.
Grouped sweeping scheduling for DASD-based multimedia storage management
Principles of delay-sensitive multimedia storage and retrieval
The Fellini multimedia storage server.
Guaranteeing timing constraints for disk accesses in RT-Mach
Implementation and evaluation of a multimedia file system.
Designing and implementing high-performance media-on-demand servers
A disk scheduling framework for next generation operating systems.
Enhancements to 4.4 BSD UNIX for efficient networked multimedia in project MARS.
Supporting stored video: reducing rate variability and end-to-end resource requirements through optimal smoothing
Computers and intractability: A guide to the theory of NP-completeness
A case for redundant arrays of inexpensive disks (RAID).
Staggered striping in multimedia information systems.
Optimizing the placement of multimedia objects on disk arrays.
Scalable video data placement on parallel disk arrays.
Seagate Corp.
MPEG trace data sets.
Scheduling algorithms for modern disk drives.
--TR
A case for redundant arrays of inexpensive disks (RAID)
Principles of delay-sensitive multimedia data storage retrieval
I/O issues in a multimedia system
Scheduling algorithms for modern disk drives
Staggered striping in multimedia information systems
Supporting stored video
Cello
Computers and Intractability
Designing and Implementing High-Performance Media-on-Demand Servers
Real-time filesystems. Guaranteeing timing constraints for disk accesses in RT-Mach
Implementation and Evaluation of a Multimedia File System
Enhancements to 4.4 BSD UNIX for Efficient Networked Multimedia in Project MARS
Optimizing the Placement of Multimedia Objects on Disk Array
--CTR
Youjip Won , Y. S. Ryu, Handling sporadic tasks in multimedia file system, Proceedings of the eighth ACM international conference on Multimedia, p.462-464, October 2000, Marina del Rey, California, United States
Ketil Lund , Vera Goebel, Adaptive disk scheduling in a multimedia DBMS, Proceedings of the eleventh ACM international conference on Multimedia, November 02-08, 2003, Berkeley, CA, USA
Javier Fernndez , Jesus Carretero , Felix Garcia , Jose M. Perez , A. Calderon, Enhancing Multimedia Caching Algorithm Performance Through New Interval Definition Strategies, Proceedings of the 36th annual symposium on Simulation, p.175, March 30-April 02,
R. Seelam , Patricia J. Teller, Virtual I/O scheduler: a scheduler of schedulers for performance virtualization, Proceedings of the 3rd international conference on Virtual execution environments, June 13-15, 2007, San Diego, California, USA
A. L. Narasimha Reddy, System support for providing integrated services from networked multimedia storage servers, Proceedings of the ninth ACM international conference on Multimedia, September 30-October 05, 2001, Ottawa, Canada
A. L. N. Reddy , Jim Wyllie , K. B. R. Wijayaratne, Disk scheduling in a multimedia I/O system, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), v.1 n.1, p.37-59, February 2005 | seek optimization;disk scheduling;multiple QoS goals;VBR streams |
329351 | Automatic text segmentation and text recognition for video indexing. | Efficient indexing and retrieval of digital video is an important function of video databases. One powerful index for retrieval is the text appearing in them. It enables content-based browsing. We present our new methods for automatic segmentation of text in digital videos. The algorithms we propose make use of typical characteristics of text in videos in order to enable and enhance segmentation performance. The unique features of our approach are the tracking of characters and words over their complete duration of occurrence in a video and the integration of the multiple bitmaps of a character over time into a single bitmap. The output of the text segmentation step is then directly passed to a standard OCR software package in order to translate the segmented text into ASCII. Also, a straightforward indexing and retrieval scheme is introduced. It is used in the experiments to demonstrate that the proposed text segmentation algorithms together with existing text recognition algorithms are suitable for indexing and retrieval of relevant video sequences in and from a video database. Our experimental results are very encouraging and suggest that these algorithms can be used in video retrieval applications as well as to recognize higher level semantics in videos. | INTRODUCTION
There is no doubt that video is an increasingly important
modern information medium. Unlocking its complete
potential and usefulness requires efficient content-based
indexing and access. One powerful high-level index for
retrieval is the text contained in videos. This index can be
built by detecting, extracting and recognizing such text. It
enables the user to submit sophisticated queries such as a
listing of all movies featuring John Wayne or produced by
Steven Spielberg. Or it can be used to jump to news stories
about a specific topic, since captions in newscasts often provide
a condensation of the underlying news story. For exam-
ple, one can search for the term "Financial News" to get the
financial news of the day. The index can also be used to
record the broadcast time and date of commercials, helping
the agents checking to see that a client's commercial has
been broadcasted at the arranged time on the arranged television
channel. Many other useful high-level applications
are imaginable once text can be recognized automatically
and reliably in digital video.
In this paper we present our methods for automatic text segmentation
in digital videos. The output is directly passed to
a standard OCR software package in order to translate the
segmented text into ASCII. We also demonstrate that these
two processing steps enable semantic indexing and retrieval.
To ensure better segmentation performance, our algorithms
analyze typical characteristics of text in video. Interframe
dependencies of text occurrences promise further refinement.
Text features are presented in Section 2, followed by a
description of our segmentation algorithms in Section 3
which are based on the features stated in Section 2. Information
about the text recognition step is given in Section 4.
Then, in Section 5 we introduce a straightforward indexing
and retrieval scheme, which is used in our experiments to
demonstrate the suitability of our algorithms for indexing
and retrieval of video sequences. The experimental results
of each step - segmentation, recognition and retrieval - are
discussed in Section 6. They are investigated independently
for three different film genres: feature films, commercials
and newscasts. Section 7 reviews related work, and Section
8 concludes the paper.
2 TEXT FEATURES
Text may appear anywhere in the video and in different con-
texts. It is sometimes a carrier of important information, at
other times its content is of minor importance and its
appearance is only accidental. Its significance is related to
the nature of its appearance. We discriminate between two
kinds of text: scene text and artificial text. Scene text
appears as a part of and was recorded with the scene,
whereas artificial text was produced separately from the
video shooting and is laid over the scene in a post-processing
stage, e.g. by video title machines.
Scene text (e.g. street names or shop names in the scene)
mostly appears accidentally and is seldom intended (An
exception, for instance, are the commercials in the new
James Bond movie). However, when it appears unplanned,
it is often of minor importance and generally not suitable for
indexing and retrieval. Moreover, due to its incidental
appearance and the resulting unlimited variety of its looks, it is
hard to detect, extract and recognize. It seems impossible
to identify common features, since the characters can
appear under any slant, tilt, in any lighting and upon straight
or wavy surfaces (e.g. on a T-shirt). Scene text may also be
partially occluded.
In contrast, the appearance of artificial text is carefully
directed. It is often an important carrier of information and
herewith suitable for indexing and retrieval. For instance,
embedded captions in programs represent a highly condensed
form of key information on the content of the video
[23]. There, as in commercials, the product and company
name are often part of the text shown. Here, the product
name is often scene text but used like artificial text. There-
fore, in this paper we concentrate on extraction of artificial
text. Fortunately, its appearance is subjected to many more
constraints than that of scene text since it is made to be read
easily by viewers.
The mainstream of artificial text appearances is characterized
by the following features:
. Characters are in the foreground. They are never partially
occluded.
. Characters are monochrome.
. Characters are rigid. They do not change their shape,
size or orientation from frame to frame.
. Characters have size restrictions. A letter is not as large
as the whole screen, nor are letters smaller than a certain
number of pixels as they would otherwise be illegible
to viewers.
. Characters are mostly upright.
. Characters are either stationary or linearly moving.
Moving characters also have a dominant translation
direction: horizontally from right to left or vertically
from bottom to top.
. Characters contrast with their background since artificial
text is designed to be read easily.
. The same characters appear in multiple consecutive
frames.
. Characters appear in clusters at a limited distance
aligned to a horizontal line, since that is the natural
method of writing down words and word groups.
Our text segmentation algorithms are based on these fea-
tures. However, they also take into account that some of
these features are relaxed in practice due to artifacts caused
by the narrow bandwidth of the TV signal or other technical
imperfections.
3 TEXT SEGMENTATION
Prior to presenting our feature-based text segmentation
approach we want to outline clearly what the text segmentation
step should do. This not only affects the algorithms
employed during the text segmentation but also those which
can be used during the text recognition step. Unlike in our
previous work [10][11], where individual characters still
may consist of several regions of different colors after the
text segmentation step, and most related work, the objective
of the text segmentation step here is to produce a binary
image that depicts the text appearing in the video (see Figure
9). Hence, standard OCR software packages can be used
to recognize the segmented text.
Note that all processing steps are performed on color images
in the RGB color space, not on grayscale images.
3.1 Color Segmentation
Most of the character features listed in Section 2 cannot be
applied to raw images; rather, objects must be already avail-
able. In addition, some of the features require that actual
characters be described by exactly one object in order to discriminate
between character and non-character objects.
Therefore, in an initial step, each frame is to be segmented
into suitable objects. The character features monochromatic-
ity and contrast with the local environment qualify as a
grouping and separation criterion for pixels, respectively.
Together with a segmentation procedure which is capable of
extracting monochrome regions that contrast highly to their
environment under significant noise, suitable objects can be
constructed. Such a segmentation procedure preserves the
characters of artificial text occurrences. Its effect on multi-colored
objects and/or objects lacking contrast to their local
environment is insignificant here. Subsequent segmentation
steps are likely to identify the regions of such objects as
non-character regions and thus eliminate them.
As a starting point we over-segment each frame by a simple
yet fast region-growing algorithm [27]. The threshold value
for the color distance is selected by the criterion to preclude
that occurring characters merge with their surroundings.
Hence, the objective of the region-growing is to strictly
avoid any under-segmentation of characters (under normal
conditions).
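To make the grouping criterion concrete, the following is a minimal region-growing sketch; the 4-connectivity, the City-block color metric and the threshold value COLOR_DIST_MAX are our assumptions, not the authors' exact choices.

```python
# Minimal region-growing sketch: group 4-connected pixels whose
# City-block color distance to the region's seed color stays below
# a threshold chosen to avoid under-segmentation of characters.
from collections import deque

import numpy as np

COLOR_DIST_MAX = 30  # assumed threshold, not the authors' value

def color_dist(a, b):
    # City-block (L1) distance in RGB space
    return sum(abs(int(x) - int(y)) for x, y in zip(a, b))

def region_growing(img):
    """img: H x W x 3 uint8 array; returns an H x W label map."""
    h, w, _ = img.shape
    labels = np.zeros((h, w), dtype=np.int32)  # 0 = unlabeled
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx]:
                continue
            next_label += 1
            seed = img[sy, sx]
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                            and color_dist(img[ny, nx], seed) < COLOR_DIST_MAX):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
    return labels
```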
Then, regions are successively merged to remove the over-segmentation
of characters while at the same time avoiding
their under-segmentation. The merger process is based on
the idea that the use of standard color segmentation algorithms
such as region-growing [27] or split-and-merge [6] is
improper in highly noisy images such as video frames, since
these algorithms are unable to distinguish isotropic image
structures from image structures with local orientation.
Given a monochrome object in the frame under high additive
noise, these segmentation algorithms would always split
up the object randomly into different regions. It is the objective
of the merger process to detect and merge such random
split-ups of objects.
We identify random split-ups via a frame's edge and orientation
map. If the border between two regions does not coincide
with a roughly perpendicular edge or local orientation
in the close neighborhood, the separation of the regions is
regarded as incidentally due to noise, and they are merged.
The advantage of edges over orientation is that they show
good localization but may not be closed and/or may be too short to
give a reliable estimate of the angle of the edge. In contrast,
local orientation shows poor localization but determines
precisely the angle of contrast change. Together they allow
to detect most random split-ups of objects.
Edges are localized by means of the Canny edge detector
extended to color images, i.e. the standard Canny edge
detector is applied to each image band. Then, the results are
integrated by vector addition. Edge detection is completed
by non-maximum suppression and contrast enhancement.
Dominant local orientation is determined by the inertia tensor
method as presented in [7].
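The per-band edge computation and vector addition can be sketched as follows; Sobel derivatives stand in for the full Canny pipeline, and non-maximum suppression and contrast enhancement are omitted for brevity.

```python
# Sketch of an edge map for color images: per-band gradients combined
# by vector addition, as described above. Sobel derivatives stand in
# for the full Canny machinery.
import numpy as np

def sobel(channel):
    c = channel.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    pad = np.pad(c, 1, mode="edge")
    gx = np.zeros_like(c)
    gy = np.zeros_like(c)
    for y in range(c.shape[0]):
        for x in range(c.shape[1]):
            win = pad[y:y + 3, x:x + 3]
            gx[y, x] = (win * kx).sum()
            gy[y, x] = (win * ky).sum()
    return gx, gy

def color_edge_map(img):
    """img: H x W x 3. Returns magnitude and angle of the summed gradient."""
    gx_sum = np.zeros(img.shape[:2])
    gy_sum = np.zeros(img.shape[:2])
    for band in range(3):
        gx, gy = sobel(img[:, :, band])
        gx_sum += gx  # vector addition of the per-band gradient vectors
        gy_sum += gy
    magnitude = np.hypot(gx_sum, gy_sum)
    angle = np.arctan2(gy_sum, gx_sum)
    return magnitude, angle
```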
The color segmentation is completed by merging regions of
similar colors. This segmentation algorithm yields an
excellent segmentation of a video with respect to the artificial
characters. Usually most of them will now consist of
one region.
Note that the color segmentation employed can be classified
as an inhomogeneous and anisotropic segmentation
algorithm which preserves fine structures such as characters.
More details about this segmentation step can be found in
[9].
3.2 Contrast Segmentation
For our task a video frame can also be segmented properly
by means of the high contrast of the character contours to
their surroundings and by the fact that the strength of the
stroke of a character is considerably less than the maximum
character size.
For each video frame a binary contrast image is derived in
which set pixels mark locations of sufficiently high absolute
local contrast. The absolute local color contrast at position
(x,y) is measured by

$\mathrm{Contrast}_{\mathrm{abs}}(x,y) = \sum_{k=-r}^{r} \sum_{l=-r}^{r} G_{k,l}\, \lVert I(x,y) - I(x+k,\, y+l) \rVert,$

where $\lVert \cdot \rVert$ denotes the color metric employed (here City-block
distance), G k,l a Gaussian filter mask, and r the size of the
local neighborhood.
Next, each set pixel is dilated by half the maximum
expected strength of the stroke of a character. As a result, all
character pixels as well as some non-character pixels which
also show high local color contrast are registered in the
binary contrast image (see Figure 2). Likewise for color seg-
mentation, the contrast threshold is selected in such a way
that, under normal conditions, all character-pixels are captured
by the binary contrast image.
Finally, all regions which overlap by less than 80% with the
set pixels in the binary contrast image are discarded.
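A direct sketch of this contrast measure is given below; the neighborhood size r, the Gaussian sigma and the binarization threshold are illustrative assumptions.

```python
# Sketch of the binary contrast image: Gaussian-weighted City-block
# color differences within an r-neighborhood, then thresholded.
import numpy as np

def gaussian_mask(r, sigma):
    ax = np.arange(-r, r + 1)
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def local_color_contrast(img, r=2, sigma=1.0):
    """img: H x W x 3 uint8. Returns an H x W contrast map."""
    h, w, _ = img.shape
    g = gaussian_mask(r, sigma)
    f = img.astype(np.float64)
    pad = np.pad(f, ((r, r), (r, r), (0, 0)), mode="edge")
    contrast = np.zeros((h, w))
    for k in range(-r, r + 1):
        for l in range(-r, r + 1):
            shifted = pad[r + k : r + k + h, r + l : r + l + w, :]
            dist = np.abs(f - shifted).sum(axis=2)  # City-block distance
            contrast += g[k + r, l + r] * dist      # Gaussian weighting
    return contrast

def binary_contrast_image(img, threshold=40.0):
    return local_color_contrast(img) >= threshold   # assumed threshold
```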
3.3 Geometry Analysis
Characters are subjected to certain geometric restrictions.
Their height, width, width-to-height ratio and compactness
do not take on any value, but usually fall into specific ranges
of values. If a region's geometric features do not fall into
these ranges of values the region does not meet the requirements
of a character region and is thus discarded.
The precise values of these restrictions depend on the range
of the character sizes selected for segmentation. In our
work, the geometric restrictions have been determined
empirically based on the bold and bold italic versions of the
four TrueType fonts Arial, Courier, Courier New and Times
New Roman at the sizes of 12pt, 24pt, and 36pt (24
fonts in total). The measured ranges of width, height, width-
to-height ratio and compactness are listed in Table 1.
Figure 1: Result of the color segmentation
Figure 2: Contrast Segmentation
Since we have assumed that each character consists of
exactly one region after the color segmentation step, the
empirical values can be used directly to rule out non-character
regions. All regions which do not comply with the measured
geometric restrictions are discarded. Since the 24
fonts analyzed are only a small sample of all possible fonts,
the measured ranges were extended slightly. The following
ranges have been used to describe potential character
regions:
The segmentation result after applying the geometric restrictions
to the sample video is shown in Figure 3.
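The filter itself reduces to a few range checks per region; in the sketch below only the height range 4 to 90 is taken from Table 1, while the remaining bounds and the compactness definition (perimeter squared over 4*pi*area) are illustrative placeholders.

```python
# Sketch of the geometry filter: discard regions whose bounding-box
# height, width, width-to-height ratio or compactness fall outside
# assumed ranges.
import math

def compactness(area, perimeter):
    # a common definition, assumed here: perimeter^2 / (4 * pi * area)
    return perimeter ** 2 / (4 * math.pi * area)

def is_potential_character(region, limits=None):
    """region: dict with keys width, height, area, perimeter."""
    limits = limits or {
        "height": (4, 90),           # from Table 1
        "width": (1, 200),           # placeholder
        "ratio": (0.05, 5.0),        # placeholder
        "compactness": (1.0, 50.0),  # placeholder
    }
    ratio = region["width"] / region["height"]
    checks = [
        limits["height"][0] <= region["height"] <= limits["height"][1],
        limits["width"][0] <= region["width"] <= limits["width"][1],
        limits["ratio"][0] <= ratio <= limits["ratio"][1],
        limits["compactness"][0]
            <= compactness(region["area"], region["perimeter"])
            <= limits["compactness"][1],
    ]
    return all(checks)
```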
3.4 Texture Analysis
Text appearances are distinguished by a characteristic spatial
pattern on the word and text line level: In at least one
direction, the direction of writing, periodically strong contrast
fluctuations can be notified. This periodicity is peculiar
to words and text lines.
This particular texture pattern of text was already used in [8]
to separate text columns from images, graphics and other
non-text parts of a document. In this work, however, it is
used to separate text within images. Also, and unlike in [22],
it is not used as the first feature in the text segmentation pro-
cess. This has several advantages. The color segmentation
identifies many large color regions touching the characters.
Thus, by simply eliminating those regions which do not
comply with the geometric features of characters, parts of
the characters' outlines can be cut out precisely. If, however,
text were segmented first by means of its characteristic tex-
ture, we would lose this advantageous feature of the color
segmentation. Especially the large non-character regions
would be reduced to small left-overs which often could no
longer be ruled out by their geometric features.
Against a (more or less) uniform background one can
observe that the direction of the contrast fluctuations
changes rhythmically. In most cases, this regular alternation
of the contrast direction can also be observed in text super-imposed
on an inhomogeneous background since it is then
often surrounded by a type of aura. This aura is usually
added during video production to improve the legibility (see
"Buttern" in Figure 4(a)). Exploiting this feature in a suitable
manner enables edges between non-character regions to
be distinguished from edges between a character and a non-character
region.
In detail, the texture analysis consists of two steps:
1. The extraction of potential words or lines of text and the
estimate of their writing direction.
2. The test whether the potential words and lines of text
exhibit frequent contrast alternations in the estimated
writing direction.
Potential words and lines of text can be determined by
enlargement of the potential character regions of a suitable
size. Due to the great proximity of the characters of a word
and sometimes also between the words in a text line, their
regions are merged into a so-called potential word cluster.
Note that often a small number of non-character regions are
merged into potential word clusters, and sometimes non-character
regions merge into a cluster on their own. The
amount of enlargement necessary depends on the average
running width (i.e. the size of the character spaces), which in
turn is influenced by the character size. In the experiments,
the necessary expansion was determined experimentally to
be 2 pixels.
Next, the writing direction within a potential word cluster is
to be estimated. Unlike existing approaches, we do not
assume that text is aligned horizontally [19][22]. Although
this assumption is quite reasonable in many cases, it restricts
the application domain of the text segmentation algorithm
unnecessarily.
The previous segmentation steps having already separated
the words from large non-character regions, the writing
direction of a word cluster can then be estimated via the
direction of its main axis. This is defined as the angle
Table 1: Empirically measured ranges of values (min/max of width, height, width-to-height ratio and compactness) using 24 bold TrueType fonts; e.g. height ranges from 4 to 90 pixels.
Figure 3: Result after the analysis of the regions' geometric features (246 regions left)
Figure 4: Example of text surrounded by a blue aura to improve readability (a) and its writing direction automatically determined (b)
between the x axis and the axis, around which the word
cluster can be rotated with minimum inertia (see Figure
4(b)). In accordance with [7], this direction is determined by

$\varphi = \frac{1}{2} \arctan\!\left( \frac{2\mu_{11}}{\mu_{20} - \mu_{02}} \right)$

with the second-order central moments $\mu_{pq} = \sum_{(x,y)} (x - \bar{x})^p\, (y - \bar{y})^q$ taken over the pixels of the word cluster.
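A sketch of this estimate from the second-order central moments follows; the eigenvalue computation for the moments of inertia of the two axes follows the standard inertia-tensor formulas and is an assumption insofar as [7] is not reproduced here.

```python
# Sketch of the main-axis (writing direction) estimate from the
# second-order central moments of a word cluster.
import math

def writing_direction(pixels):
    """pixels: list of (x, y) coordinates of the word cluster.
    Returns the axis angle phi and the two moments of inertia
    (j_max, j_min); the text's J1/J2 ratio corresponds to
    j_max/j_min here (assumed designation)."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels)
    mu02 = sum((y - cy) ** 2 for _, y in pixels)
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels)
    phi = 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
    # eigenvalues of the inertia tensor: elongated clusters have
    # j_max / j_min well above 1, short round words close to 1
    common = 0.5 * (mu20 + mu02)
    offset = math.hypot(0.5 * (mu20 - mu02), mu11)
    return phi, common + offset, common - offset
```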
Two cases are to be considered more precisely:
1. The moments of inertia of the main and secondary axis
(designated J 1 and J 2 ) differ only insignificantly (J 1 / J 2
<1.5). This happens only in the case of very short words
such as the word "IN". Since in this case the estimated
direction is likely to deviate greatly from the actual
writing direction (e.g. diagonal for "IN"), clusters with
J 1 / J 2 < 1.5 are never rejected.
2. Errors as to the estimated writing direction may also
appear for short words and J 1 / J 2 >= 1.5 if the word
cluster takes on a crooked form due to an unfavorable
character sequence such as "You". However, the magnitude
of the inaccuracy of the estimated writing direction
keeps within a scope of degree. Thus, it can be
expected that most characters in a word cluster are still
cut by the main axis. Together with the necessary tolerance
in the selection of edges in writing direction in the
subsequent texture analysis step, no special precautions
are necessary.
Once the writing direction of a potential word cluster has
been determined, the texture can be analyzed. Decisive for
the exact parameters of the texture analysis of a cluster is the
required minimal number N min of characters which have to
be in close vicinity to each other in writing direction. In gen-
eral, we demand that at least 2N min edges have to be present
within a range of 2C maxDist . In the experiments the parameters
were set to fixed values for N min and C maxDist .
The edges and their directions were
determined by means of the Canny operator, extended to
color images.
Numerous non-character regions, which are difficult to
identify in the preceding steps, can now be found and
removed by means of texture analysis. The result for the
sample video is depicted in Figure 5.
3.5 Motion Analysis
Another feature of artificial text occurrences is that they
either appear statically at a fixed position on the screen or
move linearly across the screen. More complicated motion
paths are extremely improbable between the on- and disassembly
of text on the screen. Any other, more complex
motion would make it much harder to track and thus read
the text, and this would contradict the intention of artificial
text occurrences. This feature applies both to individual
characters and whole words. Note, however, that in commercials
this rule is sometimes broken intentionally in order
to carry an unconscious message to the spectators.
It is the objective of motion analysis to identify regions
which cannot be tracked or which do not move linearly, in
order to reject them as non-character regions. Unlike in our
previous system [10][11] and all other related work, the
object here is to track the characters not only over a short
period of time but over the entire duration of their appearance
in the video sequence. This enables us to extract
exactly one bitmap of every text occurring in the video - a
feature which e.g. is needed by our video abstracting system
[12]. Motion analysis can also be used to summarize the
multiple recognition results for each character to improve
the overall recognition performance.
In addition, a secondary objective of motion analysis is that
the output should be suitable for standard OCR software
packages. Essentially this means that a binary image must
be created.
Formation of Character Objects
A central term in motion analysis is the character object C.
It gradually collects from contiguous frames all those
regions which belong to one individual character. Since we
assume that a character consists of exactly one region per
image after the color segmentation step, at most one region
per image can be contained in a character object.
A character object C is described formally by the triple
C = (A, [a s , a e ], v). A stands for the feature values of the regions
which were assigned to the character object and which are
employed for comparison with other regions, [a s , a e ] for the
frame number interval of the regions' appearance and v for
the estimated and constant speed of the character in pixels/
frame.
Figure 5: Result after texture analysis (242 regions left)

In a first step, each region r i in frame n is compared against
each character object constructed from the frames 1 to n-1.
For this comparison, the mean color, size and position of the
region r i and of the character objects are used. In addition,
if a character object consists of at least two regions, each
candidate region r i is checked whether it fits smoothly into a
linear motion path of the character object.
If a region is sufficiently similar to the best-matching character
object, a copy of that character object will be created,
and the region added to the initial character object. We need
to copy the character object before assigning a region to it
since, at most, one region ought to be assigned to each character
object per frame. Due to necessary tolerances in the
matching procedure, however, it is easy to assign the wrong
region to a character object. The falsely assigned region
would block that character object for the correct region. By
means of the copy, however, the correct region can still be
matched to its character object. It is decided at a later stage
in motion analysis, whether the original character object or
one of its copies is to be eliminated.
If a region does not match to any character object existing so
far, a new character object will be created and initialized
with the region.
Also, if a region best fits to a character object that consists
of fewer than three regions, a new character object is created
and initialized with the region. This prevents a possible
starting region of a new character object from being sucked
up by a still shorter and thus unstable character object.
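The bookkeeping of this matching step can be sketched as follows; the region feature set (x, y, size, scalar color), all tolerances and the first-fit selection are simplified assumptions, and the velocity update is omitted.

```python
# Sketch of the region-to-character-object matching, including the
# copy-on-assign bookkeeping described in the text.
import copy

class CharacterObject:
    def __init__(self, region, frame):
        self.regions = [(frame, region)]  # (frame number, region features)
        self.velocity = (0.0, 0.0)        # pixels/frame; update omitted

    def predict(self, frame):
        f0, r0 = self.regions[-1]
        dt = frame - f0
        return (r0["x"] + self.velocity[0] * dt,
                r0["y"] + self.velocity[1] * dt)

def matches(co, region, frame, pos_tol=4.0, color_tol=30.0, size_tol=0.5):
    px, py = co.predict(frame)
    _, last = co.regions[-1]
    return (abs(px - region["x"]) <= pos_tol and
            abs(py - region["y"]) <= pos_tol and
            abs(last["color"] - region["color"]) <= color_tol and
            abs(last["size"] - region["size"]) <= size_tol * last["size"])

def assign_regions(regions, char_objects, frame):
    for region in regions:
        fitting = [co for co in char_objects if matches(co, region, frame)]
        if not fitting:
            char_objects.append(CharacterObject(region, frame))
            continue
        best = fitting[0]  # first fit stands in for the best fit
        # copy before assigning: a falsely assigned region must not block
        # the character object for the correct region of the same frame
        clone = copy.deepcopy(best)
        clone.regions.append((frame, region))
        char_objects.append(clone)
        if len(best.regions) < 3:
            # a short, still unstable character object must not suck up a
            # possible starting region: also start a fresh character object
            char_objects.append(CharacterObject(region, frame))
    return char_objects
```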
Finally, upon the processing of frame n, all character objects
which offend against the features of characters are eliminated.
In detail, these are
. all copies of a character object which were created in
frame n-1 but not continued in frame n, as well as
. all character objects which could not be continued during
the last 6 frames or whose forecasted location lies
outside the frame, and
. all character objects
. whose regions do not fit well to each other (note
that the requirements are less strict during the
construction of the character object, becoming
more restrictive once a character object is finished),
. which are shorter than 5 frames,
. which consist of fewer than 4 regions, or
. whose regions move faster than 9 pixels/frame.
The values of the parameters were determined experimentally.
After processing all frames of the video sequence some
character objects will represent a subset of some larger character
objects. This peculiarity results directly from the
design of the formation procedure for character objects.
Whenever a region was added to a character object consisting
of fewer than 3 regions, a new character object (initial-
ized with that region) was created, too. Thus, two character
objects C 1 and C 2 are merged if the regions of C 1 form a
subset of the regions of C 2 , their frame intervals are nested
accordingly, and their estimated speeds coincide.
Formation of Text Objects
In order to, firstly, eliminate character objects standing
alone which either represent no character or a character of
doubtful importance, and secondly, to group character
objects into words and lines of text, character objects are
merged into so-called text objects. A valid text object
is formed by at least three character
objects which approximately
1. occur in the same frames,
2. show the same (linear) motion,
3. have the same mean color,
4. lie on a straight line and
5. are neighbors.
These grouping conditions result directly from the features
of Roman letters.
We use a fast heuristic to construct text objects: At the
beginning all character objects belong to the set of the character
objects to be considered. Then, combinations of three
character objects are built until they represent a valid text
object. These character objects are moved from the set of the
character objects into the new text object. Next, all character
objects remaining in the set which fit well to the new text
object are moved from the set to the text object. This process
of finding the next valid text object and adding all fitting
character objects is carried out until no more valid text
objects can be formed or until all character objects are
grouped to text objects.
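A sketch of this greedy grouping is given below; the predicates is_valid_triple and fits are assumed callbacks that encode conditions 1-5 above.

```python
# Sketch of the grouping heuristic: try triples of character objects
# until one forms a valid text object, then greedily absorb all
# remaining fitting character objects; repeat until no valid triple
# is left or all character objects are grouped.
from itertools import combinations

def group_text_objects(char_objects, is_valid_triple, fits):
    pool = set(range(len(char_objects)))
    text_objects = []
    changed = True
    while changed:
        changed = False
        for triple in combinations(sorted(pool), 3):
            cos = [char_objects[i] for i in triple]
            if is_valid_triple(*cos):
                members = set(triple)
                # absorb every remaining character object that fits
                for i in sorted(pool - members):
                    if fits(char_objects[i],
                            [char_objects[j] for j in members]):
                        members.add(i)
                pool -= members
                text_objects.append(sorted(members))
                changed = True
                break
    return text_objects
```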
To avoid splintering multi-line horizontal text into vertical
groups, this basic grouping algorithm must be altered
slightly. In a first run, only text objects are constructed
whose characters lie roughly on a horizontal line. The magnitude
of the gradient of the line must be less than 0.25. In a
second run, character groups are allowed to run into any
direction.
During our experiments we noticed that a character is sometimes
described by two valid character objects which register
the character in different but interleaved frames (see
Figure
6). In a further processing step, such character
objects are merged.
Figure 6: Merging two interleaved character objects: character objects x and y register the same character in different but interleaved frames; their frame-by-frame regions are combined into character object x and character object y is deleted.
The text objects constructed so far are still incomplete. The
precise temporal ranges of occurrence of the character
objects C i of a text object are likely to differ somewhat. In
addition, some character objects have gaps at frames in
which, for various reasons, no appropriate region was found.
The missing characters are now interpolated. At first, all
character objects are extended to the maximum length over
all character objects of a text object, represented by the
frame interval [min i {a s,i }, max i {a e,i }]. The missing regions
are interpolated in two passes: a forward and a backward
pass. The backward pass is necessary in order to predict the
regions missing at the beginning of a character object. This
procedure is depicted in Figure 7.
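The extension and two-pass interpolation can be sketched as follows; propagating the nearest known region stands in for a proper motion-compensated interpolation, which is omitted here.

```python
# Sketch of the completion step: extend a character object to the
# text object's full frame range, then fill gaps in a forward and a
# backward pass by propagating the nearest known region.
def complete(char_object, start, end):
    """char_object: dict frame -> region (missing frames absent).
    Returns dict frame -> region over [start, end]."""
    frames = list(range(start, end + 1))
    filled = {f: char_object.get(f) for f in frames}
    # forward pass: propagate the last known region into gaps
    last = None
    for f in frames:
        if filled[f] is None and last is not None:
            filled[f] = last          # interpolated region
        elif filled[f] is not None:
            last = filled[f]
    # backward pass: predicts the regions missing at the beginning
    nxt = None
    for f in reversed(frames):
        if filled[f] is None and nxt is not None:
            filled[f] = nxt           # interpolated region
        elif filled[f] is not None:
            nxt = filled[f]
    return filled
```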
Visual Presentation of the Results
Motion analysis delivers detailed information about which
text occurs when and where. In order to enable further processing
in a most flexible way, three different kinds of output
images are created:
1. A binary image per frame showing the extracted characters
at their original location (see Figure 8)
2. A binary image per frame showing each extracted text
object on a new line. The relative positions of the characters
within a text object are preserved (see Figure 9).
3. A binary and color image showing all text objects
extracted from the video sequence (see Figure 10). We
call this representation a text summary.
4 TEXT RECOGNITION
For text recognition we incorporated the OCR-Software
Development Kit Recognita V3.0 for Windows 95 into our
implementation. Two recognition modules are offered: one
for typed and one for handwritten text. Since most artificial
text occurrences appear in block letters, the OCR module for
typed text was used to translate the rearranged binary text
images into ASCII text. The recognized ASCII text in each
frame was written out into a database file.
Due to the unusually small character size for such software
packages, the recognition performance partially varied con-
siderably, even from frame to frame as illustrated in Figure
11.
Principally, the recognition result can be improved by taking
advantage of the multiple instances of the same text over
consecutive frames, because each character in the text often
appears somewhat altered from frame to frame due to noise,
and changes in background and/or position. Combining their
recognition results into one final character result might
improve the overall recognition performance. However, as
we will see in the next section, it is not needed by our indexing
scheme.
Figure 7: Completion of character objects (CO): each CO is first extended to the maximum frame range, then missing regions are filled in a forward and a backward interpolation pass
Figure 8: Binary image per frame at original location
Figure 9: Rearranged binary image per frame
Figure 10: Text summary
Figure 11: Image input and text output of two consecutive video frames (frame n and frame n+1)

5 INDEXING AND RETRIEVAL
The upcoming question is how to use the text recognition
result to index and retrieve digital videos. A related question
with significant impact on the answer to the original question
is what minimal text recognition quality should we
assume/demand?
Numerous different font families in all sizes, and sometimes
even artistic fonts, are used in artificial text in digital videos.
Therefore, OCR errors are very likely. We also have to deal
with many garbage characters which result from non-character
regions that could neither be eliminated by our system
nor by the OCR module. Consequently, our indexing and
retrieval scheme should deal well with a poor recognition
quality.
5.1 Indexing
The indexing scheme is quite simple. The recognized characters
for each frame are stored after deletion of all text lines
with fewer than 3 characters. The reason for this deletion is
that, as experience shows, text lines with up to two characters
are produced mainly by non-character objects and, even
if not, consist of semantically weak words such as "a", "by",
"in", "to".
5.2 Retrieval
Video sequences are retrieved by specifying a search string.
Two search modes are supported:
. exact substring matching and
. approximate substring matching.
Exact substring matching returns all frames with substrings
in the recognized text that are identical to the search string.
Approximate substring matching tolerates a certain number
of character differences between the search string and the
recognized text. For approximate substring matching we use
the Levenshtein distance L(A,B) between a shorter search
string A and longer text string B. It is defined as the minimum
number of substitutions, deletions and insertions of
characters needed to transform A into a substring of B [20].
For each frame we calculate the minimal Levenshtein dis-
tance. If the minimal distance is below a certain threshold,
the appearance of the string in the frame is assumed. Since it
can be expected that long words are more likely to contain
erroneous characters, the threshold value depends on the
length of the search string A.
For instance, if a user is interested in commercials from
Chrysler, he/she uses "Chrysler" as the search string and
specifies the allowance of up to one erroneous character per
four characters, i.e. the allowance of one edit operation
(character deletion, insertion, or substitution) to convert the
search string "Chrysler" into some substring of recognized
text.
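The matching itself is the classical dynamic program for edit distance with a free start position in the longer string; the mapping from query length to the allowed number of errors below is an assumed reading of the one-error-per-four-characters rule.

```python
# Sketch of approximate substring matching: minimal Levenshtein
# distance between search string a and any substring of text b.
def min_substring_levenshtein(a, b):
    m, n = len(a), len(b)
    prev = [0] * (n + 1)  # row 0 all zero: a match may start anywhere in b
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution or match
        prev = cur
    return min(prev)  # best match may end anywhere in b

def frame_matches(query, recognized_text, chars_per_error=4):
    # assumed mapping of the one-error-per-four-characters allowance
    allowed = max(1, len(query) // chars_per_error)
    return min_substring_levenshtein(query.lower(),
                                     recognized_text.lower()) <= allowed

# e.g. frame_matches("Chrysler", "cbrys1er acme") -> True (2 edits)
```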
The retrieval user interface of our system is depicted in Figure
12. In the "OCR Query Window" the user formulates
his/her query. The result is presented in the "Query Result
Window" as a series of small pictures. Multiple hits within
one second are grouped into one picture. A single click on a
picture displays the frame in full resolution, while a double
click starts the external video browser.
Figure
12: Retrieval user interface
6 EXPERIMENTAL RESULTS
In this chapter we discuss two things: Firstly, the performance
of our text segmentation and recognition algorithms,
and secondly their suitability for indexing and retrieval.
Since text is used differently in different film parts and/or
film genres, both issues are dealt with separately for three
exemplary video genres:
. feature films (i.e. pre-title sequences, credit titles and
closing sequences with title and credits),
. commercials, and
. newscasts.
Ten video samples for each class have been recorded, adding
up to 22 minutes of video. They were digitized from
several German and international TV broadcasts as 24-Bit
JPEG images at a compression ratio of 1:8, a size of 384 by
288 pixels and at 25 fps. All JPEG images were decoded
into 24-bit RGB images.
6.1 Text Segmentation
Before processing each video sample with our text segmentation
algorithms, we manually wrote down the text appearing
in the samples and the frame number range of its
visibility. Then we processed all ten video samples with our
segmentation algorithms and investigated whether or not a
character had been segmented. To be more precise: we measured
the quality of our segmentation with regard to the
main objective not to discard character pixels. The results
for each video genre are averaged and summarized in Table
2. The segmentation performance is high for title sequences
or credit sequences and newscasts, ranging from 88% to
96%.
It is higher for video samples with moving text and/or moving
background than for video samples where both are sta-
tionary. In the latter case our algorithms cannot profit from
multiple instances of the same text in consecutive frames,
since all instances of the same character have the same
background. Moreover, motion analysis cannot rule out
background regions. Thus, the segmentation performance is
lower. Stationary text in front of a stationary scene can often
be found in commercials. Therefore, segmentation performance
in commercials is lower (66%).
The elimination of non-character pixels is measured by the
reduction factor. It specifies the performance of the segmentation
algorithms with regard to our secondary objective: the
reduction of the number of pixels which have to be considered
during the recognition process. The amount of reduction has
a significant impact on the quality of character recognition
and on speed in the successive processing step. The reduction
factor is defined as

$\text{reduction factor} = \frac{1}{\#\text{frames in video}} \sum_{f \in \text{video}} \frac{\#\text{pixels left in frame } f}{\#\text{pixels in original frame } f}.$

It ranges from 0.04 to 0.01, thus demonstrating the good
performance of the text segmentation step. More details are
given in [9].
6.2 Text Recognition
The performance of the text recognition step is evaluated by
two ratio measurements:
. the ratio of the characters recognized correctly to
the total number of characters and
. the ratio of the additional garbage characters to
the total number of characters.
We call the ratios character recognition rate (CRR) and garbage
character rate (GCR), respectively. However, their
exact values have to be determined manually on a tedious
basis, frame by frame. Thus, we approximate their values by
the following formulas, whose values can be calculated
automatically from the manually determined values of text
appearances in the segmentation experiments and the calculated
recognition result:

$\mathrm{CRR} = \frac{1}{\#\text{frames with text}} \sum_{f \text{ contains text}} \frac{\#\text{recognized characters in frame } f}{\#\text{actual characters in frame } f}$

$\mathrm{GCR} = \frac{1}{\#\text{frames with text}} \sum_{f \text{ contains text}} \frac{\#\text{garbage characters in frame } f}{\#\text{actual characters in frame } f}$

where W f denotes the set of all words
actually appearing in frame f and t f the text recognized in
frame f.
Note that the garbage character rate (GCR) is only defined
for frames in which text occurs. For frames lacking text we
cannot relate the garbage characters to the total number of
characters. Thus, we just count their number per text-free
frame and call it the garbage character count (GCC):

$\mathrm{GCC} = \frac{1}{\#\text{frames without text}} \sum_{f \text{ does not contain text}} \#\text{recognized characters in frame } f$
Table 2: Segmentation results (columns: title sequences or credit sequences, commercials, newscasts). # of frames: 2874, 579, 3147; # of characters: 2715, 264, 80; thereof segmented: see the segmentation rates discussed above.
The measurements show that the recognition rate is fairly
low, ranging from 41% to 76% (see Table 3). Also, the garbage
count is quite high for frames without text, especially
for our samples of newscasts and commercials due to their
many stationary scenes with stationary text. This observation
gives us a strong lead for future research: A computationally
cheap detection method for text-free frames has to
be developed that can reduce the GCC considerably.
OCR errors and misses originate from the narrowness of
current OCR package software with respect to our domain.
They are not adjusted to the very small font sizes used in
videos nor to the specific mistakes of the text segmentation
step.
A peculiarity of the Recognita OCR engine can be noticed
when comparing the GCR and GCC values. While non-character
regions are easily discarded by the OCR engine in
frames with text, it has significant difficulties in text-free
frames. Thus, the GCC exploded for commercials and news-
casts, in which most of the frames were text-free in contrast
to the title sequences or credit sequences.
6.3 Retrieval Effectiveness
Retrieval effectiveness is the ability of information retrieval
systems to retrieve only relevant documents. Applied to our
domain, we measure the effectiveness of finding all video
locations depicting a query word while curbing the retrieval
of false locations prompted by recognition errors or garbage
strings generated from non-characters regions which survived
both in the segmentation and recognition steps.
There exist two well-accepted measures for the evaluation
of retrieval effectiveness. These have been adjusted to our
purpose: recall and precision [17]. Recall specifies the ratio
of the number of relevant video locations found to the total
number of relevant video locations in the video database;
precision specifies the ratio of the number of relevant
retrieval results to the total number of returned video loca-
tions. We assume that a video location depicting the search
text is retrieved correctly if at least one frame of the frame
range has been retrieved in which the query text appears.
Table
4 depicts the measured average values for recall and
precision. They are calculated from the measured recall and
precision values, using each word that occurs in the video
samples as a search string. The recall value for approximate
substring matching ranges from 0.54 to 0.82, i.e. we get 54%
to 82% of the relevant material, which is quite high. Also
the precision value is considerable. Thus, our proposed text
segmentation and text recognition algorithms can be effectively
used to retrieve relevant video locations. The retrieval
application in Figure 12 gives an example.
6.4 Availability
Code for running the text segmentation algorithms will be
available at publishing time via FTP from the host
ftp.informatik.uni-mannheim.de in the directory /pub/MoCA/. In
addition, readers interested in seeing some of the video clips
can retrieve them from http://www.informatik.uni-man-
ognition/.
7 RELATED WORK
Numerous reports have been published about indexing and
retrieval of digital video sequences, each concentrating on
different aspects. Some employ manual annotation [5][2],
others compute indices automatically. Automatic video
indexing generally uses indices based on the color, texture,
motion, or shape of objects or whole images [3][18][24].
Sometimes the audio track is analyzed, too, or external
information such as story boards and closed captions is used
[13]. Other systems are restricted to specific domains such
as newscasts [24], football, or soccer [4]. None of them tries
to extract and recognize automatically the text appearing in
digital videos and use it as an index for retrieval.
Existing work on text recognition has focused primarily on
optical recognition of characters in printed and handwritten
documents in answer to the great demand and market for
document readers for office automation systems. These systems
have attained a high degree of maturity [14]. Further
text recognition work can be found in industrial applica-
tions, most of which focus on a very narrow application
field. An example is the automatic recognition of car license
plates [21]. The proposed system works only for characters/
numbers whose background is mainly monochrome and
whose position is restricted.
There exist some proposals regarding text detection in and
text extraction from complex images and video. In [19],
Smith and Kanade briefly propose a method to detect text in
video frames and cut it out. However, they do not deal with
the preparation of the detected text for standard optical character
recognition software. In particular, they do not try to
determine character outlines or segment the individual char-
acters. They keep the bitmaps containing text as they are.
Human beings have to parse them. They characterize text as
a "horizontal rectangular structure of clustered sharp edges"
[19] and use this feature to identify text segments. Their
approach is completely intra-frame and does not utilize the
video type title sequences or
credit sequences commercials newscasts
CRR 0.76 0.65 0.41
Table
3: Recognition results
exact substring
matching
approx. substring
matching
recall precision recall precision
title sequences or
credit sequences
commercials 0.47 0.73 0.54 0.65
newscasts 0.64 0.95 0.82 0.60
Table
4: Retrieval results.
multiple instances of the same text over successive frames
to enhance segmentation and recognition performance.
Yeo and Liu propose a scheme of caption detection and
extraction based on a generalization of their shot boundary
detection technique for abrupt and gradual transitions to
locally restricted areas in the video [23]. According to them,
the appearance and disappearance of captions are defined as
a localized cut or dissolve. Thus, their approach is inherently
inter-frame. It is also very cheap computationally since
it operates on compressed MPEG videos. However, captions
are only a small subset of text appearances in video. Yeo and
Liu's approach seems to fail when confronted with general
text appearance produced by video title machine, such as
scroll titles, since these text appearances cannot just be classified
by their sudden appearance and disappearance. In
addition, Yeo and Liu do not try to determine the characters'
outline, segment the individual characters and translate
these bitmaps into text.
Zhong et al. propose a simple method to locate text in complex
images [26]. Their first approach is mainly based on
finding connected monochrome color regions of certain
size, while the second locates text based on its specific spatial
variance. Both approaches are combined into a single
hybrid approach.
Wu et al. propose a four-step system that automatically
detects text in and extracts it from images such as photographs
[22]. First, text is treated as a distinctive texture.
Potential text locations are found by using 3 second-order
derivatives of Gaussians on three different scales. Second,
vertical strokes coming from horizontally aligned text
regions are extracted. Based on several heuristics, strokes
are grouped into tight rectangular bounding boxes. These
steps are then applied to a pyramid of images generated
from the input images in order to detect text over a wide
range of font sizes. The boxes are then fused at the original
resolution. In a third step, the background is cleaned up and
binarized. In the fourth and final step, the text boxes are
refined by repeating steps 2 and 3 with the text boxes
detected thus far. The final output produces two binary
images for each text box and can be passed by any standard
OCR software. Wu et al. report a recognition rate of 84%
for images.
Another interesting approach to text recognition in scene
images is that of Ohya, Shio, and Akamatsu [15]. Text in
scene images exists in 3-D space, so it can be rotated, tilted,
slanted, partially hidden, partially shadowed, and it can
appear under uncontrolled illumination. In view of the many
possible degrees of freedom of text characters, Ohya et al.
restricted characters to being almost upright, monochrome
and not connected, in order to facilitate their detection. This
makes the approach of Ohya et al. feasible for our aim,
despite their focus on still images rather than on video. Consequently
they do not utilize the characteristics typical of
text appearing in video. Moreover, we focus on text generated
by video title machines rather than on scene text.
8 CONCLUSION
We have presented our new approach to text segmentation
and text recognition in digital video and demonstrated its
suitability for indexing and retrieval. The text segmentation
algorithms operate on uncompressed frames and make use
of intra- and inter-frame features of text appearances in digital
video. The algorithm has been tested on title sequences
of feature films, newscasts and commercials. The performance
of the text segmentation algorithms was always high.
Unlike in our previous work [10][11], where individual
characters still may consist of several regions of different
colors after the text segmentation, the objective of the current
text segmentation was to produce a binary image that
depicted the text appearing in the video. Hence, this enables
standard OCR software packages to be used to recognize the
segmented text. Moreover, the tracking of characters over
the entire duration of their appearance in a video sequence is
a feature unique to our text segmentation algorithms that
distinguishes them from all other related work.
The recognition performance of the OCR-Software Development
Kit Recognita V3.0 for Windows 95 on our text-
segmented video was sufficient for our simple indexing
scheme. We demonstrated the usefulness of the recognition
results for retrieving relevant video scenes.
Many new applications of our text segmentation algorithms
are conceivable. For instance, they can be used to find the
beginning and end of a feature film, since these are framed
by title sequences (pre-title and closing sequence). Or they
can be used to extract its title [12]. In addition, the location
of text appearances can be used to enable fast-forward and
fast-rewind to interesting parts of a video. This particular
feature might be useful when browsing commercials and
sportscasts. Together with automatic text recognition algo-
rithms, the text segmentation algorithms might be used to
find higher semantics in videos.
ACKNOWLEDGEMENTS
We thank Carsten Hund for helping us with the implementation
of the character segmentation algorithms and the experiments.
--R
Nearest Neighbor Pattern Classification.
Media Streams: Representing Video for Retrieval and Repurposing.
Query by Image and Video Content: The QBIC System.
Automatic Parsing of TV Soccer Programs.
Picture Segmentation by a Traversal Algorithm.
Digital Image Processing.
Methods Towards Automatic Video Analysis
Automatic Text Recognition in Digital Videos.
Automatic Text Recognition for Video Indexing.
Video Abstracting.
Historical Review of OCR Research and Development.
Recognizing Characters in Scene Images.
Frequently Asked Questions about Colour.
Introduction to Modern Information Retrieval.
A Fully Automated Content-based Image Query System
Video Skimming for Quick Browsing Based on Audio and Image Charac- terization
String Searching Algorithms.
Gray Scale Image Processing Technology Applied to Vehicle License Number Recognition System.
Finding Text in Images.
Visual Content Highlighting via Automatic Extraction of Embedded Captions on MPEG Compressed Video.
Automatic Parsing of News Video.
Parsing
Locating Text in Complex Color Images.
Region Growing: Childhood and Adolescence
--TR
A computational approach to edge detection
Text segmentation using Gabor filters for automatic document processing
String searching algorithms
Media streams
Digital image processing (3rd ed.)
Video parsing, retrieval and browsing
Integrated video archive tools
A technical introduction to digital video
Automatic text recognition for video indexing
Finding text in images
abstracting
Practical Handbook on Image Processing for Scientific Applications
Introduction to Modern Information Retrieval
Query by Image and Video Content
Recognizing Characters in Scene Images
Video OCR for Digital News Archive
Automatic Parsing of TV Soccer Programs
--CTR
Yongwei Zhu , Kai Chen , Qibin Sun, Multimodal content-based structure analysis of karaoke music, Proceedings of the 13th annual ACM international conference on Multimedia, November 06-11, 2005, Hilton, Singapore
Qiang Zhu , Mei-Chen Yeh , Kwang-Ting Cheng, Multimodal fusion using learned text concepts for image categorization, Proceedings of the 14th annual ACM international conference on Multimedia, October 23-27, 2006, Santa Barbara, CA, USA
Duminda Wijesekera , Daniel Barbar, Multimedia applications, Handbook of data mining and knowledge discovery, Oxford University Press, Inc., New York, NY, 2002 | OCR;text recognition;video indexing;video content analysis;video processing;character segmentation |
329461 | A code-motion pruning technique for global scheduling. | In the high-level synthesis of ASICs or in the code generation for ASIPs, the presence of conditionals in the behavioral description represents an obstacle to exploiting parallelism. Most existing methods use greedy choices in such a way that the search space is limited by the applied heuristics. For example, they might miss opportunities to optimize across basic block boundaries when treating conditional execution. We propose a constructive method which allows generalized code motions. Scheduling and code motion are encoded in the form of a unified resource-constrained optimization problem. In our approach many alternative solutions are constructed and explored by a search algorithm, while optimal solutions are kept in the search space. Our method can cope with issues such as speculative execution and code duplication. Moreover, it can tackle constraints imposed by the advance choice of a controller, such as pipelined-control delay and limited branch capabilities. The underlying timing models support chaining and multicycling. As taking code motion into account may lead to a larger search space, a code-motion pruning technique is presented. This pruning is proven to keep optimal solutions in the search space for cost functions in terms of schedule lengths. | INTRODUCTION
In the high-level synthesis of an application-specific integrated circuit (ASIC) or in the code generation
for an application-specific instruction set processor (ASIP), four main difficulties have to be faced during
scheduling when conditionals and loops are present in the behavioral description:
a) the NP-completeness of the resource-constrained scheduling problem itself.
1. On leave from INE-UFSC, Brazil. Partially supported by CNPq (Brazil) under fellowship award n. 200283/94-4.
b) the limited parallelism of operations within basic blocks, such that available resources are poorly utilized.
c) the possibility of state explosion because the number of control paths may explode in the presence of
conditionals.
d) the limited resource sharing of mutually exclusive operations, due to the late availability of test results.
Most methods apply different heuristics for each subproblem (basic-block scheduling, code motion,
code size reduction) as if they were independent. A heuristic is used to decide the order of the operations
during scheduling (like the many flavors of priority lists), another to decide whether a particular code
motion is worth doing [10][28], yet another for a reduction of the number of states [31]. As a result, these
approaches might miss optimal solutions. We propose a formulation [29] to encode potential solutions for
the interdependent subproblems. The formulation abstracts from the linear-time model and allows us to
concentrate on the order of operations and on the availability of resources. Different priority encodings are
used to induce alternative solutions and many solutions are generated and explored. The basic idea is to
keep high-quality solutions in the search space when code motion and speculative execution are taken into
account. Since the number of explored solutions can be controlled by the parameters of a search method,
our approach allows a tradeoff between accuracy and search time. In our approach, code motions are in
principle unrestricted, although constrained by the available resources. As code motion typically leads to
a larger search space, we have envisaged a technique to reduce search time.
The main contribution of this paper is a code-motion pruning technique. The technique first captures
the constraints imposed to downward code motion. Then, these constraints are used as a criterion to select
the most efficient code motions. We show experimental evidence that the induced solution space has higher
density of good-quality solutions when our code-motion pruning is applied. As a consequence, for a given
local search method and for a same number of explored solutions, the application of our technique typically
leads to a superior local optimum. Conversely, a smaller number of solutions has to be explored to reach
a given schedule length, which correlates with a reduction of search time.
The paper is organized as follows. In section 2, we formulate the problem and show its representation.
A survey of existing methods to tackle the problem is described in section 3. Our approach is summarized
in section 4 and our support for global scheduling is described in section 5. In section 6, we show how the
constraints imposed to code motion are captured and we explain our code-motion pruning technique. In
section 7, we list the main features of our approach. Experimental results are summarized in section 8. We
conclude the paper in section 9 with some remarks and suggestions for further research. A proof for our
code-motion pruning is presented in Appendix I.
2.1 Motivation
When conditionals are present in the behavioral description, they introduce a basic block (BB) structure.
For instance, the description in figure 1a has four BBs, which are depicted by the shadowed boxes. In the
figure, i1 to i9 represent inputs, o1 and o2 represent outputs and x, y and z are local variables. Operations
are labeled with small letters between brackets and BBs are labeled with capital letters.
Figure 1 - The basic block structure: (a) the description, (b) the CDFG, (c) the BBCG
Assume that an adder, a subtracter and a comparator are available. We could think of scheduling each
independently. Nevertheless, such straightforward approach would not be efficient, because the
amount of parallelism inside a BB is limited. For example, in BB I the adder would remain idle during two
cycles, even though operation q in BB L could be scheduled at the same step as either k or l. The example
suggests that we should exploit parallelism across BB boundaries, by allowing operations to move from
one BB to another, which is called code motion. If operation q is allowed to move from BB L into BB I,
a cycle will be saved in BB L. Note that operation q is always executed regardless the result of conditional
c 1 . On the other hand, operations m and n are conditionally executed, depending on the result of conditional
c 1 . We say that operations m and n are control dependent on conditional c 1 . However, operation m is not
data dependent on operation l and they could be scheduled at the same time step. This code motion violates
the control dependence, as m is executed before the evaluation of the conditional. But as soon as the outcome
of c 1 is known, the result of m should be either committed or discarded. This technique is called speculative
execution. If the result of the conditional turns to be true, a cycle will be saved. In the general case,
it may be necessary to insert extra code in order to "clean" the outcome of the moved operation, the so-called
compensation code. For this example, however, no compensation code is needed, as variable z will
be overwritten by operation n. We can consider moving operation q into BB K to be executed in parallel
with operation n. However, as operation q must always be executed and the operations in BB K are only
executed when the result of c 1 is false, a copy of q has to be placed at the end of BB J. As a result, we say
that code duplication takes place. For this example, duplication saves a cycle if the result of c 1 is false.
Impact on different application domains. Control-dominated applications normally require that each
path be optimized as much as possible. Here the role of code motion is obvious. On the other hand, in DSP
applications, it is unnecessary to optimize beyond the given global time constraint [20]. Although highly
optimized code might not be imperative, code motions should not be overlooked even in DSP applications,
because they can reduce the schedule length of the longest path. Consequently, the tighter the constraints
are, the more important the code motions become. In the early phases of a design flow, the optimization
objectives are dictated by the real-time requirements of embedded systems design. The longest possible
execution time of a piece of code must still meet real-time constraints [6]. This fact motivates the formulation
of the problem in terms of schedule lengths (see section 2.2). Moreover, as these early phases tend to
be iterated several times, runtime efficiency is imperative. Often, we need a fast but accurate estimate in
terms of schedule lengths [14]. This reasons motivate the development of a technique to prevent (prune)
inefficient code motions (see section 6). In our approach, the advantage of taking code motions into
account is not bestowed at the expense of a "much larger search space", due to our code-motion pruning.
2.2 Formulation
In order to define our optimization problem we represent both the specification (data and control depen-
dencies) and the solution of our problem in the form of graphs, as defined below.
Definition 1: A control data flow graph CDFG = (V, E) is a directed graph where the nodes represent
operations and the edges represent the dependencies between them. #
We assume that the CDFG has special nodes to represent conditional constructs. An example is shown
in figure 1b for the description in figure 1a. Circles represent operations. Triangles represent inputs and
outputs. Pentagonal nodes are associated with control-flow decisions. A branch node (B) distributes a
single value to different operations and a merge node (M) selects a single value among different ones.
Branch and merge nodes are controlled by a conditional, whose result is "carried" by a control edge (dashed
edge in figure 1b). A detailed explanation of those symbols and their semantics can be found in [9].
Definition 2: A state machine graph (SMG) is a directed graph where the nodes represent states
and the edges represent state transitions. □
The SMG can be seen as a "skeleton" for the state transition diagram of the underlying finite state
machine whose formal definition can be found in [7]. To keep track of code motion, we use an auxiliary
graph that is a condensation of the CDFG, derived by using depth-first search, as defined below.
Definition 3: A basic block control flow graph (BBCG) is a directed graph where the nodes represent
basic blocks and the edges represent the flow of control. □
All operations in the CDFG enclosed between a pair of branch and merge nodes controlled by the same
conditional are condensed in the form of a basic block in the BBCG. All branch (merge) nodes in the CDFG
controlled by the same conditional are condensed into a single branch (merge) node in the BBCG domain.
All input (output) nodes are condensed into a single input (output) node. For instance, the BBCG depicted
in figure 1c explicitly shows the BB structure for the description in figure 1a. Circles represent basic blocks
and each BB is associated with a set of operations in the CDFG. In the BBCG domain, a branch node (B)
represents control selection and a merge node (M) represents data selection.
When the CDFG contains conditionals, operations may execute under different conditions. The execution
condition of an operation (or group of operations) is represented as a boolean function, here called a
predicate, whose variables are called guards [26]. A guard gk is a boolean variable associated with the output
of a conditional ck. In the description in figure 1a, conditional c1 is associated with guard g1. As a consequence,
operations m and n will execute under predicates g1 and g1' (the complement of g1), respectively.
All operations enclosed by a BB have the same execution condition and each path in the BBCG (from
input to output) defines a sequence of BBs. As the values of the guards are data dependent, the taken path
is determined only at execution time. The set of operations enclosed by the BBs on a given path is called
the execution instance (EXI). Each path in the BBCG corresponds to exactly one EXI in the CDFG.
Let us now formulate the resource-constrained problem addressed in this paper:
Optimization problem (OP): Given a number K of functional units and an acyclic CDFG, find an SMG
in which the dependencies of the CDFG are obeyed and the resource constraints are satisfied for each
functional-unit type, such that the cost function cost = f(L1, L2, …, Ln) is minimized, where Li is the schedule length
of the ith path in the BBCG and f is a monotonically increasing function. □
A solution of the OP is said to be complete only if a valid schedule exists for every possible execution
instance. Since conditional resource sharing is affected by the timely availability of guard values, a solution
is said to be causal when no guard value is used before the time when it is available. A feasible solution
has to satisfy all constraints and must be both causal and complete.
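For concreteness (our illustration, not part of the formulation itself), two common instances of the monotonically increasing function f are the longest-path length and the average path length. The Python sketch below evaluates both over a vector of schedule lengths; all names are hypothetical.

# Two monotonically increasing cost functions over path schedule
# lengths, matching the cost function f(L1, ..., Ln) of the OP.

def cost_longest_path(lengths):
    # f = max_i L_i: optimize the worst-case path.
    return max(lengths)

def cost_average(lengths, probabilities=None):
    # Expected schedule length under given branch probabilities
    # (equal probabilities if none are supplied).
    if probabilities is None:
        probabilities = [1.0 / len(lengths)] * len(lengths)
    return sum(p * l for p, l in zip(probabilities, lengths))

# Example: three BBCG paths with schedule lengths 4, 4 and 7.
print(cost_longest_path([4, 4, 7]))   # 7
print(cost_average([4, 4, 7]))        # 5.0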
3 RELATED WORK
3.1 Previous high-level synthesis approaches
In path-based scheduling (PBS) [2][5] a so-called as-fast-as-possible (AFAP) schedule is found for
each path independently, provided that a fixed order of operations be chosen in advance. Due to the fixed
order and to the fact that scheduling is cast as a clique covering problem on an interval graph, code motions
resulting from speculative execution are not allowed. The original method has been recently extended to
release the fixed order [3], but reordering of operations is performed inside BBs only. Reordering is not
allowed across conditional operations, because this would destroy the notion of interval, which is the very
foundation of the whole PBS technique. Consequently, although reordering improves the handling of more
complex data flow, the method cannot support speculative execution, which limits the exploitation of parallelism
with complex control flow [19]. This limitation is lifted in tree-based scheduling (TBS) [15],
where speculative execution is allowed and the AFAP approach is preserved by keeping all paths in a tree.
However, since the notion of interval is lost, a list scheduler is used to fill states with operations.
Condition vector list scheduling (CVLS) [31] allows code duplication and supports some forms of speculative
execution. Although it was shown in [27] that the underlying mutual exclusion representation is
limited, the approach would possibly remain valid with some extension of the original condition vector
or with some other alternative, such as the representations suggested in [1] and [27].
A hierarchical reduction approach (HRA) is presented in [18]. A CDFG with conditionals is transformed
into an "equivalent" CDFG without conditionals, which is scheduled by a conventional scheduling
algorithm. Code duplication is allowed, but speculative execution is not supported. In [28] an approach
is presented in which code motions are exploited: first, BBs are scheduled using a list scheduler and, subsequently,
code motions are allowed. One priority function is used in the BB scheduler and another for code
motion. Code motion is allowed only inside windows containing a few BBs to keep runtime low, but then
iterative improvement is needed to avoid overly restricting the kinds of code motion allowed.
Among these methods, only PBS is exact, but it solves a partial problem in which speculative execution
is not allowed. TBS and CVLS address BB scheduling and code motion simultaneously, but use classical
list-scheduler heuristics. In [28] a different heuristic is applied to each subproblem. All these methods may
exclude optimal solutions from the search space. In [26], an exact symbolic technique is presented. Nevertheless,
the use of an exact method in the early (more iterative) phases of a design flow is unlikely, especially
because no pruning is presented to cope with the larger search space due to code motion.
3.2 Previous approaches in the compiler arena
In trace scheduling (TS) [10] a main path (trace) is chosen to be scheduled first and independently of
other paths; then another trace is chosen and scheduled, and so on. First, resource-unconstrained schedules
are produced and then heuristically mapped onto the available resources. TS does not allow certain types
of code motion across the main trace. The downside of TS is that main-trace-first heuristics work well only
in applications whose profiling shows a highly predictable control flow (e.g. numerical applications).
Percolation Scheduling (PS) [23] defines a set of semantics-preserving transformations which convert
a program into a more parallel one. Each primitive transformation induces a local code motion. PS is an
iterative neighborhood scheduling algorithm in which the atomic transformations (code motions) can be
combined to permit the exploration of a wider neighborhood. Heuristics are used to decide when and where
code motions are worth doing (priorities are assigned to the transformations and their application is
directed first to the "important" part of the code). The most important aspect of PS is that its primitive transformations
are potentially able to expose all the available instruction-level parallelism. Another system
of transformations is presented in [11] and it is based on the notion of regions (control-equivalent BBs).
Operations are moved from one region to another by the application of a series of primitive transforma-
tions. As the original PS is essentially not a resource-constrained parallelization technique, it was extended
with heuristic mapping of the idealized schedule onto the available resources [22][25]. The drawback of
the heuristic mapping onto resources performed in both TS and PS [22][25] is that some of the greedy code
motions have to be undone [8][21], since they cannot be accommodated within the available resources.
More efficient global resource-constrained parallelization techniques have been reported [8][21][30],
whose key issue is a two-phase scheduling scheme. First, a set of operations available for scheduling is
computed globally and then heuristics are used to select the best one among them. In [8], a global resource-constrained
percolation scheduling (GRC-PS) technique is described. After the best operation is selected,
the actual scheduling takes place through a sequence of PS primitive transformations, which allow the
operation to migrate iteratively from the original to its final position. A global resource-constrained selective
scheduling (GSS) technique is presented in [21]. As opposed to GRC-PS, the global code motion of
the selected operation is performed at once, instead of applying a sequence of local code motions. The
results presented in [21] give some experimental evidence that, although PS and GSS achieve essentially
the same results, GSS leads to smaller parallelization times.
3.3 How our contribution relates to previous work
On the one hand, our approach retains some of the major achievements in resource-constrained
scheduling of recent years, as follows.
a) Like most global scheduling methods [8][21][30], our approach also adopts a global computation of
available operations. However, our implementation is different, since it is based on a CDFG, unlike the
above mentioned approaches.
b) We perform global code motions at once, in a way similar to [21], but different from [8] and [11], where
a sequence of primitive transformations is applied.
On the other hand, what distinguishes our approach from related work is the following:
a) Our formulation is different from all the above mentioned methods with respect to the order in which
the operations are processed. To allow proper exploration of alternative solutions, we do not use heuristic-based
selection. Instead, selection is based on a priority encoding, which is determined by an external
(and therefore tunable) search engine (see section 4).
b) Unlike most resource-constrained approaches [8][21][30], we provide support for exploiting downward
code motion (see section 6.1).
c) Our main contribution is a new code-motion pruning technique, which takes the constraints imposed
on code motion into account and prevents inefficient code motions (see section 6.2).
4 OUR CONSTRUCTIVE APPROACH
We envisage an approach in which no restriction is imposed beforehand, either on the kind of code motion
or on the order in which operations are selected to be scheduled. In this section, we introduce our constructive
approach, which is free from such restrictions. An outline of the approach is shown in figure 2. Solutions
are encoded by a permutation π of the operations in the CDFG. A solution explorer creates permutations
and a solution constructor builds a solution for each permutation and evaluates its cost. The explorer
is based on a local search algorithm [24], which selects the solution with lowest cost. While building a solution,
the constructor needs to check many times for conditional resource sharing. These tests are modeled
as boolean queries and are directed to a so-called boolean oracle (the term was coined in [4]), which allows
us to abstract from the way the queries are implemented. A detailed view of the explorer is outside the scope
of this paper. The way in which permutations are generated according to the criteria of a given local search
algorithm can be found in [13]. From now on, we focus on the solution constructor.
Figure 2 - An outline of the approach (solution explorer, solution constructor, boolean oracle)
To keep high-quality solutions in the search space, we have designed our constructor such that the following
properties hold. First, no greedy choices are made and no restrictions are imposed on code motion
(see sections 5.2 and 7.4). Second, pruning is used to discard low-quality solutions, by preventing the generation
of solutions which certainly do not lead to lower cost (see section 6.2). Third, every permutation
generates a complete and causal solution (see section 7.3). Although our method cannot ensure that an
optimal solution will always be reached, as a consequence of the local-search formulation, at least one optimal
solution is kept in the search space; proofs for this claim can be found in Appendix I and in [13]. In
general the method allows trading CPU time for solution quality. In practice the implemented method
succeeds in finding an optimal solution in short CPU times for all the tested benchmarks (see section 8).
5 THE SOLUTION CONSTRUCTOR
5.1 Supporting code motion and speculative execution
We use predicates to model the conditional execution of operations. Based on the timely availability of
guard values, we distinguish between two kinds of predicates. A predicate G is said to be static when it
represents the execution condition of an operation when all the values of its guards are available. A
dynamic predicate Φ represents the execution condition of an operation when one or more guard values
may not be available at a given time step. Note that the static predicate abstracts from the relative position
in time between the completion of a conditional and the execution of a control-dependent operation. Both
predicates are used to keep track of code motions: Φ is used to check for conditional resource sharing and
G is used to check whether the result of a speculatively executed operation should be committed or not.
Assume that an operation o moves from BB I to BB J. Let GI and GJ be the static predicates of BBs
I and J, respectively. The product G = GI · GJ gives the static predicate of operation o after the motion.
In figure 1, for instance, if operation q is duplicated into BBs J and K, the copy of q in BB J executes under
G = g1, while the copy of q in BB K executes under G = g1'.
As code motion may lead to speculative execution, G may not represent the actual execution condition
(because some guard value may not be available in time), and a new predicate Φ must be computed by dropping
from consideration the guards whose values are not available. Observe that Φ is necessary to prevent
speculative execution from leading to non-causal solutions. In figure 1, for instance, the static predicate
of operation m is G = g1; assume it moves from BB J into BB I. As operation m is speculatively
executed in BB I, the actual execution condition will be given by the dynamic predicate Φ = 1 (inside the
new BB, m is always executed). Note that the dynamic predicate Φ is the condition under which a result
is produced, while the static predicate G is the condition under which that result is committed. Algorithm
1 shows how to obtain the dynamic predicate Φ from the predicate G at a given time step, where end(ck)
stands for the completion time of conditional ck. (Assume for the time being that slot = 0.) Functions support
and smooth represent concepts from Boolean algebra, and their definitions can be found in [7].
dynamicPredicate(G, step, slot)
  Φ := G
  foreach guard gk ∈ support(Φ)
    if end(ck) + slot > step then Φ := smooth(Φ, gk)
  return Φ
Algorithm 1 - Evaluation of the dynamic predicate
Conditional resource sharing. During the construction of a solution, we need to check whether two operations
can share a resource under different execution conditions. Let i and j denote two operations. These operations
can share a resource at a given time step only when the identity Φi · Φj ≡ 0 holds. The boolean oracle
is used to answer this query as well as to compute predicates.
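As an illustration (ours; the actual oracle is BDD-based and handles arbitrary boolean functions), the sketch below assumes that every predicate is a product of guard literals, represented as a set of (guard, polarity) pairs. Smoothing then amounts to dropping unavailable guards, and the sharing test checks for a guard occurring with opposite polarities.

# A predicate is modeled as a product of guard literals:
# a frozenset of (guard, polarity) pairs, e.g. G = g1.g2' becomes
# {("g1", True), ("g2", False)}.  This covers only the product-form
# predicates used in the examples.

def dynamic_predicate(G, step, end, slot=0):
    # Algorithm 1: smooth out every guard gk whose conditional ck
    # has not completed early enough (end[g] is the completion time
    # of the conditional producing guard g).
    return frozenset((g, p) for (g, p) in G if end[g] + slot <= step)

def can_share(phi_i, phi_j):
    # Phi_i . Phi_j == 0 iff some guard occurs with opposite
    # polarity in the two products.
    lits_i = dict(phi_i)
    return any(g in lits_i and lits_i[g] != p for (g, p) in phi_j)

# Guard g1 completes at step 2, so its value is unknown at step 1.
end = {"g1": 2}
G_m = frozenset({("g1", True)})     # m executes under g1
G_n = frozenset({("g1", False)})    # n executes under g1'
phi_m = dynamic_predicate(G_m, step=1, end=end)   # empty = true
phi_n = dynamic_predicate(G_n, step=1, end=end)
print(can_share(phi_m, phi_n))   # False: sharing would be non-causal
print(can_share(G_m, G_n))       # True: statically mutually exclusive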
5.2 The scheduler engine
The solution constructor takes a permutation from the explorer and generates a solution. In the construc-
tor, we borrow techniques from the constructive topological-permutation scheduler [13]. A schedule is
constructed from a permutation as follows. The scheduler selects operations to be scheduled one by one.
At any instant, the first ready operation (unscheduled operation whose predecessors are all scheduled) in
the permutation is selected. Each selected operation is scheduled at the earliest time where a free resource
is available. In [13] it is proven that the optimum schedule is always among those created by this principle
of topological-permutation construction.
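The following sketch (an illustration under simplifying assumptions, namely single-cycle operations, an acyclic CDFG and no conditionals; not the actual implementation) shows the principle of topological-permutation construction:

# Build a linear-time schedule from a permutation: repeatedly pick
# the first ready operation in pi and place it at the earliest step
# with a free resource of its type.
# preds[o]: data predecessors of o; kind[o]: required unit type;
# units[k]: number of available units of type k.

def schedule(pi, preds, kind, units):
    start = {}                       # operation -> time step
    used = {}                        # (step, unit type) -> count
    while len(start) < len(pi):
        # First ready operation: unscheduled, predecessors scheduled.
        op = next(o for o in pi if o not in start
                  and all(p in start for p in preds[o]))
        # Earliest step after its predecessors with a free resource.
        t = max((start[p] + 1 for p in preds[op]), default=0)
        while used.get((t, kind[op]), 0) >= units[kind[op]]:
            t += 1
        start[op] = t
        used[(t, kind[op])] = used.get((t, kind[op]), 0) + 1
    return start

# Toy run with two adders, where b depends on a.
pi = ["a", "c", "b"]
preds = {"a": set(), "b": {"a"}, "c": set()}
print(schedule(pi, preds, dict.fromkeys(pi, "+"), {"+": 2}))
# {'a': 0, 'c': 0, 'b': 1}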
Figure 3 - Using the topological permutation scheduler (π = [a, c, d, e, b, f, g, t]): (a) behavioral description and resource utilization vector, (b) CDFG, (c) scheduling evolution ignoring mutual exclusion, (d) schedule with conditional resource sharing
Figure 3 shows how a linear-time sequence is constructed by the topological permutation scheduler
for a given π. The behavioral description and its CDFG are shown in figures 3a and 3b. The utilization
of resources is modeled by placing operations in the entries of a resource utilization vector (RUV), as
shown in figure 3a. There is an entry in the RUV for each functional unit. First, we apply the
scheduler without paying attention to mutual exclusion, just to show the principle (see figure 3c). In a
second experiment, we apply mutual exclusion. In that case, operation b can be scheduled at the second
step by sharing an adder with operation c (see figure 3d). We assume that the outcome of t is not available
inside the first step, to allow a and b to conditionally share a resource. The resulting schedule length is
reduced to 5 steps for both EXIs. Note, however, that the shorter EXI could be scheduled on its own
in only two steps. Thus, the information about mutual exclusion is clearly not enough; the limitation
is the linear-time model. To allow a more efficient solution, some mechanism has to split the linear-time
sequence by exposing a flow of control. Our mechanism is based on additional information extracted from
the CDFG, as we explain in the next section.
6 A METHOD TO PRUNE INEFFICIENT CODE MOTIONS
6.1 Capturing and encoding the freedom for code motion
In our method we want to capture the freedom for code motion without restrictions, and for this purpose
we introduce the notion of a link. A link connects an operation u in the CDFG with a BB v in the BBCG.
It can be given the following interpretation: operation u can be executed under the predicate which defines
the execution of operations in BB v. An operation can be linked to several mutually exclusive BBs, as
it may belong to many execution instances. Figure 4 illustrates the link concept.
Figure 4 - The link concept: (a) behavioral description, (b) CDFG and BBCG with links
Initial links. We will encode the freedom for code motion by using a set of initial links. Given some
operation, its initial link points to the latest BB in a given path where the operation can still be executed.
Initial links are obtained as follows.
First, we look for the so-called terminal operations. A terminal is either a direct predecessor of an output
node or a direct predecessor of a branch or merge node whose result must be available before control selection
or data selection. In figure 4, conditional t is a terminal, because the result of a conditional must
always be available before control selection takes place. Even though operations a and b are direct predecessors
of branch nodes, they are not terminals, as they do not affect control selection. Operations c and
d are terminals, as their results must be available prior to data selection. Operation e is obviously a terminal.
Then, each terminal attached to a branch, merge or output node in the CDFG is linked to the BB which
precedes the corresponding branch, merge or output node in the BBCG. In figure 4, initial links are shown
for terminals c and d (due to data selection), t (due to control selection) and e (due to the output).
Afterwards, we link non-terminal operations. Each predecessor of a terminal is linked to the same BB
to which the latter is linked. Operation a will have initial links (not shown in the figure) to both BB B and BB
C. Operation b will have a single initial link pointing to BB C.
These initial links can be interpreted as follows. Conditional t must be executed at the latest in BB A,
because its result must be available before control selection (branch). Operation c must be executed at the latest
in BB B and operation d at the latest in BB C, because their results must be available prior to their selection
(merge).
Note that an initial link encodes the freedom for code motion downwards. This means that each operation
is free to be executed inside any preceding BB on the same path, as soon as data precedence and
resource constraints allow (the only control dependency to be satisfied is the need to execute the operation
at the latest inside the BB pointed to by the initial link). The underlying idea is to traverse the BBCG in topological
order, trying to schedule operations in each visited BB, even if some operation does not originally
belong to it. Observe that, if operation u is given an initial link to BB v and v is reached in the traversal,
then u must be scheduled inside BB v. We say that the assignment of operation u to BB v is compulsory,
or equivalently, that operation u is compulsory in BB v. The notion of compulsory assignment of operations
allows us to identify which control dependencies can be violated. This notion is also one of the keys of
our pruning technique, as will be shown in the next subsection.
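The generation of initial links just described can be sketched as a backward propagation over the CDFG (our illustration, under the assumption that the CDFG is given as a plain successor relation and that terminals have already been identified and seeded):

# succs[u]: data successors of operation u.
# seed[u] : BB linked to terminal u (None for non-terminals).
# Returns, for every operation, the set of BBs pointed to by its
# initial links (one per mutually exclusive path it belongs to).

def initial_links(ops, succs, seed):
    links = {u: ({seed[u]} if seed[u] else set()) for u in ops}
    changed = True
    while changed:            # fixpoint: predecessors inherit links
        changed = False
        for u in ops:
            inherited = set().union(*(links[s] for s in succs[u]))
            if not inherited <= links[u]:
                links[u] |= inherited
                changed = True
    return links

# Figure 4-like toy: terminal c is linked to BB B and terminal d to
# BB C; a feeds both c and d, while b feeds only d.
ops = ["a", "b", "c", "d"]
succs = {"a": ["c", "d"], "b": ["d"], "c": [], "d": []}
seed = {"a": None, "b": None, "c": "B", "d": "C"}
print(initial_links(ops, succs, seed))
# a -> {B, C}, b -> {C}, c -> {B}, d -> {C}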
Final assignments. Each link u → v will be called a final operation assignment (from now on simply
an assignment) when the scheduling of the respective operation inside the pointed BB obeys precedence
constraints and does not imply the need for more than the available functional units. Assignments
which might increase register and/or interconnect usage are included in the search space. Each assignment
of an operation u to a BB v is given the following attributes: a) begin (starting time of operation u inside
BB v); b) end (completion time of operation u inside BB v); c) G (static predicate); d) Φ (dynamic
predicate). Note that assignments represent a relative-time encoding. The absolute time is control-dependent
and is given by the instant when BB v starts executing plus the value in attribute begin.
Handling redundancies. Operations may be redundant for some paths in a behavioral description, as
shown in [15]. In our method, such redundancies are eliminated during the generation of initial links.
Operation b in figure 4, for instance, will only be linked to BB C, even though it was originally described
in BB A, as if it belonged to both paths. TBS [15] uses tree optimization to remove redundancies by propagating
each operation to the latest BB where it is to be used. CVLS [31] eliminates them by using extended
condition vectors. Even though they remove redundancies, these methods do not care about encoding the
freedom for code motion properly, because code motion will be determined by a heuristic-based priority
function anyway. For example, in order to cast the control-flow graph into a tree structure, TBS duplicates
all operations in BBs succeeding merge nodes. As a consequence of this a priori duplication, TBS loses
the information about the freedom for code motion due to data selection. This information could be used,
during the construction of a solution, to avoid inefficient code duplication.
Figure 5 - Linking unconditional operations: (a) behavioral description, (b) BBCG
Freedom for code motion. Our initial links not only eliminate redundancies, but also encode freedom
for code motion. In figure 5, f may be linked to BB A, to both BB B and BB C, or to BB D. The link pointing
to BB D is the initial link, because the only control dependency to be satisfied is that f must execute before the output is available.
As operation f can be executed in BB D or in any preceding BB as soon as resource and data-dependency
constraints are satisfied, unrestricted code motions can be exploited.
6.2 Code-motion pruning
Traversing in topological order. The solution constructor follows the flow of tokens in the CDFG while
the BBCG is traversed in topological order. An operation can be assigned to any traversed BB, as soon
as data precedence and resource constraints allow. The first ready operation in the permutation π is
attempted to be scheduled inside the visited BB. Notice that, during the traversal, some operation may be
ready in a given EXI but not in another. In figure 3d, for instance, operation g is ready at the third time step
for one EXI, but only at the fifth step for the EXI containing e, d, f and g. For this reason, we say that
an operation is ready under the predicate G of a given BB if the operation is ready and it belongs to a path
which contains that BB.
For a given initial link u → v, the assignment of the operation to a BB being visited is not compulsory
as long as BB v is not reached. If BB v is reached in the traversal, u is scheduled inside BB v and the initial
link becomes a final assignment. However, if operation u succeeds in being scheduled inside some ancestor
w of BB v, inducing a code motion, the initial link will be revoked and replaced by a final assignment
u → w. Operations are attempted to be scheduled inside each traversed BB according to the criteria below.
Criterion 1 (Code-motion pruning). Let o be an operation with an initial link pointing to BB j. If o is
ready under the predicate G of a visited BB i, with i ≠ j, and the scheduling of o would require
the allocation of exactly δ = ⌈delay(o)⌉ extra time steps to accommodate its execution, then operation o
will not be scheduled inside BB i, preventing the code motion from BB j into BB i. □
Criterion 2 (Constructive scheduling). For a visited BB i with predicate G, the first operation in π
ready under predicate G and not rejected by criterion 1 is scheduled at the earliest time inside BB i where
a free resource is available. □
We claim that the application of criteria 1 and 2 does not discard any better solutions of the optimization
problem defined in section 2.2 (see the proofs in [13] and in Appendix I).
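A minimal sketch of how criterion 1 could be checked during construction (our illustration; extra_steps_needed is a hypothetical helper that asks the scheduler engine how many new time steps placing o inside the visited BB would allocate):

import math

def is_suitable(o, visited_bb, link_bb, delay, extra_steps_needed):
    # Criterion 1: a non-compulsory assignment (visited_bb differs
    # from the BB pointed to by o's initial link) is pruned when it
    # would allocate exactly ceil(delay(o)) extra time steps, since
    # such a code motion cannot shorten any path.
    if visited_bb == link_bb:
        return True              # compulsory: never pruned
    return extra_steps_needed(o, visited_bb) < math.ceil(delay(o))

# Toy check: a 1-cycle operation needing one new step is pruned in
# a non-compulsory BB, but kept in its compulsory BB.
need_one = lambda o, bb: 1
print(is_suitable("m", "I", "J", lambda o: 1, need_one))  # False
print(is_suitable("m", "J", "J", lambda o: 1, need_one))  # True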
Splitting the linear-time sequence. Note that an operation is allowed to allocate δ extra time steps to
accommodate its execution inside a visited BB when its assignment to BB i is compulsory (i = j). This will
make space for the scheduling of non-compulsory operations in idle resources. When no ready operation
in π satisfies criterion 1, the constructor stops scheduling BB i and another BB is visited. Observe that the
resulting global schedule is not a linear-time sequence. Instead, the sequence is split each time the traversal
crosses a branch, and the flow of control is kept exposed. Criterion 1 is responsible for splitting the linear-time
sequence, as it decides when to stop scheduling a BB prior to control or data selection. Note that this
decision is based on constraints (resource constraints and control and data dependencies). A counter-example
is the heuristic criterion used by TBS, where the linear-time sequence is split each time a conditional
turns out to be the operation with the highest priority in the ready queue [15].
Example. In figure 6 the same example used in figure 3 is scheduled to illustrate the method. First, we
show in figure 6b how each EXI would be scheduled independently, just by applying the topological permutation
scheduler. Note that EXI 1 = {…, e, d, f, g} is scheduled in five steps and that EXI 2 = {…, g}
is scheduled in 2 steps. Yet, it is not possible to overlap those sequences, because a and b cannot share
the adder (Φa · Φb ≢ 0, as the outcome of conditional t is not available inside the first step). Such a tentative
solution would be non-causal and, as a consequence, infeasible. Even though each path can be scheduled AFAP
for the given π, there is a conflict between them, so that if one sequence is chosen, the other will
be imposed an extra step. Now we show how our constructor generates a feasible solution. In figure
6c the initial links are depicted, while figures 6d to 6k show the evolution of the construction process
for each operation in π. Circles in bold mark the current BB being traversed. Notice in figure 6d that, even
though other ready operations (a, e and b) precede t in π, t is the scheduled one, because it is the only operation
not rejected by criterion 1 (it is compulsory in the current BB). Then a is scheduled (figure 6e) in the
same step, as an idle adder exists. At that point, no other ready operations can be scheduled in that BB, as
they would require the allocation of extra steps (criterion 1). Then another BB is taken (figure 6f), and so
on. Figure 6k shows the final result. It is the same as obtained by scheduling EXI 1 independently (figure
6b), but EXI 2 needs an extra step. Note that if a and b were exchanged in π, the solution in figure 6b for
EXI 2 would be obtained, while EXI 1 would need an extra step. When a conflict happens between paths,
the method solves it in a certain way induced by π, but there exists another permutation π′ which induces
another solution where the conflict is solved in the opposite way (no limitation in the search space).
Observe that the assignment of operations a or b to the first step represents speculative execution. If we
do not allow speculative execution, both EXIs will need an extra step, resulting in schedule lengths of 3 and
6.
Figure 6 - Splitting the linear-time sequence (π = [a, c, d, e, b, f, g, t]): (b) independent schedules of the EXIs, (c) initial links, (d)-(k) evolution of the construction process
Notion of order dominant over notion of time step. As opposed to other approaches [31][18], our method
does not use time as the primary criterion to decide on the position of an operation. Instead, a notion of order
and availability of resources is used. As assignments incorporate a relative-time encoding, time is only
used to manage resource utilization inside BBs.
while C ≠ ∅
  v := topological(C); C := C \ {v}
  j := 1
  while j ≤ |π|
    u := π(j); λ := (u, v)
    if unscheduled(u) ∧ scheduledpreds(u)
      then begin(λ) := asap(λ)
           if isSuitable(λ) then annotate(λ); solveCodeMotion(λ)
    j := j + 1
Algorithm 2 - The solution constructor
The solution constructor is summarized in algorithm 2. π is a permutation, C is the set of BBs, u is an
operation, v is a BB, λ is an assignment and begin(λ) is the starting time of operation u inside BB v. Function
topological(C) returns BBs in an arbitrary topological order. A candidate assignment λ is created for each pair
(u, v) and the condition unscheduled(u) ∧ scheduledpreds(u) is evaluated. If this condition holds,
the earliest step begin(λ) in BB v with a free resource is found. Function isSuitable(λ) decides
whether the candidate assignment λ should be committed or revoked, by checking criterion 1. When all
compulsory operations are scheduled and there is no room for scheduling others, a new BB is taken. Function
solveCodeMotion(λ) inserts compensation code when duplication occurs.
Runtime complexity. Let n be the number of operations in π, b the number of BBs, p the number of
paths and c the number of conditionals. When ready operations are kept in a heap data structure, the search
for the first ready operation in π takes O(log n). As this search may be repeated for each operation and
for each BB, the worst-case complexity of algorithm 2 is O(b·n·log n). The runtime efficiency of our
approach does not depend on p (which can grow exponentially in c), as opposed to path-based methods.
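For instance (a sketch of the data structure only, assuming operations are keyed by their position in π), a binary heap keeps the ready set ordered so that the first ready operation is retrieved in O(log n):

import heapq

class ReadySet:
    # Ready operations keyed by their position in the permutation,
    # so "the first ready operation in pi" is a heap pop.
    def __init__(self, pi):
        self.pos = {op: i for i, op in enumerate(pi)}
        self.heap = []

    def add(self, op):               # an operation became ready
        heapq.heappush(self.heap, (self.pos[op], op))

    def first(self):                 # O(log n) retrieval
        return heapq.heappop(self.heap)[1] if self.heap else None

rs = ReadySet(["a", "c", "d", "e", "b", "f", "g", "t"])
for op in ("e", "t", "a"):
    rs.add(op)
print(rs.first())   # 'a': the earliest ready operation in pi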
6.3 Why code-motion pruning does not discard better solutions
We illustrate here why the application of code-motion pruning does not discard any better solution.
Our goal is to provide an outline for the proof in Appendix I. Starting from an original solution Sm induced by a
permutation π, we try to construct a better solution Sn for the same π.
In figures 7 and 8, operations a1 to a4 are additions, s1 to s3 are subtractions and m1 is a multiplication
(assume one resource of each type). A grey entry in the resource utilization vector (RUV) means either that a
resource was occupied by some other operation or that it was not occupied due to a data dependency. (These grey
fields are used here to abstract from other operations, so that we can concentrate on a certain scope.)
In figures 7a and 8a we show different solutions which were generated by our constructor. This means
that Sm was constructed following criterion 1. For example, operation m1 was not scheduled inside BB
P, so criterion 1 must have prevented it from being scheduled in P. Note that the empty fields in Sm
mean that other operations could not be scheduled in the idle resources due to data dependencies.
Out of each solution Sm, we construct a new solution Sn, as shown in figures 7b and 8b, by allowing
m1 to boost into BB P, where it allocates exactly δ steps. This will make room for operations from other
BBs to move up. We consider two different scenarios for code motion, and we show that if m1 were
allowed to boost into BB P, no better result would be reached in terms of schedule lengths.
Figure 7 - First scenario (π = […, a4, a1, a2, a3, …])
In figure 7, we assume that operation a4 can move up from BB R into the allocated steps. Notice that,
even though a4 precedes a1 and a2 in the permutation, a4 could not be scheduled at the same time step
either with s1 or with s2 (figure 7b), because this was not possible in the original solution Sm, which means
that the scheduler engine must have detected data dependencies between them. Moreover, a3 could not
be moved into the δ steps allocated in BB P. As a result, the number of steps freed is matched
by the number of allocated steps (δ) and no path is shortened. Note that in figure 7, even though we
have optimistically assumed that the boosted operations completely free the steps from which they
have moved, no better solution could be reached.
In figure 8 we illustrate a case where path P → Q → R is shortened. This was possible because operation
a4 has moved to the first step of BB P and has filled an entry not occupied by the other operations
(figure 8b). Notice that s1 was scheduled in the same time step as a4. However, this was not possible in
Sm, as indicated by the empty "+" field at the first step of BB Q in figure 8a, which means that the scheduler
has detected a data dependency between them. As a data dependency was violated by the code motion, solution
Sn is infeasible.
Figure 8 - Second scenario (π = […, a4, a1, a2, a3, …])
These examples suggest that, for a given permutation, it is not possible to obtain a feasible solution with
shorter paths than those in the solution generated by our solution constructor. The feasible solutions which
could be obtained are at best as good as the constructed one. The underlying idea illustrated here is that,
instead of allowing arbitrary code motions generated by the topological-permutation scheduler
engine, only solutions in which criterion 1 is obeyed are constructed. This leads to the notion of code-motion
pruning. Since the application of criterion 1 does not prune any better solutions (Appendix I) and
topological-permutation construction guarantees that at least one permutation returns the optimal schedule length
[13], we conclude that this code-motion pruning keeps at least one optimal solution in the search space.
7 FEATURES OF OUR APPROACH
This section summarizes the main features of our approach and is organized as follows. The first two
subsections show how we support constraints imposed by the advance choice of a controller. The third subsection
explains how our method generates only complete and causal solutions. The last subsection
describes the types of code motion supported in our approach.
7.1 Supporting pipelined-control delay
It has been shown [17] that most approaches found in the literature assume a fixed controller architecture
and that they would produce infeasible solutions under different controller architectures. One such
constraint is the limited branch capability of some controllers (see the next subsection). Another is imposed
by the pipelining of the controller.
When pipeline registers are used to reduce the critical path through the controller and the data path [16],
there is a delay between the time step where the conditional is executed and the time step where its guard
value is allowed to influence the data path. It is as if the guard value were not available within a time slot
after the completion of the respective conditional. Figure 9a illustrates the effect of pipelined-control
delay. Assume a single adder, a pipelined-control delay of 2 cycles, and that the value of guard g1 becomes
available only after the delay slot (its conditional is executed early enough and is not shown in the figure).
Figure 9 - Effect of pipelined-control delay: (a) schedule, (b) static and dynamic predicates
Algorithm 1 takes the effect of the delay slot into account. We illustrate its application in figure 9b, where static
and dynamic predicates are shown. Only operations d and e can conditionally share the adder, as
Φd · Φe ≡ 0. Note also that the operations shown in grey are speculatively executed with respect to conditional c2.
This example emphasizes the importance of speculative execution in filling the time slots introduced by the
pipeline latency.
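Reusing the predicate sketch of section 5.1 (illustrative values only), a 2-cycle delay slot postpones the availability of a guard:

# As in the sketch of section 5.1: keep a guard literal only when
# its conditional has completed early enough.
def dynamic_predicate(G, step, end, slot=0):
    return frozenset((g, p) for (g, p) in G if end[g] + slot <= step)

end = {"g1": 1}                  # conditional completes at step 1
G = frozenset({("g1", True)})
print(dynamic_predicate(G, step=2, end=end, slot=0))  # g1 kept
print(dynamic_predicate(G, step=2, end=end, slot=2))  # g1 smoothed out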
7.2 Supporting limited branch capability
When simple controllers are used (e.g. for the sake of retargetable microcode), state transitions in the
underlying FSM are limited by the branch capability of the chosen controller.
Figure 10 illustrates the problem. In figure 10a we show the static predicates for the operations in the
example. Two different schedules are presented in figures 10b and 10c (1 comparator, 1 "white" resource
and 1 "grey" resource are available).
Figure 10 - Effect of limited branch capability: (a) static predicates, (b) schedule assuming 4-way branching, (c) schedule with 2-way branching
The schedule in figure 10b implicitly assumes a 4-way branch capability, as shown by the state machine
graph. If we delay the execution of conditional c2 by one cycle, we obtain the schedule in figure 10c,
which requires only 2-way branch capability and where n1 and n2 represent the duplication of operation
n. Observe that operation n can share the "white" resource with operation l on only one of the two paths;
when the other path is taken, n must be delayed one cycle, which makes the overall schedule
length in figure 10c longer than in figure 10b.
As suggested by the example, our method can handle limited branch capabilities while building solutions.
If the controller limits the branch capability to a value k, where k = 2^n, the constructor will allow at most
n conditionals at the same time step. This is similar to the technique presented in [16].
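A minimal sketch of the corresponding constructor check (our illustration):

def fits_branch_capability(conditionals_in_step, k):
    # With k-way branch capability, k = 2**n, allow at most n
    # conditionals to complete in the same time step.
    n = k.bit_length() - 1       # n = log2(k) for k a power of two
    return len(conditionals_in_step) <= n

print(fits_branch_capability(["c1"], 2))         # True  (2-way)
print(fits_branch_capability(["c1", "c2"], 2))   # False
print(fits_branch_capability(["c1", "c2"], 4))   # True  (4-way)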
7.3 Generating complete and causal solutions
In this subsection we first show how other methods deal with causality and completeness, and we then
illustrate how our method generates only causal and complete solutions.
In path-based approaches [5], completeness is guaranteed by finding a schedule for every path and overlapping
them into a single representative solution. For a given operation o which is control-dependent on
conditional ck, causality is guaranteed by preventing operation o from being executed prior to ck. However,
this leads to a limitation of the model, as speculative execution is not allowed.
The symbolic method in [26] has to accommodate the overhead of a trace validation algorithm, required
to ensure both completeness and causality. Scheduled traces are selected such that they can co-exist, without
any conflict, in the same executable solution.
In our method, completeness is guaranteed by the traversal of BBs: all predicates G associated
with the BBs are investigated, which makes sure that all possible execution conditions are covered without
the need to enumerate paths. Causality is guaranteed by the use of dynamic predicates in checking conditional
resource sharing. This test is performed during (and not after) the construction of a solution, each
time the first ready operation in π attempts to use an already occupied resource. To illustrate this fact, we
revisit an example from [26].
A potential solution generated by the symbolic technique in [26] is shown in figure 11b for the CDFG
in figure 11a. A resource of each type (black, grey and white) is assumed. Labels m1 and m2 (n1 and n2) represent
the duplication of operation m (n). The solution is complete, because each EXI is scheduled such that
resource and precedence constraints are satisfied. However, the solution is not causal, because it cannot
be decided whether k or m1, a or n2 will be executed (the value of g1 is not available at the first step).
In figures 11c and 11d, our method is used to construct two solutions from two different permutations.
Operations k and m in figure 11c, as well as operations a and n in figure 11d, are prevented from being scheduled
at the same step by the check for conditional resource sharing. Observe that Gk · Gm1 ≡ 0 but Φk · Φm1 ≢ 0,
and also that Ga · Gn2 ≡ 0 but Φa · Φn2 ≢ 0.
Figure 11 - Causality by construction: (a) CDFG, (b) a non-causal solution, (c) and (d) solutions built by our method
In our approach, we do not have to perform a trace validation procedure a posteriori, as in [26],
because a test with a similar effect is done incrementally, during the construction of each solution, as
described below.
After scheduling one operation, our method evaluates the dynamic predicates and updates the conditional
resource sharing information before a new operation is processed. If this new operation is detected to have
a conflict, in some trace, with a previously scheduled operation, the scheduling of the new operation is
postponed until a later time step, namely the earliest time step at which the conflict no longer occurs.
Our dynamic evaluation of predicates, combined with the constructive nature of schedules on a per-operation
basis, has the advantage of preventing the construction of non-causal solutions (like the one in figure 11b).
This avoids the enumeration and backtracking of infeasible solutions.
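Schematically (a sketch: can_share is the conditional-resource-sharing test of section 5.1, and occupant_phi is a hypothetical lookup returning the dynamic predicate of the operation holding the resource at a step, or None when the unit is free, which is assumed to happen from some step on):

def earliest_causal_step(phi_of, occupant_phi, t, can_share):
    # Postpone the operation until the first step where the
    # resource is free or can be shared causally
    # (product of dynamic predicates identically 0).
    while occupant_phi(t) is not None \
            and not can_share(phi_of(t), occupant_phi(t)):
        t += 1
    return t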
7.4 Exploiting generalized code motions
In this subsection we summarize the relationship between initial links, final assignments and code
motions. A detailed analysis of code motions can be found in [28].
Figure 12 - Basic code motions: (a) duplication-up, (b) boosting-up, (c) unification, (d) useful code motion
Basic code motions. Figure 12 illustrates code motions in the scope of a single conditional. λf represents
a final assignment and λi an initial link. A circle in bold represents the current BB being traversed. In figure
12a, operation a, which was initially linked to BB D, is assigned to BB B via λf. This motion requires code
compensation, which is performed by inserting assignment λf′. As a result, code duplication takes place.
In figure 12b, the operation is moved across the branch node. This is called boosting-up and may lead to
speculative execution. In figure 12c, operation a, which was initially linked to different mutually exclusive
BBs, succeeds in being scheduled in BB A, leading to a unification of the initial links λi and λi′ into the final
assignment λf. Finally, in figure 12d, operation a is moved between BBs with the same execution condition.
This is called a useful code motion. Even though only upward motions are explicitly shown, downward
motions are implicitly supported in our method, as the initial links encode the maximal freedom for code
motion downwards.
Generalized code motions. Figure 13 shows the generalized code motions supported in our approach.
Arrows indicate possible upward motions from an origin BB to a destination BB. Grey circles illustrate
more local code motions, which are handled by most methods: either they correspond to the basic code
motions of figure 12 or to a few combinations of them. In [28] these combinations are attempted via iterative
improvement inside windows containing a few BBs. Black circles illustrate more global code motions,
also supported in our method. Note that such "compound" motions are determined at once by the permutation
and are not the result of the successive application of basic code motions, as opposed to PS [23]. We do
not search for the best code motions inside a solution; we search for the best solution, whose underlying
code motions induce the best overall cost. As any assignment determined by a permutation may induce a
code motion, unrestricted types of code motion are possible. As a result, the search space is not limited
by any restriction on the nature, amount or scope of code motions.
Figure 13 - Generalized code motions
Nevertheless, the fact that generalized code motions are allowed is not sufficient to guarantee the generation
of high-quality solutions. Constraints should be exploited in order to avoid the generation of inferior
solutions. This is performed by the pruning technique described in section 6.2, whose impact is shown
by the experiments reported in the next section.
8 EXPERIMENTAL RESULTS
The method has been implemented in the NEAT system [12]. We have been using the BDD package
developed by Geert Janssen as a boolean oracle. In the current implementation, a genetic algorithm is used
in the explorer.
Table 1 - Schedule lengths per path for the waka example [31]

         (a)     (b)     (c)
ours     4,4,7   3,4,7   3,4,6
In table 1, our method is compared with others for the example in [31]. The resource constraints and the
adopted chaining are shown at the top of the table. Our results are given in the shadowed row in terms of
the schedule length of each path. Our solution for case (a) is as good as those of TBS and HRA [18]. In case (b) our
method, TBS and HRA reach the same results, which are better than PBS. For case (c), both our method and
TBS are better than HRA and PBS.
For the experiments summarized in the following two tables, the search was performed for several randomly
chosen seeds (used to generate random populations in the explorer), and "CPU" means the average
search time in seconds on an HP9000/735 workstation.
Table 2 - Benchmarks without controller constraints
In table 2 we compare our results with heuristic methods (TBS, CVLS, HRA) and one exact method
(ST). At the top of the table we show the resource constraints for each benchmark. Our results are shown
in the shadowed row and results from other methods are assembled at the bottom. For each result, the schedule
length of the longest path is shown and the average schedule length (assuming equal branch probabilities)
is indicated in parentheses. Note that our method reaches the best published results. A better
average schedule length (2.00) was found for benchmark parker(b). It should be noticed that the exact
method presented in [26] can only guarantee optimality with respect to the schedule length of the longest
path, but not for a function of the schedule lengths of other paths. This is a first indication that our method can
keep high-quality solutions in the search space for a broader class of cost functions.
In table 3 we show our results for benchmarks under the pipelined-control delay constraint. Our
approach reaches the same schedule lengths obtained by the exact method described in [26]. As we are
currently using a local-search algorithm in our explorer, we cannot guarantee optimality. In spite of that,
optimal solutions were reached for all cases within competitive CPU times. Although we certainly need
to perform more experiments, these first results are encouraging. They seem to confirm that our method
is able to find the code motions which induce the better solutions.
Table 3 - Benchmarks with pipelined-control delay (rotor [26], cases (a)-(h); alu: 1-cycle ALU; mult: 2-cycle pipelined multiplier; speculative execution allowed)
We have also performed an experiment to evaluate the impact of code-motion pruning on the search
space. In order to compare a sample of the search space with and without pruning, fifty permutations were
generated randomly and the respective solutions were constructed. In a first comparison, we counted the
number of solutions, distinguishing them only by their overall cost value. Figure 14 shows the results
with (black) and without (grey) pruning. The height of a bar represents the number of solutions counted
for each cost value. In waka(1) and maha(1) we have used one cost function; a different cost function was used
in waka(2) and maha(2). For example, in waka(2), 13 different solutions are identified without pruning,
but only 4 with pruning. This reduction is explained by the fact that, when pruning is applied, more permutations
are mapped onto the same cost. In other words, the density of solutions with the same cost increases
with code-motion pruning.
Figure 14 - Reduction of the number of solutions
A second comparison was performed by looking at the maximal and minimal cost values. It was
observed that the maximal cost values are closer to the minimum when pruning is applied. The difference
between the maximal and minimal cost values is here called the cost range. The compaction of the cost
ranges is shown in figure 15, normalized with respect to the "no pruning" case. The cost-range ratio
"pruning/no pruning" is 0.4 for waka(1) and maha(1), 0.29 for waka(2), and 0.65 for maha(2).
Figure 15 - Compaction on cost range
As the density of solutions with the same cost increases (figure 14) and, simultaneously, the maximal cost
is closer to the minimum (figure 15), we conclude that the density of high-quality solutions increases with
code-motion pruning. This fact suggests a higher probability of reaching (near-)optimal solutions during
exploration, whatever the choice of local-search algorithm might be.
Finally, we have performed experiments on a larger benchmark than those presented in the previous tables.
Benchmark s2r, whose CDFG has about 100 nodes, is borrowed from [26]. It is used under different
sets of resource constraints, as depicted by cases (a) to (h) in the first rows of table 4. To compare the search
spaces with and without pruning, 500 permutations were generated randomly and the respective solutions
were constructed. The procedure was repeated for several randomly chosen seeds so that more accurate
average values could be evaluated. The experiments were performed first without code-motion pruning
and then with it enabled. The same sequence of permutations was used to induce solutions in both cases.
Figure 16 - Compaction on cost range for benchmark s2r
First, we measured the cost ranges for each set of resource constraints; they are summarized in
figure 16, normalized with respect to the "no pruning" case. Note that the cost ranges are reduced to
70% or less of the "no pruning" value when pruning is enabled.
Second, we counted the number of solutions which hit the optimum latency and we evaluated
the average percentage with respect to the total number of solutions. This percentage represents the average
density of optimal solutions in the search space and is presented in the shadowed rows of table 4, where
"nopru" and "pru" stand for "no pruning" and "pruning", respectively. Again, the results show that code-motion
pruning increases the density of high-quality solutions in the search space. They also show that
the tighter the resource constraints are, the more impact code-motion pruning has. These results can be interpreted
as follows. When resources are scarce, as in cases (a), (b), (e) and (f), only a small fraction of
the potential parallelism can be accommodated within the available resources and, as a consequence, more
code motions are pruned, since they would be inefficient. On the other hand, if resources are abundant, as
in cases (c), (d), (g) and (h), most of the potential parallelism is accommodated by the available resources
and there is no need to prune most code motions, as they contribute to the construction of high-quality solutions.
Table 4 - The impact of code-motion pruning on the density of optima

s2r [26]        (a)   (b)   (c)   (d)   (e)   (f)   (g)   (h)
resources       …
latency         …
density nopru   1%    1%    41%   45%   1%    2%    31%   34%
density pru     41%   16%   62%   62%   25%   16%   45%   45%
CPU nopru       26    17    0.9   0.9   24    17    17    1.4
CPU pru         …
alu: 1-cycle ALU; mult: 2-cycle pipelined multiplier; single-port look-up table
Since in the synthesis of ASICs, or in code generation for ASIPs, we want to use as few resources
as possible, in practice we are likely to observe a large imbalance between the potential parallelism and
the exploitable parallelism, which is constrained by the available resources. This fact justifies the use of
our code-motion pruning technique.
In order to compare the impact of the different densities of optima on search time, we also measured
the average time to reach a given optimum. Here we have emulated a kind of random search for the optimal
latencies shown in table 4. The average search times are reported under the entry "CPU". The results show that
code-motion pruning leads to a substantial improvement in search time under tight resource constraints.
9 CONCLUSIONS AND FUTURE WORK
This paper shows how scheduling and code motion can be treated as a unified problem, while optimal
solutions are kept in the search space. The problem was approached from the point of view of an optimization
process in which the construction of a solution is independent of the exploration of alternatives. It was
shown how a permutation can be used to induce unrestricted code motions.
Once the presence of optimal solutions in the search space was guaranteed by a better control over the
construction procedure, we have shown that a pruning technique can ease the optimization process by
exploiting constraints. Since control dependencies represent obstacles to exploiting parallelism, we might be
tempted to violate them as often as possible. However, we conclude that control dependencies should also
be exploited, in combination with resource constraints and data dependencies, in order to detect and prevent
inefficient code motions. Our way of casting the problem, and the experimental results, allow us to conclude
that such code-motion pruning increases the density of high-quality solutions in the search space.
Even though this paper focuses on a problem related to the early phases of a design flow, our constructive
method can accommodate some extensions. As many alternative solutions are generated, it is possible
to extend the constructor to keep track of other issues like register and interconnect usage, and the number
of states. Those issues could then be captured in the cost function. This is convenient especially in the late
phases of a design flow, where optimization has to take several design issues into account.
As future work, we intend to cast loop pipelining into our constructive approach. Loops could be easily
supported in our method with simple extensions, as they can be modeled with conditionals. Loops could
then be "broken" during scheduling and the back edges could be restored later into the state machine graph,
as in [5] and [15]. However, such extensions would not allow the exploitation of parallelism across different
iterations of a loop. For this reason, we prefer to investigate loops as a separate topic of future work.
Appendix I. Proof for the pruning technique
Theorem: Let Sm be a solution of the optimization problem described in section 2.2. Assume that Sm was constructed
with algorithm 2 for a given π, and let o be an operation assigned to BB j in Sm. Let δ be ⌈delay(o)⌉. If a solution
Sn is obtained by moving o from BB j into BB i, and it allocates exactly δ extra time steps to accommodate the
execution of o, then cost(Sn) ≥ cost(Sm).
Proof: Let l(K) and L(K) be the schedule lengths of a BB K before and after the motion, respectively. Let P, Q,
R, S be BBs forming a path pn, and let l(pn) and L(pn) be, respectively, the schedule lengths
of path pn before and after the motion.
a) o was assigned to Q and allocates δ steps inside P, so L(P) = l(P) + δ.
a.1) No operation assigned to R can be moved into the allocated steps, so L(pn) ≥ l(pn).
a.2) There is an operation u assigned to R which can be moved into the allocated steps. As u depends on, or has resource
conflicts with, operations assigned to Q and S (topological-permutation construction), the steps freed in R are matched
by the steps allocated in P, so L(pn) ≥ l(pn).
b) o was assigned to R and allocates δ steps inside both Q and S, so L(pn) ≥ l(pn).
c) o was assigned to R and allocates δ steps inside P. As o does not depend on, but has resource conflicts with,
operations assigned to Q and S (topological-permutation construction), L(pn) ≥ l(pn).
For a given π, solution Sn has path lengths greater than or equal to those in Sm. As any generalized code motion
can be seen as a "compound" motion built out of these basic code motions, and cost is monotonically increasing,
we conclude without loss of generality that cost(Sn) ≥ cost(Sm). □
--R
"Functional Synthesis of Digital Systems with TASS,"
"Area and Performance Optimizations in Path Based Scheduling,"
"Control-Flow Versus Data-Flow-Based Scheduling: Combining Both Approaches in an Adaptive Scheduling System,"
"Efficient Orthonormality Testing for Synthesis with Pass-Transistor Selectors,"
"Path-based scheduling for synthesis,"
"Embedded System Design, "
Synthesis and Optimization of Digital Circuits
"A global resource-constrained parallelization technique,"
"A Data Flow Exchange Standard,"
"Trace Scheduling: A technique for global microcode compaction,"
"Region Scheduling: AnApproach for Detecting and Redistributing Parallelism,"
"NEAT: an Object Oriented High Level Synthesis Interface"
The Application of Genetic Algorithms to High-Level Synthesis
"A Path-based Technique for Estimating Hardware Runtime in HW/SW-cosynthesis"
"A tree-based scheduling algorithm for control dominated circuits,"
"A Unified Scheduling Model for High-Level Synthesis and Code Generation,"
Global Scheduling in High-Level Synthesis and Code Generation for Embedded Processors
"A Scheduling Algorithm for Conditional Resource Sharing - A Hierarchical Reduction Approach"
"Limits of Control Flow on Parallelism,"
"Time constrained Code Compaction for DSPs"
"An Efficient Resource-Constrained Global Scheduling Technique for Superscalar and VLIW processors,"
"Making Compaction-Based Parallelization Affordable,"
"Uniform Parallelism Exploitation in Ordinary Programs"
Combinatorial optimization: algorithms and complexity
"Percolation Based Synthesis,"
"A New Symbolic Technique for Control Dependent Scheduling,"
"Representing conditional branches for high-level synthesis applications,"
"Global Scheduling with Code-Motions for High-Level Synthesis Applications,"
"A Constructive Method for Exploiting Code Motion,"
"Efficient Superscalar Performance Through Boosting,"
"A resource sharing and control synthesis method for conditional branches,"
"Global scheduling independent of control dependencies based on condition vectors,"
--TR
Combinatorial optimization: algorithms and complexity
Region Scheduling
Global scheduling independent of control dependencies based on condition vectors
Representing conditional branches for high-level synthesis applications
Percolation based synthesis
Limits of control flow on parallelism
Efficient superscalar performance through boosting
An efficient resource-constrained global scheduling technique for superscalar and VLIW processors
A tree-based scheduling algorithm for control-dominated circuits
Global scheduling with code-motions for high-level synthesis applications
Time-constrained code compaction for DSPs
A path-based technique for estimating hardware runtime in HW/SW-cosynthesis
Efficient orthonormality testing for synthesis with pass-transistor selectors
Embedded system design
Control-flow versus data-flow-based scheduling
A global resource-constrained parallelization technique
Synthesis and Optimization of Digital Circuits
Making Compaction-Based Parallelization Affordable
A unified scheduling model for high-level synthesis and code generation
A Constructive Method for Exploiting Code Motion
Area and performance optimizations in path-based scheduling
--CTR
Aravind Vijayakumar , F. Brewer, Weighted control scheduling, Proceedings of the 2005 IEEE/ACM International conference on Computer-aided design, p.777-783, November 06-10, 2005, San Jose, CA
Steve Haynal , Forrest Brewer, Automata-Based Symbolic Scheduling for Looping DFGs, IEEE Transactions on Computers, v.50 n.3, p.250-267, March 2001
Apostolos A. Kountouris , Christophe Wolinski, Efficient scheduling of conditional behaviors for high-level synthesis, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.7 n.3, p.380-412, July 2002 | speculative execution;code generation;global scheduling;code motion;high-level synthesis |
329831 | Decision Analysis by Augmented Probability Simulation. | We provide a generic Monte Carlo method to find the alternative of maximum expected utility in a decision analysis. We define an artificial distribution on the product space of alternatives and states, and show that the optimal alternative is the mode of the implied marginal distribution on the alternatives. After drawing a sample from the artificial distribution, we may use exploratory data analysis tools to approximately identify the optimal alternative. We illustrate our method for some important types of influence diagrams. | Introduction
1.1 Decision Analysis by Simulation
Decision Analysis provides a framework for solving decision making problems under uncer-
tainty, based on finding an alternative with maximum expected utility. While conceptually
simple, the actual solution of the maximization problem may be extremely involved, e.g.,
when the probability model is complex, the set of alternatives is continuous, or when a sequence
of decisions is included. Therefore, only particular probability models are studied,
such as the multivariate Gaussian in Shachter and Kenley (1989). Inclusion of continuous
variables in simple problems is carried out through discretization (Miller and Rice 1983,
Smith 1991), through summaries of the first few moments and derivatives (Smith 1993),
or through approximations by means of Gaussian mixtures (Poland 1994). In complicated
problems, there may be no hope for an exact solution method and we may have to turn to
approximate methods, specifically simulation.
As observed in Pearl (1988, p311) and Cooper (1989), in principle any simulation method
to solve Bayesian networks (BN) may be used to solve decision problems represented by influence
diagrams (ID) by means of sequentially instantiating decision nodes and computing
expected values. Cooper notes that, for a given instantiation of the decision nodes, the
computation of the expected value at the value node can be reformulated as a computation
of a posterior distribution in an artificially created additional random node. The problem
of solving BNs is summarized, for example, in Shachter and Peot (1990). Exact algorithms,
e.g. using clique join trees (Lauritzen and Spiegelhalter 1988), cutset conditioning (Pearl
1986) or arc reversal (Shachter 1986, 1988) proved to be intractable in many real-world
networks, leading to approximate inference algorithms based on simulation methods. These
include short run algorithms, such as Logic Sampling (Henrion 1988); Likelihood Weighting
(Shachter and Peot 1990) and its improved modifications, Bounded Variance and AA algorithms
(Pradhan and Dagum 1996); and long run algorithms, using Markov chain Monte
Carlo methods like Gibbs sampling (Pearl 1987, Hrycej 1990, York 1992) or hybrid strategies
(Brewer et al. 1996).
However, as Matzkevich and Abramson (1995) note, we only have a couple of outlines of
simulation methods specifically for IDs in Jenzarli (1995) and Charnes and Shenoy (1996).
Whereas the first one combines stochastic dynamic programming and Gibbs sampling, the
latter simulates iid observations from only a small set of chance variables for each decision
node instead of using the entire distribution. Both become intractable when continuous
decision spaces are included.
In recent statistical literature the same problem, i.e., that of finding the optimal action
in a decision problem, has been considered in Müller and Parmigiani (1996) and Carlin,
Kadane and Gelfand (1998), among others. Again, all these approaches use Monte Carlo
simulation to evaluate the expected utility of given instantiations of nodes.
1.2 Augmented Probability Simulation
In this paper we propose a scheme which differs in important ways from the above mentioned
approaches. Since they use simulation to evaluate expected utilities (losses) for given
instantiations of the decision nodes, they do not accommodate continuous variables, especially
decision variables, unless a discretization is carried out or the probability distributions are in
a conjugate framework. In contrast, we go a step further and define an artificial distribution
on all nodes, including the decision nodes. We show that simulation from this artificial augmented
probability model is equivalent to solving the original decision problem. The specific
strength of the proposed method is its generality. The algorithm can, in principle, accommodate
arbitrary probability models and utility functions, as long as it is possible to pointwise
evaluate the probability density and the utility function for any chosen value of all involved
nodes. Evaluation of the probability density up to a constant factor suffices. The idea of
augmenting the probability model to transform the optimization problem into a simulation
problem is not entirely new. For example, Shachter and Peot (1992) have proposed a similar
approach which involves augmenting the probability model to include the decision nodes and
thus transforms the original optimization problem into a simulation problem. But to the best
of our knowledge the approach described here is the first to solve this simulation problem by
systematically exploiting Markov chain Monte Carlo simulation methods recently developed
in the statistical literature.
The method starts by considering an artificial distribution on the space of alternatives
and states. The distribution is defined in such a way that its marginal on the space of
alternatives is proportional to the expected utility of the alternative and, consequently, the
optimal alternative coincides with the mode of the marginal. Then, the proposed simulation
based strategy follows these steps: (i) draw a sample from the artificial distribution; (ii)
marginalise it to the space of alternatives; and, (iii) find the mode of the sample as a way
of approximating the optimal alternative. A key issue is how to sample from the artificial
distribution. For that we introduce Markov chain Monte Carlo (MCMC) algorithms. See,
for example, Smith and Roberts (1993), Tierney (1994) or Tanner (1994) for a review of
MCMC methods.
Section 2 describes the basic strategy with a simple example. Section 3 is of a more
technical nature and provides generic methods to sample approximately from the artificial
distribution and identify the mode of the sample. Section 4 discusses application examples.
Section 5 compares our method with alternative schemes and identifies situations which call
for different approaches.
2 Basic Approach
Here we outline the basic approach. Assume we have to choose under uncertainty an alternative d from a set A. The set of states θ is Θ. We propose as optimal the alternative d* with maximum expected utility,

    d* = arg max_{d ∈ A} ∫_Θ u(d, θ) p_d(θ) dθ,

where u(d, θ) is the utility function modeling preferences over consequences and p_d(θ) is the probability distribution modeling beliefs, possibly influenced by actions. When the problem is structurally
complicated, say a heavily asymmetric and dense, large influence diagram with continuous
non-Gaussian random variables, non quadratic utility functions and/or continuous sets of
alternatives at decision nodes, finding the exact solution might be analytically and computationally
intractable, and we might need an approximate solution method. We shall provide
such an approximation based on simulation.
Assume that p_d(θ) > 0 for all pairs (d, θ), and u(d, θ) is positive and integrable. Define an artificial distribution over the product space A × Θ with density h proportional to the product of utility and probability, specifically h(d, θ) ∝ u(d, θ) · p_d(θ). Note that the artificial distribution h is chosen so that the marginal on the alternatives is h(d) ∝ ∫_Θ u(d, θ) p_d(θ) dθ. Hence, the optimal alternative d* coincides with the mode of the marginal of the artificial distribution h in the space of alternatives. As a consequence, we can solve the expected utility maximization problem approximately with the following simulation-based strategy: (i) draw a random sample from the distribution h(d, θ); (ii) convert it to a random sample from the marginal h(d); and (iii) find the mode of this sample.
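In display form, the identity behind this strategy reads

    h(d) = ∫_Θ h(d, θ) dθ ∝ ∫_Θ u(d, θ) p_d(θ) dθ = E_{p_d}[u(d, θ)],

so the marginal of the artificial distribution is proportional to expected utility, and

    d* = arg max_{d ∈ A} E_{p_d}[u(d, θ)] = arg max_{d ∈ A} h(d).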
This augmented probability model simulation is conceptually different from other simulation
algorithms reviewed earlier. Simulation is not used to pointwise evaluate expected
utilities for each decision alternative. Instead, simulation generates from the artificial probability model h(·) on the augmented state vector (d, θ).
The key steps are (i) and (iii). For (ii), since we use simulation to generate from h(d, θ), we can get a marginal sample from h(d) by simply discarding the simulated θ values. For
(iii) we rely mainly on tools from exploratory data analysis, as we describe in Section 3.3.
For (i), we shall introduce generic Markov chain simulation methods. Their underlying idea
is simple. We wish to generate a sample from a distribution over a certain space, but cannot
do this directly. Suppose, however, that we can construct a Markov chain with the same
state space, which is straightforward to simulate from and whose equilibrium distribution is
the desired distribution. If we simulate sufficiently many iterations, after dropping an initial
transient phase, we may use the simulated values of the chain as an approximate sample
from the desired distribution. We shall provide several algorithms for constructing chains
with the desired equilibrium distribution, in our case the artificial distribution h, in Section
3.2. In the rest of this section we shall provide an algorithm and a simple example, so that
readers may grasp the basic idea, without entering into technical details. Readers familiar
with MCMC simulation may skip directly to Section 3.
The strategy we propose now is very simple, but may be only undertaken in limited cases.
Suppose the conditional distributions h(d|θ) and h(θ|d) are available for efficient random variate generation. Then, we suggest the following scheme, which is known as the Gibbs sampler in the statistical literature (Gelfand and Smith, 1990): (i) Start at an arbitrary value d^(0) and set i = 1; (ii) generate θ^(i) ~ h(θ | d^(i-1)); (iii) generate d^(i) ~ h(d | θ^(i)); then set i = i + 1 and repeat steps (ii) and (iii) until convergence is judged.
As a consequence of results in Tierney (1994) and Roberts and Smith (1994) we have:
Proposition 1. If the utility function is positive and integrable, p_d(θ) > 0 for all pairs (d, θ), and A and Θ are intervals in IR^n, the above scheme defines a Markov chain with stationary distribution h.
It is impossible to give generally applicable results about when to terminate iterations in
Markov chain Monte Carlo simulations. It is well known that this is a difficult theoretical
problem, see, e.g., Robert (1995) and Polson (1996), who discuss approaches to find the
number of iterations that will ensure convergence in total variation norm within a given
distance to the true stationary distribution. However, practical convergence may be judged
with a number of criteria, see, e.g., Cowles and Carlin (1996) or Brooks and Roberts (1999).
Most of these methods have been implemented in CODA (Best et al, 1995), which we have
used in our examples. Once practical convergence has been judged, say after T iterations, we may record the next N iterations of the simulation output, (d^(T+1), ..., d^(T+N)), and use them as an approximate sample from h(d). From that we may try to assess the mode.
We illustrate the above approach with an artificial example, adapted from Shenoy (1994).
Example 1. A physician has to determine a policy for treating patients suspected of
suffering from a disease D. D causes a pathological state P that, in turn, causes symptom
S to be exhibited. The physician observes whether a patient is exhibiting the symptom (S = 1) or not (S = 0). Based on this information, she either treats the patient (T = 1) (for P and D) or does not (T = 0). The physician's utility function depends on T, P and D, as shown in Table 1. The value 0.001 was changed from the original value (0) to adapt to the general result in Proposition 1. The probability of disease D (D = 1) is 0.1.
Table 1: The probability model p(D, P, S), the physician's utility function u(T, P, D), and h(d, D, S). The probabilities used in steps (ii) and (iii) of the Markov chain Monte Carlo scheme described in the text are proportional to the entries in the appropriate column and row, respectively, of the h section at the right of the table.
For patients known to suffer from D, 80% suffer from P (P = 1). On the other hand, for patients known not to suffer from D (D = 0), 15% suffer from P. The probabilities of exhibiting the symptom for patients known to suffer from P, and for patients known not to suffer from P, complete the model and appear in Table 1. We assume that D and S are probabilistically independent given P. To implement the proposed algorithm, we need to find the conditional distributions h(θ|d) and h(d|θ). In this case, d = (d_1, d_0), where d_1 is the decision taken if the symptom is exhibited, and d_0 if it is not exhibited; d_i = 1 (d_i = 0) means to treat (not to treat) the patient. Let p(D, P, S) denote the probabilities given in the above description, and let h(d, D, S) be proportional to the product of utility and probability, as in the construction above.
Our proposed method goes as follows: (i) Start at an arbitrary decision (d_1^(0), d_0^(0)) and set i = 1; (ii) generate (D, S)^(i) ~ h(D, S | d^(i-1)); (iii) generate d^(i) ~ h(d | (D, S)^(i)); then set i = i + 1 and repeat steps (ii) and (iii) until convergence is judged. Once convergence is judged, we record the next N iterations of the algorithm and use (d^(1), ..., d^(N)) as an approximate sample from the marginal in d of the artificial distribution. We leave out some values between those recorded to avoid serial correlation. Since alternatives are finite in number, we just need to inspect the histogram to approximate the mode. From a simulated sample of size 1000, we find that the optimal decision is d* = (d_1, d_0) = (1, 0), that is, treat if the symptom is present and do not treat if the symptom is absent.
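As an illustration of steps (i)-(iii), the following minimal Python sketch runs the two-block Gibbs sampler for a small discrete problem of this shape. The symptom probabilities given P and the utility table are hypothetical placeholders standing in for the entries of Table 1, so only the mechanics, not the numbers, should be read off this sketch.

import numpy as np

rng = np.random.default_rng(0)

# Probability model p(D, P, S); P(S = 1 | P) is an assumed placeholder.
pD = {1: 0.10, 0: 0.90}              # P(D = 1) = 0.1
pP_D = {1: 0.80, 0: 0.15}            # P(P = 1 | D)
pS_P = {1: 0.70, 0: 0.20}            # P(S = 1 | P), hypothetical

def p_DPS(D, P, S):
    return (pD[D]
            * (pP_D[D] if P == 1 else 1.0 - pP_D[D])
            * (pS_P[P] if S == 1 else 1.0 - pS_P[P]))

def u(T, P, D):                      # hypothetical positive utility u(T, P, D)
    return 1.0 + 5.0 * (T == P) + 2.0 * (T == D)

# h(d, D, S) is proportional to the sum over P of u(T, P, D) p(D, P, S),
# where d = (d0, d1) and T = d1 if the symptom is present (S = 1), T = d0 otherwise.
def h(d, D, S):
    T = d[1] if S == 1 else d[0]
    return sum(u(T, P, D) * p_DPS(D, P, S) for P in (0, 1))

decisions = [(d0, d1) for d0 in (0, 1) for d1 in (0, 1)]
states = [(D, S) for D in (0, 1) for S in (0, 1)]

def draw(items, weights):
    w = np.asarray(weights, dtype=float)
    return items[rng.choice(len(items), p=w / w.sum())]

d = (0, 0)
counts = {dd: 0 for dd in decisions}
for it in range(6000):
    D, S = draw(states, [h(d, Dv, Sv) for (Dv, Sv) in states])   # step (ii)
    d = draw(decisions, [h(dd, D, S) for dd in decisions])       # step (iii)
    if it >= 1000 and it % 5 == 0:                               # burn-in, thinning
        counts[d] += 1

print(max(counts, key=counts.get))   # approximate mode of h(d)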
Note in the example how the proposed augmented probability model simulation differs
from other simulation methods proposed for the solution of IDs. We use one simulation over the joint (θ, d) space to simulate from h(·) instead of many small simulations to evaluate expected utilities for each possible decision one at a time. Of course, the previous example is extremely simple in that we are able to sample from h(d|θ) and h(θ|d), and, by inspection of
the histogram, we may approximate the modes. The next sections deal with more complex
cases.
3 Sampling from the Artificial Distribution
We shall provide here a generic method to sample from the artificial distribution h(·). Typically, this distribution will not be straightforward to simulate from, requiring generation from possibly high dimensional models, including complex probability and utility functions, continuous decision and chance nodes, and possibly conditioning on observed data. MCMC simulation schemes are the most commonly used methods known to accommodate such generality; hence we choose them.
Given the enormous interest in IDs as a tool for structuring and solving decision problems,
see, e.g., Matzkevich and Abramson (1995), we concentrate on such structures. An ID is a
directed graph representation of a decision problem as a probability network with additional
nodes representing decisions and values. For notational purposes, we shall partition the set
of nodes into five subsets, differentiating three types of chance nodes: (i) Decision nodes
d, representing decisions to be made. (ii) Chance nodes, including random variables x
observed prior to making the decision, i.e., data available at the time of decision making;
not yet observed random variables y, i.e., data which will only be observed after making the
decisions; and unobservable random variables θ, i.e., unknown parameters; (iii) One value node u representing the utility function u(d, x, θ, y). Figure 1 provides a simple generic ID for our scheme. An ID is solved by determining a decision d* with maximum expected utility.
Figure 1: A generic influence diagram for our scheme.
This requires marginalizing over all chance nodes (θ, y), conditioning on x, and maximizing
over d. See Shachter (1986) for a complete description and an algorithm to solve IDs.
The method we propose here is applicable to IDs with non-sequential structure, i.e.,
decision nodes must not have any chance nodes as predecessors which have distributions
depending, in turn, on other decision nodes. Except for some technical conditions there will
be no further requirements.
3.1 The Probability Model Defined by Influence Diagrams
An ID defines the conditional distributions p(x|θ), p(θ) and p_d(y|θ), a joint distribution on (x, θ, y) via p_d(x, θ, y) = p(θ) p(x|θ) p_d(y|θ), and a conditional distribution p_d(θ, y | x) ∝ p(θ) p(x|θ) p_d(y|θ) for (θ, y) given the observed nodes x. Typically, x and y are independent given θ, allowing the given factorization, and p(θ) does not depend on d. If a particular problem does not fit this setup, modifications of the proposed algorithm are straightforward. In the context of this probability model, solving the ID amounts to maximizing the expected utility over d, where p_d(θ, y | x) is the relevant distribution to compute this expectation. In summary, solving the ID amounts to finding

    d* = arg max_d ∫ u(d, x, θ, y) p_d(θ, y | x) dθ dy.        (1)

We shall solve this problem approximately by simulation. Augment the probability measure to a probability model for (d, θ, y) by defining a joint p.d.f. h(d, θ, y) ∝ u(d, x, θ, y) p_d(θ, y | x). The mode of the implied marginal distribution h(d) ∝ ∫ u(d, x, θ, y) p_d(θ, y | x) dθ dy corresponds to the optimal decision d*. The underlying rationale of our method is to simulate a Markov chain in (d, θ, y), defined to have h(d, θ, y) as its asymptotic distribution. For big enough t, the simulated values (d_t, θ_t, y_t) of successive states of the simulated process provide, approximately, a Monte Carlo sample from h(d, θ, y). Note that the simulation is defined on an augmented probability model h(d, θ, y) rather than on p_d(θ, y) for each possible instantiation of the actions d, as traditional methods do. By considering the marginal distribution of d_t in this Monte Carlo sample, we can infer the optimal decision using methods such as those discussed in Section 3.3.
The key issue is the definition of a Markov chain with the desired limiting distribution h(·). For that, we capitalise on recent work in numerical Bayesian inference concerning the
application of Markov chain Monte Carlo methods to explore high dimensional distributions
which do not allow analytic solutions for expectations, marginal distributions, etc.
3.2 Markov Chain Monte Carlo Simulation
We shall provide a general algorithm which will be valid for all IDs satisfying the structural
conditions specified above and some minor technical conditions discussed below. The algorithm
we describe is of the Metropolis type (Tierney 1994): we generate a new candidate for
the states from a probing distribution, and then move to that new state or stay at the old
one according to certain probabilities. We do this transition in three steps, for d, θ and y. We only require the ability to evaluate the utility function u(d, x, θ, y) and the probability distributions p_d(y|θ), p(θ), p(x|θ), for any relevant (d, x, θ, y). This will typically be possible,
since the definition of the ID includes explicit specification of these distributions, i.e., the
modeler is likely to specify well-known distributions.
The scheme requires specification of probing distributions g_1, g_2 and g_3. The choice of probing distributions g_j(·|·) is conceptually arbitrary, with the only constraint that the resulting Markov chain should be irreducible and aperiodic. As we shall argue, whenever possible, we assume symmetric probing distributions, i.e., satisfying g(a|b) = g(b|a). For example, g(a|b) could be a multivariate normal distribution N(b, Σ) for some Σ. Details about the choice of probing distribution are discussed in the appendix. We then have:
Algorithm 1.
1. Start at values (d^(0), θ^(0), y^(0)) for decisions, parameters and outcomes, and set i = 1.
2. Let u_1 = u(d^(i-1), x, θ^(i-1), y^(i-1)). Generate a "proposal" d~ ~ g_1(d~ | d^(i-1)). Compute

    a_1 = min{ 1, [u(d~, x, θ^(i-1), y^(i-1)) p_{d~}(y^(i-1) | θ^(i-1))] / [u_1 p_{d^(i-1)}(y^(i-1) | θ^(i-1))] }.

With probability a_1, set d^(i) = d~; otherwise, keep d^(i) = d^(i-1).
3. Let u_2 = u(d^(i), x, θ^(i-1), y^(i-1)). Generate a "proposal" θ~ ~ g_2(θ~ | θ^(i-1)). Compute

    a_2 = min{ 1, [u(d^(i), x, θ~, y^(i-1)) p(θ~) p(x | θ~) p_{d^(i)}(y^(i-1) | θ~)] / [u_2 p(θ^(i-1)) p(x | θ^(i-1)) p_{d^(i)}(y^(i-1) | θ^(i-1))] }.

With probability a_2, set θ^(i) = θ~; otherwise, keep θ^(i) = θ^(i-1).
4. Let u_3 = u(d^(i), x, θ^(i), y^(i-1)). Generate a proposal y~ ~ g_3(y~ | y^(i-1)). Compute

    a_3 = min{ 1, [u(d^(i), x, θ^(i), y~) p_{d^(i)}(y~ | θ^(i))] / [u_3 p_{d^(i)}(y^(i-1) | θ^(i))] }.

With probability a_3, set y^(i) = y~; otherwise, keep y^(i) = y^(i-1).
5. Set i = i + 1. Repeat steps 2 through 4 until the chain is judged to have practically converged.
This algorithm defines a Markov chain with h(d, θ, y) as its stationary distribution. The generality
of this algorithm comes at a price, namely possible slow convergence. Depending
on the application, long simulation runs might be required to attain practical convergence.
However, this fully general algorithm is rarely required.
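To make the mechanics concrete, here is a compact Python sketch of the three-block Metropolis scheme of Algorithm 1 for scalar continuous d, θ and y. The Gaussian model and the utility are illustrative assumptions made for this sketch, not a model from the paper; each block's acceptance ratio keeps only the factors of h that involve the updated component.

import numpy as np

rng = np.random.default_rng(1)

x_obs = 0.5                                  # observed data node x (assumed value)
def log_p_theta(th):   return -0.5 * th**2                      # log p(theta), up to a constant
def log_p_x(th):       return -0.5 * (x_obs - th)**2            # log p(x | theta)
def log_p_y(y, th, d): return -0.5 * (y - th - d)**2            # log p_d(y | theta)
def log_u(d, th, y):   return -0.5 * (y - 1.0)**2 - 0.1 * d**2  # log of a positive utility

def accept(cur_logh, prop_logh):
    # Metropolis: accept with probability min{1, exp(prop - cur)}.
    return np.log(rng.uniform()) < prop_logh - cur_logh

d, th, y = 0.0, 0.0, 0.0
draws = []
for i in range(20000):
    dn = d + 0.5 * rng.standard_normal()     # step 2: p(theta) p(x|theta) cancels
    if accept(log_u(d, th, y) + log_p_y(y, th, d),
              log_u(dn, th, y) + log_p_y(y, th, dn)):
        d = dn
    tn = th + 0.5 * rng.standard_normal()    # step 3: all theta-dependent factors
    if accept(log_u(d, th, y) + log_p_theta(th) + log_p_x(th) + log_p_y(y, th, d),
              log_u(d, tn, y) + log_p_theta(tn) + log_p_x(tn) + log_p_y(y, tn, d)):
        th = tn
    yn = y + 0.5 * rng.standard_normal()     # step 4: only u and p_d(y|theta)
    if accept(log_u(d, th, y) + log_p_y(y, th, d),
              log_u(d, th, yn) + log_p_y(yn, th, d)):
        y = yn
    if i >= 5000:
        draws.append(d)

hist, edges = np.histogram(draws, bins=50)   # approximate the mode of h(d)
print(0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1]))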
Many problems allow simpler algorithms based on using p(θ|x) and p_d(y|θ) to generate proposals. Algorithm 2, given below, only requires a probing distribution g(d~|d) for d, evaluation of the utility function, and algorithms to generate from p(θ|x) and p_d(y|θ). While simulating from p_d(y|θ) is typically straightforward, simulating from p(θ|x) is not. In general, this distribution will not be explicitly specified in the ID, but needs to be computed through repeated applications of Bayes formula, or several arc reversals in the language of IDs. However, note that simulating from p(θ|x) amounts to solving the statistical inference problem of generating from the posterior distribution on θ given the data x. Hence, we can appeal to versions of posterior simulation schemes appropriate for a variety of important inference problems recently discussed in the Bayesian literature (see, e.g., Smith and Roberts 1993; Tanner 1994; and Tierney 1994). Before starting the algorithm described below, we generate a sufficiently large Monte Carlo sample from p(θ|x) by whatever simulation method is most appropriate.
Algorithm 2.
1. Start at values (d^(0), θ^(0), y^(0)) and set i = 1.
2. Evaluate u^(i-1) = u(d^(i-1), x, θ^(i-1), y^(i-1)).
3. Generate (d~, θ~, y~) ~ g(d~ | d^(i-1)) p(θ~ | x) p_{d~}(y~ | θ~).
4. Evaluate u~ = u(d~, x, θ~, y~).
5. Compute

    a = min{ 1, u~ / u^(i-1) }

(for symmetric g, the probability factors of the proposal cancel against those of h, leaving the utility ratio).
6. With probability a, set (d^(i), θ^(i), y^(i)) = (d~, θ~, y~); otherwise, keep (d^(i), θ^(i), y^(i)) = (d^(i-1), θ^(i-1), y^(i-1)).
7. Set i = i + 1 and repeat steps 2 through 6 until convergence is practically judged.
In step 3, generation of θ~ ~ p(θ|x) is done using the simulated Monte Carlo sample generated beforehand.
Algorithm 3. The algorithm simplifies if x is missing in the ID, i.e., if no data is given at the time of the decision. The associated Algorithm 3 would be stated as Algorithm 2, with the proposal distribution in step 3 replaced by (d~, θ~, y~) ~ g(d~ | d) p(θ~) p_{d~}(y~ | θ~). Sampling from p(θ) and p_d(y | θ) will be feasible in general, since these distributions are defined explicitly in the ID.
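For concreteness, here is a compact Python sketch of Algorithm 3 for an illustrative Gaussian toy model (the prior, the predictive and the utility below are assumptions made for the sketch, not a model from this paper): proposals are drawn from the prior and the predictive, so the acceptance probability reduces to the utility ratio.

import numpy as np

rng = np.random.default_rng(2)

def utility(d, th, y):
    # Positive utility, assumed for illustration: prefer y near 1 and small |d|.
    return np.exp(-0.5 * (y - 1.0)**2 - 0.1 * d**2)

# Initialize with theta ~ p(theta) and y ~ p_d(y | theta).
d = 0.0
th = rng.standard_normal()
y = th + d + rng.standard_normal()
u_cur = utility(d, th, y)

draws = []
for i in range(20000):
    dn = d + 0.5 * rng.standard_normal()       # d~ ~ g(.|d), symmetric
    tn = rng.standard_normal()                 # theta~ ~ p(theta)
    yn = tn + dn + rng.standard_normal()       # y~ ~ p_{d~}(y | theta~)
    if rng.uniform() < min(1.0, utility(dn, tn, yn) / u_cur):   # a = min{1, u~/u}
        d, th, y = dn, tn, yn
        u_cur = utility(d, th, y)
    if i >= 5000:
        draws.append(d)

# Read the approximate optimal decision off the histogram peak of h(d).
hist, edges = np.histogram(draws, bins=50)
print(0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1]))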
3.3 Finding the Optimal Solution
The MCMC simulation provides us with an approximate simulated sample {(d_t, θ_t, y_t), t = 1, ..., N} from h(d, θ, y), from which we deduce an approximate sample (d_1, ..., d_N) from the marginal h(d). The mode of h(d) is an approximation of the optimal alternative.
In the case of discrete alternatives, the problem is simple since we only have to count the
number of times each element has appeared, and choose the one with the highest frequency.
It may be worthwhile retaining not one but several of the most frequent decisions, and study
them in further detail, as a way of conducting sensitivity analysis.
In the case of continuous alternatives, as a first approach we may use graphical exploratory
data analysis tools, especially with low dimensional decision vectors. When the
decision vector d is one- or two dimensional, we may produce the histogram (or a smooth
version) and inspect it to identify modes. For higher dimensional decision vectors d, we
propose to consider the problem as one of cluster analysis. Modes of h(d) correspond to d's
with higher density, which suggests looking for regions with higher concentration of sampled
d's. This leads us to compute a hierarchical cluster tree for the simulated points d_t. Since we are assuming h to be a density with respect to Lebesgue measure in IR^n, and we are
interested in identifying regions where the optimal alternative might lie, we suggest using
complete linkage with Euclidean distance. Once we have a classification tree, we cut at a
certain height and obtain the corresponding clusters. The location of the largest cluster
indicates the area of the best decision. Again, as before, it may be useful to keep several
larger clusters and explore the corresponding regions. The result of course would depend
on the cutting height, but by exploring several heights we may be able to identify several
decisions of interest. We illustrate the approach in Section 4.2.
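One possible realization of this clustering step in Python, using SciPy's hierarchical clustering (the cut height and the synthetic data are illustrative choices):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def mode_by_clustering(d_samples, cut_height):
    # Complete linkage with Euclidean distance, as suggested above.
    Z = linkage(d_samples, method='complete', metric='euclidean')
    labels = fcluster(Z, t=cut_height, criterion='distance')
    sizes = np.bincount(labels)
    biggest = np.argmax(sizes[1:]) + 1        # fcluster labels start at 1
    members = d_samples[labels == biggest]
    # The location of the largest cluster indicates the area of the best decision.
    return members.mean(axis=0), members

# Tiny synthetic check: two blobs of simulated d's; the denser blob wins.
rng = np.random.default_rng(4)
d_samples = np.vstack([rng.normal(0.0, 1.0, size=(300, 4)),
                       rng.normal(8.0, 1.0, size=(100, 4))])
center, _ = mode_by_clustering(d_samples, cut_height=10.0)
print(center)   # near the origin blob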
4.1 Example 2: A Medical Decision Making Problem
We illustrate the algorithm with a case study concerning the determination of optimal apheresis designs for cancer patients undergoing chemotherapy. Palmer and Müller (1998) describe the clinical background and solve the problem by large scale Monte Carlo integration.
Between a pre-treatment and start of chemotherapy, stem cells (CD34) are collected to
allow later reconstitution of white blood cell components. Depending on the pre-treatment,
the first stem cell collection process (apheresis) is scheduled on the fifth or seventh day after
pre-treatment. A decision is to be made on which days between pre-treatment and treatment
we should schedule stem cell collections so as to (i) collect some target number of cells; and
(ii) minimize the number of aphereses. We have data on I = 22 past patients, and for the
first day of the new patient.
Let y_ij denote the observed CD34 count for patient i on day t_ij. Also, y_i shall designate the i-th patient's data and y = (y_1, ..., y_I) the combined data vector. Palmer and Müller (1998) specify the following probability model
for this process. The likelihood is based on the observation that the typical profile of stem
cell counts over days shows first a rise after pre-treatment, reaches a maximum, and then
slowly declines back towards the base level, as shown in Figure 2. To model such shapes we
use a nonlinear regression model. Let g(t; e, s) be a Gamma probability density function with parameters chosen to imply mean and variance matching e and s^2, rescaled by an appropriate scale factor. We use g(·; e, s) to parametrize a nonlinear regression for the profiles through time of each patient.
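Moment matching a Gamma density to mean e and variance s^2 gives shape e^2/s^2 and rate e/s^2; a small Python sketch of such a profile curve follows (the rescaling factor here is a hypothetical stand-in for the model's scale constant):

import numpy as np
from scipy.stats import gamma

def profile(t, e, s, scale=1.0):
    # Gamma pdf with mean e and variance s**2: shape a = e**2/s**2, rate b = e/s**2.
    a = e**2 / s**2
    b = e / s**2
    return scale * gamma.pdf(t, a, scale=1.0 / b)

t = np.linspace(0.1, 14.0, 100)            # days after pre-treatment
y = profile(t, e=6.0, s=2.5, scale=500.0)  # rises, peaks, declines to base level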
The prior model on the patient specific parameters is hierarchical. Patient i undergoes one of two possible pre-treatments x_i ∈ {1, 2}, which serves as a covariate to specify the first level prior: θ_i ~ N(η_{x_i}, V). The hyperprior at the second level is common for both cases, j = 1, 2. The model is completed with a prior on V and σ.
Figure 2 shows observed counts y_ij and fitted profiles for three typical patients. For a new patient I + 1, denote with y_{I+1} the (unknown) stem cell counts on days t_{I+1,1}, t_{I+1,2}, .... For a first day t_0, we already have a count y_0. Using the notation introduced at the beginning of Section 3, x is the observed data vector, y_{I+1} is the future data vector, and the θ_i are the unobservable parameters in the model.

Figure 2: Three typical patients. The dashed lines connect the data points. The solid curve plots the fitted profile, using the described probability model.

Given the typical profile, the optimal decision will schedule aphereses for all days between some initial day d_0 and a final day d_1, i.e., the decision parameter is d = (d_0, d_1).
Let A be the event of failing to collect a target number y* of stem cells over the scheduled collections, where L_h is the volume of blood processed at each stem cell collection for the new patient. Let n_a = d_1 − d_0 + 1 be the number of scheduled stem cell collections. The utility function is u(d, x, θ, y) = −c · n_a − p · I(A), where c is the sampling cost and p a penalty for underachievement of the target. We need to maximize over d the expected utility ∫ u(d, x, θ, y) p(θ, y | x) dθ dy. Note that the probability model p(θ) p(x, y | θ) does not depend on the decision nodes, but there is data x influencing the belief model. Since p(θ | x) may be actually sampled with a Markov chain Monte Carlo method described in Palmer and Müller (1998), we use Algorithm 2 to solve the problem. To ensure a positive utility function we add a constant offset to u(·).
We found the optimal design d* for a future patient with the above belief and preference model with the penalty set at 10.0. For a patient undergoing treatment x = 1 with first observation on day 5, the optimal apheresis schedule for the remaining six days was found to be given by the mode of h(d). Since the decision space is two dimensional, we can do this by a simple inspection of the histogram. Figure 3 plots the estimated distribution h(d).
Figure 3: The grey shades show a histogram of the simulated d_t for the medical problem. Inspection of h(d) reveals the optimal decision at the mode d*.
4.2 Example 3: A Water Reservoir Management Problem
In Ríos Insua et al (1997), we describe a complex multiperiod decision analysis problem
concerning the management of two reservoirs: Lake Kariba (K) and Cahora Bassa (C).
Here we solve a simplified version using the proposed MCMC approach to simulate from the
augmented probability model.
We want to find, for a given month, optimal values to be announced for releases from K and C through turbines and through spillgates. The actual amounts of
water released depend on the water available, which is uncertain, since there is uncertainty
about inflows i k and i c to the reservoirs. There is a forecasting model for both i k and i c ,
the latter being dependent on the water released from K and the incremental inflows (inc),
which, in turn, depend on a parameter fi. The preference model combines utilities for both
K and C. Those for K depend on the energy deficit (def), the final storage (sto k ) and the
amount of water spilled (spi). Those for C depend on the energy produced (ene) and the
final storage (sto c ). Initial storages s k and s c have influence as well over actual releases.
Figure 4 shows the influence diagram representing the problem. Nodes with double border are either known values or deterministic functions of their predecessors. They are required to compute the value node u, but will not show up in the probability model.

Figure 4: Influence diagram for the reservoir problem.

In terms of
our notation, the problem includes four decision nodes (the announced turbine and spillgate releases from K and C) and chance nodes i_k and β.
Figure 5 shows some profiles of the histogram of the simulated d_t ~ h(d), generated by Algorithm 3. The decision parameter is four dimensional. Hence we used a four dimensional grid of cells to record a four dimensional histogram of the simulated states. Simple inspection of the empirical distribution allows us to read off the optimal release at d* = {..., 200}. The solution is based on 100,000 simulated values from the Markov chain Monte Carlo scheme. Figure 5 also illustrates another feature of our method, which is a simple sensitivity analysis procedure at no extra cost. The darkness of Figure 5b suggests that expected utility is rather flat when releases through
turbines are fixed at their optimal values, hence suggesting insensitivity with respect to
changes in spill. On the other hand, Figure 5a, with just one dark area where the estimated
optimum is, suggests that expected utility is fairly peaked in release through turbines, and
hence very sensitive to changes in energy releases.
Alternatively, as discussed in Section 3.3, we consider a hierarchical cluster tree of the
simulation output. The dots in Figure 5 show the solution based on cutting a hierarchical
cluster tree of 1000 simulated values d ~ h(d) at height 2000 and finding the cluster with the most members. The optimum is found at d* = {..., 476}. This comes reasonably close to the optimum estimated earlier.
5.1 Comparison with Alternative Schemes
The scheme described in Algorithms 1, 2 and 3 transforms the original expected utility maximization
problem (1) into a simulation problem. Our scheme is very generic, in the sense
of accommodating arbitrary probability models, be they discrete or continuous, and utility functions, as long as the probability density (or probability mass function) and the utility function are pointwise evaluable.

Figure 5: (a) Expected utility as a function of release for energy, with spill fixed at the optimal levels; and (b) as a function of spill, with release through turbines fixed at the optima. The diamond indicates the optimal decision. The dots indicate the simulations in the largest cluster of the hierarchical clustering tree cut at height 2000.

The main difference with other simulation schemes earlier
considered in the literature is that instead of using simulation to evaluate expected utilities
(losses) for each possible instantiation of decisions, we use simulation from an artificial
auxiliary model which augments the original probability model to include an artificial distribution
over decision nodes. Whether one or the other approach is more efficient depends on
the specifics of the considered decision problem. No general comparisons are possible. Even
in specific examples, performance will depend heavily on arbitrary choices like the amount
of discretization, which is necessary for many methods; run length of the involved Monte
Carlo simulations; chosen MCMC scheme, etc. However, some general observations about
the relative efficiency of the methods are possible.
In problems with few alternatives, analytic solutions using methods like arc reversal
(Shachter 1986), and simulation methods which use simulation to pointwise evaluate expected
utilities, like Likelihood Weighting (Shachter and Peot 1990), are typically more
efficient than simulation over the auxiliary probability model. Bielza and Shenoy (1998) discuss
a decision problem (the "reactor problem") with 6 possible actions, and chance nodes
with less than 10 possible joint outcomes. An exact solution using Shachter's (1986) algorithm
requires one arc reversal and the largest state space used during the solution phase
contains 4 variables. By comparison, we implemented the same example using augmented
probability simulation, following Algorithm 3. We used 100,000 iterations in the MCMC
simulation. The computational effort of one iteration is approximately comparable to one
arc reversal. Thus the exact solution is clearly far more efficient in terms of computing time.
Alternatively, consider simulation to compute the expected utility of each of the six possible
actions, using, for example, Likelihood Weighting. Considering the involved numerical standard
errors, Monte Carlo simulation sizes of around 1000 simulations for each alternative
decision would be adequate. Thus, also Likelihood Weighting dominates simulation from the
augmented probability model.
In problems where the optimal decision is to be computed conditional on some already
available data x the comparison changes, especially if the posterior distribution of the unknown
parameters is significantly different from the initial prior distribution, i.e., under low
prior probability for the evidence x. Consider, for example, the application reported in Section
4.1, which is not amenable to exact methods. Using Monte Carlo simulation to compute
expected utilities for alternative decisions, we can no longer generate independent samples.
Following Jenzarli's (1995) proposal we could use Gibbs sampling to compute expected util-
ities. Depending on the specific choices of the implemented MCMC scheme and termination
criteria, one would typically use on the order of 10,000 iterations (Palmer and M-uller 1998).
Discretizing the sample space, one could, in principle, also use Logic Sampling (Henrion,
1988). However, Logic Sampling would not be advisable for this problem since the fraction
of simulated experiments which generate variables corresponding to the actual observations
would be close to zero (i.e., p(x) ≈ 0 in the notation of Shachter and Peot, 1990). For
similar reasons, Likelihood Weighting (Shachter and Peot 1990) would fail. Since only leaf
nodes are observed, the sample scores would be proportional to the likelihood function, i.e.,
the scheme would amount to importance sampling using the prior probability model as importance
sampling function. This can, however, be addressed using bounded variance type
algorithms as discussed, for example, in Pradhan and Dagum (1996).
Finally, many decision problems involve continuous decision variables, like the example
considered in Section 4.2. Continuous decision parameters create no problem for simulation
from the augmented probability model, but would not allow a straightforward application
of any scheme based on evaluating expected utilities for one decision at a time. Even if discretization were used, say on a grid, the resulting number of alternative actions renders such schemes difficult to use.
5.2 Conclusion
Complex decision problems may render impossible the application of exact methods to obtain
optimal decisions. As a consequence, we should look for approximation methods, including
simulation.
We have proposed a simulation based strategy for approximating optimal decisions in a
decision analysis. Our experiments and examples suggest that this approach may be very
powerful. Implementation of the algorithms is fairly straightforward based on the schemes
provided. Specific cases may require simple modifications such as the ones suggested in
Section 3.2. The exploration of the sample in search for modes may be done with standard
statistical software. As we mentioned in the discussion of Example 3, one feature of our
method is the provision of simple sensitivity analysis features, at no extra cost.
A number of challenging problems remain, particularly perhaps, the extension of our
scheme to sequential decisions. The straightforward approach of expanding the model to
non-sequential normal form may only be applied when the number of decision nodes is small.
Another challenging problem would be to develop a computational environment based on
our approach. It would be also interesting to develop further methods to look for modes in
multivariate settings.
Similar ideas may be pursued to solve traditional statistical optimal design problems.
From a formal point of view, an optimal design problem can be described as a stochastic
optimization problem (1). This is explored in Clyde, Müller and Parmigiani (1995) for the
special case of Algorithm 3 with continuous sample spaces and non-sequential setup.
Appendix
Implementation
The choice of the probing distributions g j (:j:) in Algorithm 1 is conceptually arbitrary, with
the only constraint that the resulting Markov chain be irreducible and aperiodic.
In the statement and proofs of the proposed algorithms, we assumed g to be symmetric in its arguments, i.e., g(a|b) = g(b|a). If d is a continuous parameter, we propose to use a normal kernel g(d~|d) = N(d, Σ) with an appropriately chosen covariance matrix Σ, for example, a diagonal matrix with diagonal entries corresponding to reasonable step sizes in each of the decision parameters. Good values for the step size can be found by trial and error with a few values. In a particular setup, Gelman, Roberts and Gilks (1996) show that the optimal choice of step size should result in average acceptance probabilities around 25%; and similarly for the other parameters.
If d is discrete, a simple choice for g(d~|d) could generate, with probability 0.5 each, one of two neighboring values of d. Of course, many other problem specific choices are possible. In Example 2, e.g., we define g(d~|d) by choosing with probability 1/6 one of six possible moves: (i) increase d_0 and d_1 by 1 day; (ii) decrease d_0 and d_1 by 1; (iii) increase d_0 by 1; (iv) decrease d_0 by 1; etc.
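For instance, the two proposal kernels just described might look as follows in Python (the step sizes and the move list are illustrative choices):

import numpy as np

rng = np.random.default_rng(3)

def propose_continuous(d, step):
    # Symmetric normal kernel g(d~|d) = N(d, diag(step**2)).
    return d + step * rng.standard_normal(d.shape)

def propose_discrete(d):
    # Symmetric random walk over schedules d = (d0, d1): six moves,
    # each chosen with probability 1/6; each move and its inverse appear.
    moves = [(1, 1), (-1, -1), (1, 0), (-1, 0), (0, 1), (0, -1)]
    m = moves[rng.integers(len(moves))]
    return (d[0] + m[0], d[1] + m[1])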
Should symmetry of g be violated, an additional factor g(d|d~)/g(d~|d) would be added in the expressions for the acceptance probabilities. This would correspond to Metropolis-Hastings steps, rather than Metropolis steps. Convergence proofs for the proposed scheme are simple, based on results in Tierney (1994) and Roberts and Smith (1994).
Acknowledgments
Research supported by grants from the National Science Foundation, CAM, CICYT and the Iberdrola Foundation. Parts of it took place while Peter Müller was visiting UPM and David Ríos Insua was visiting CNR-IAMI. We are grateful for discussions with Mike Wiper.
--R
"A comparison of graphical techniques for asymmetric decision problems"
"A comparison of hybrid strategies for Gibbs sampling in mixed graphical models,"
"Assesing convergence of Markov chain Monte Carlo algo- rithms,"
"Approaches for optimal sequential decision analysis in clinical trials,"
"A forward Monte Carlo method for solving influence diagrams using local computation,"
"Exploring expected utility surfaces by Markov changes,"
"A method for using belief networks as influence diagrams,"
"Markov chain Monte Carlo convergence diagnostics: a comparative review,"
"Efficient Metropolis jumping rules,"
Bayesian Statistics 5
"Sampling based approaches to calculating marginal densities,"
"Propagating uncertainty in bayesian networks by probabilistic logic sampling"
"Gibbs sampling in Bayesian networks,"
Solving Influence Diagrams using Gibbs sampling
"Local computations with probabilities on graphical structures and their application to expert systems,"
"Decision analytic networks in Artificial Intelligence,"
"Discrete approximations of probability distributions,"
"Optimal design via curve fitting of Monte Carlo experi- ments,"
"Bayesian Optimal Design in Population Models of Hematologic Data,"
"Fusion, propagation and structuring in belief networks,"
Probabilistic reasoning in intelligent systems
Decision Analysis with continuous and discrete variables: a mixture distribution approach
"Convergence of Markov chain Monte Carlo algorithms,"
"Optimal Monte Carlo estimation of belief network inference,"
"Bayesian methods in reservoir opera- tions: the Zambezi river case,"
"Convergence control methods for Markov chain Monte Carlo algorithms,"
"Simple conditions for the convergence of the Gibbs sampler and Metropolis-Hastings algorithms,"
"Probabilistic inference and influence diagrams,"
"Gaussian Influence Diagrams,"
"Simulation approaches to general probabilistic inference on belief net- works,"
"Decision making using probabilistic inference methods,"
"A comparison of graphical techniques for decision analysis,"
"Bayesian computational methods,"
"Moment methods for Decision Analysis,"
"Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods,"
Tools for Statistical Inference
"Markov chains for exploring posterior distributions (with discussion),"
"Use of the Gibbs sampler in expert systems,"
--TR | simulation;decision analysis;influence diagrams;optimal design;Markov Chain Monte Carlo |
330333 | Probabilistic Loop Scheduling for Applications with Uncertain Execution Time. | Abstract: One of the difficulties in high-level synthesis and compiler optimization is obtaining a good schedule without knowing the exact computation time of the tasks involved. The uncertain computation times of these tasks normally occur when conditional instructions are employed and/or inputs of the tasks influence the computation time. The relationship between these tasks can be represented as a data-flow graph where each node models the task associated with a probabilistic computation time. A set of edges represents the dependencies between tasks. In this research, we study scheduling and optimization algorithms taking into account the probabilistic execution times. Two novel algorithms, called probabilistic retiming and probabilistic rotation scheduling, are developed for solving the underlying nonresource and resource constrained scheduling problems, respectively. Experimental results show that probabilistic retiming consistently produces a graph with a smaller longest path computation time for a given confidence level, as compared with the traditional retiming algorithm that assumes fixed worst-case and average-case computation times. Furthermore, when considering the resource constraints and probabilistic environments, probabilistic rotation scheduling gives a schedule whose length is guaranteed to satisfy a given probability requirement. This schedule is better than schedules produced by other algorithms that consider worst-case and average-case scenarios. | Introduction
In many practical applications such as interface systems, fuzzy systems, artificial intelligence systems, and others, the required tasks normally have uncertain computation times (called uncertain or probabilistic tasks for brevity). Such tasks normally contain conditional instructions and/or operations that could take different computation times for different inputs. A dynamic scheduling scheme may be considered to address the problem; however, the decision of the run-time scheduler, which depends on the local on-line knowledge, may not give a good overall schedule. Although many static scheduling techniques can thoroughly check for the best assignment for dependent tasks, existing methods are not able to deal with such uncertainty. Therefore, either worst-case or average-case computation times for these tasks are usually assumed. Such assumptions, however, may not be applicable for the real operating situation and may result in an inefficient schedule.
For iterative applications, statistics for the uncertain tasks are not difficult to collect. In this paper, two novel loop scheduling algorithms, probabilistic retiming (PR) and probabilistic rotation scheduling (PRS), are proposed to statically schedule these tasks for non-resource constrained (assume an unlimited number of target processors) and resource constrained (assume a limited number of target processors) systems, respectively. These algorithms expose the parallelism of the probabilistic tasks across iterations as well as take advantage of the inherent statistical data. For a system without resource constraints, PR can be applied to optimize the input graph, i.e., reduce the length of the longest path of the graph such that the probability of the longest path computation time being less than or equal to some given computation time, c, is greater than or equal to a given confidence probability. The resulting graph implies a schedule for the non-resource constrained system where the longest path computation time determines its schedule length. On the other hand, the PRS algorithm is used to schedule uncertain tasks to a fixed number of multiple processing elements. It produces a schedule length from the given graph and incrementally reduces the length so that the probability of it being less than the previous length is greater than or equal to the given confidence probability.
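In symbols, with cp(G_r) denoting the longest path computation time of the graph retimed by r, c a target computation time, and η the given confidence probability (our shorthand for the quantities just named), the PR objective can be written as

    minimize_r c    subject to    Pr[ cp(G_r) ≤ c ] ≥ η,

and PRS repeatedly shortens a resource-constrained schedule subject to the same kind of probabilistic constraint on its length.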
In order to be compatible with the current high performance parallel processing technology, we assume that synchronization is required at the end of each iteration. Such a parallel computing style is also known as synchronous parallelism [10, 19]. Both PR and PRS take an input application which can be modeled as a probabilistic data-flow graph (PG), a generalized version of a data-flow graph (DFG) where a node corresponds to a task (a collection of statements) and a set of edges represents the dependencies between these tasks, and determine a schedule. The loop-carried dependences (dependency distances) between tasks in different iterations are represented by short bar lines on the corresponding edges. Since the computation times of the nodes can be either fixed or varied, a probability model is employed to represent the timing of the tasks.
Figure 1(b) shows an example of a PG consisting of 4 nodes. Note that such a graph models the code segment presented in Figure 1(a) where, for example, A in the PG corresponds to A_1 and A_2 of the code segment. Two bar lines on the edge between nodes D and A represent the dependency distances between these two nodes. The computation times of nodes A and C are known to be fixed (2 time units). In this code, the uncertainty occurs in the computation of nodes B and D. Assume that each arithmetic operation and the assignment operation (=) takes 1 time unit. Furthermore, the computation times of the comparison and random number generating operations are assumed negligible. Hence, it may take either 4 or 2 time units to execute node B. Put another way, about 20% of the time (51 out of 256), statement B_2 will be executed and node B will take 4 time units; otherwise node B takes only 2 time units (B_3 has only one operation). Likewise, approximately 25% of the time (64 out of 256), node D takes 4 time units, and about 75% of the time it will take 2 time units. Each entry in Figure 1(c) shows a probability associated with each node's possible computation time (the probability distribution). By taking into account these varying timing characteristics, the proposed technique can be applied to a wide variety of applications in high-level synthesis and compiler optimization.
Figure 1: A sample code segment, the corresponding PG and its computation times, and the retimed graph: (a) code segment; (b) PG; (c) timing information; (d) retimed PG.
Considerable research has been conducted in the area of finding a schedule of a directed acyclic graph (DAG) for multiple processing systems. (Note that DAGs are obtained from DFGs by ignoring edges containing one or more dependency distances.) Many heuristics have been proposed to schedule DAGs, e.g., list scheduling, graph decomposition [11, 13], etc. These methods, however, consider neither exploring the parallelism across iterations nor addressing the problems of probabilistic tasks.
For instruction level parallelism (ILP) scheduling, trace scheduling [9] is used to globally schedule DAGs by rearranging some operations in the graphs. Percolation scheduling is used in a development environment [1] for microcode compaction, i.e., parallelism extraction of horizontal microcode. Nevertheless, the graph model used in these techniques does not reflect the uncertainty in node computation times. In the class of global cyclic scheduling, software pipelining [16] is used to overlap instructions, whereby the parallelism is exposed across iterations. This technique, however, expands the graph by unfolding or unrolling [22] it, resulting in a larger code size. Loop transformations are also common techniques used to construct parallel compilers. They restructure loops from the repetitive code segment in order to reduce the total execution time of the schedule [2, 3, 20, 27, 28]. These techniques, however, do not consider that the target systems have a limited number of processors or that task computation times are uncertain.
Modulo scheduling [24-26] is a popular technique in compiler design for exploiting ILP in loops which results in optimized code. This framework specifies a lower bound, called the initiation interval (II), to start with and strives to schedule nodes based on such knowledge. Much research has been introduced to improve and/or expand the capability of modulo scheduling. For example, research was presented which improved modulo scheduling by producing schedules while considering a limited number of registers [7, 8, 21]. In [17], a combination of modulo scheduling and loop unrolling was introduced and applied in the IMPACT compiler [4]. These ILP approaches, however, are limited to solving problems without considering uncertain computation times (i.e., a probabilistic graph model).
Some research considers the uncertainty inherent in the computation time of nodes. Ku and De Micheli [14, 15] proposed a relative scheduling method which handles tasks with unbounded delays. Nevertheless, their approach considers a DAG as an input and does not explore the parallelism across iterations. Furthermore, even if the statistics of the computation times of uncertain nodes are collected, their method will not exploit this information. A framework that is able to handle imprecise propagation delays is proposed by Karkowski and Otten [12]. In their approach, fuzzy set theory [29] was employed to model the imprecise computation times. Although their approach is equivalent to finding a schedule of imprecise tasks to a non-resource constrained system, their model is restricted to a simple triangular fuzzy distribution and does not consider probability values.
For scheduling under resource constraints, the rotation scheduling technique was presented by Chao,
LaPaugh and Sha [5, 6] and was extended to handle multi-dimensional applications by Passos, Sha and
Bass [23]. Rotation scheduling attempts to pipeline a loop by assigning nodes from the loop to the system
with a limited number of processing elements. It implicitly uses traditional retiming [18] in order to reduce
the total computation time of the nodes along the longest paths (also called the critical paths), in the DFG.
In other words, the graph is transformed in such a way that the parallelism is exposed but the behavior of
the graph is preserved. In this paper, the rotation scheduling technique is extended so that it can deal with
uncertain tasks.
Since the computation time of a node in a PG is a random variable, the total computation time of this graph is also a random variable. The concept of a control step (the synchronization of the tasks "within" each iteration) is no longer applicable. A schedule conveys only the execution order or pattern of the tasks being executed in a functional unit and/or between different units. In order to compute the total computation time of this ordering, a probabilistic task-assignment graph (PTG) is constructed. A PTG is obtained from a PG in which non-zero dependency distance edges are ignored and each node is assigned to a specific functional unit in the system. The PTG also contains additional edges, called flow-control edges, where a connection from u to v means that u is executed immediately before v using the same functional unit. Note that in the non-resource constrained scenario, the PTG will be the DAG portion of the PG (a subgraph that contains only edges with no dependency distances).
Let us use the example in Figure 1(b). Assume that the term longest path computation time entails finding the maximum of the summation of computation times of nodes along paths which contain no dependency distances. After examining all possible longest paths of this graph, it is likely (60%) that its longest path computation time is less than or equal to 8. The details of how this value is determined are given in Section 3. Note that if all nodes in this graph are assigned their worst-case values, the longest path computation time (or schedule length for non-resource constrained systems) of this graph will be 10. One might wish to reduce the longest path of this graph in nearly all cases, for example reducing the chance of the clock period being greater than 6. By applying probabilistic retiming, the longest path computation time of the graph may be improved with respect to the given constraint. The modified graph after retiming is shown in Figure 1(d). The longest path computation time of this graph is less than or equal to 6 with 20% chance.
If we need to schedule nodes from the PG to two homogeneous functional units, a possible PTG can be
constructed as shown in Figure 2(a). Since the input graph is cyclic, an execution pattern of this PTG is
repeated, and the synchronization is applied at the end of each iteration, as shown in Figure 2(a). The solid
edges in this PTG represent the zero dependency distance edges, called dependency edges, from the input
graph (see Figure 1(b)). In this figure, nodes A, B and D are assigned to PE_0 and node C is bound to PE_1.
Note that D is implicitly executed after A; therefore, the direct edge from A to D from the original input
graph can be omitted. A corresponding static schedule, which shows only one iteration of the execution
pattern, is shown in Figure 2(c).
Figure 2: An example of a PTG, its corresponding repeated pattern, and the static execution order. [(a) the PTG; (b) initial execution pattern; (c) schedule]
The resulting longest path computation time of the PTG is less than 9 units with 90% certainty. This
longest path timing and its probability are also known as a schedule length for resource constrained systems.
We can improve the resulting schedule length by applying our probabilistic rotation scheduling algorithm
to the PG and its PTG. In this case the algorithm first selects the root node A to be rescheduled. Then
one dependency distance from the incoming edges of node A is moved to all its outgoing edges. Figure 3(a)
shows the resulting transformed PG. This new graph will be used as a reference to later
update the PTG. The new execution pattern is equivalent to reshaping the iteration window, as presented
in Figure 3(b).
Figure 3: The corresponding retimed PG and the repeated pattern after changing the iteration window. [(a) rotate A; (b) reshaping the iteration window]
By applying the PRS algorithm, node A from the next iteration (see Figure 3(b)) is introduced into the
static execution pattern. Note that node A has no inter-iteration dependencies associated with it. Therefore,
A can be rescheduled to any available functional unit. One possible schedule is to assign node A immediately
after node C in PE_1. The resulting PTG and the new execution order are shown in Figures 4(a) and 4(b),
respectively. The dotted arrow from C to A in this new PTG represents the flow-control edge. For this
PTG, the resulting schedule length will be less than 7 with higher than 90% confidence.
The remainder of this paper is organized as follows. Section 2 presents the graph model used in this
work; required terminology and fundamental concepts are also presented. Section 3 discusses probabilistic
retiming and the algorithm for computing the total computation time of a probabilistic graph. The probabilistic
rotation scheduling algorithm and its supporting routines are discussed in Section 4. Experimental results
are discussed in Section 5. Finally, Section 6 draws conclusions from this research.
Figure 4: The resulting PTG and its execution order after rescheduling A. [(a) PTG; (b) static execution order]
2 Preliminaries
In this section, the graph model which is used to represent tasks with uncertain computation times is
introduced. Terminology and notation relevant to this work are also discussed. We begin by examining a
DFG that contains tasks with uncertain computation times, which can be modeled as a probabilistic graph
(PG). The following gives the formal definition of such a graph.
Definition 2.1 A probabilistic graph (PG) is a vertex-weighted, edge-weighted, directed graph G = ⟨V, E, d, T⟩,
where V is the set of vertices representing tasks, E is the set of edges representing the data dependencies
between vertices, d is a function from E to the set of non-negative integers, representing the
number of dependency distances on an edge, and T_v is a random variable representing the computation
time of a node v ∈ V.
Note that traditional DFGs are a special case of PGs where all probabilities equal one. Each vertex
is weighted with a probability distribution of the computation time, given by T_v, where T_v is a discrete
random variable corresponding to the computation time of v such that Σ_x Pr(T_v = x) = 1. The notation
Pr(T = x) means "the probability that random variable T assumes value x". The probability distribution
of T is assumed to be discrete in this paper. The granularity of the resulting probability distribution, if
necessary, depends on the needed degree of accuracy.
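To make Definition 2.1 concrete, the following sketch (Python; the class and method names are ours, not taken from the paper) represents a PG with each node's computation time stored as a discrete distribution:

    # Minimal sketch of a probabilistic graph (PG); illustrative only.
    class PG:
        def __init__(self):
            self.nodes = {}      # node -> {computation time: probability}
            self.edges = []      # list of (u, v, dependency distance)

        def add_node(self, v, dist):
            # dist maps integer computation times to probabilities summing to 1
            assert abs(sum(dist.values()) - 1.0) < 1e-9
            self.nodes[v] = dict(dist)

        def add_edge(self, u, v, distance=0):
            self.edges.append((u, v, distance))

        def dag_edges(self):
            # the zero-dependency-distance edges form the DAG portion
            return [(u, v) for (u, v, d) in self.edges if d == 0]

    # Example: node A takes 1 time unit with probability 0.6 and 2 with 0.4.
    g = PG()
    g.add_node('A', {1: 0.6, 2: 0.4})
    g.add_node('B', {2: 1.0})
    g.add_edge('A', 'B')                # intra-iteration dependency
    g.add_edge('B', 'A', distance=1)    # inter-iteration dependency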
An edge e ∈ E from u to v, u, v ∈ V, is denoted by u →e v, and a path p starting from u and
ending at v is indicated by the notation u ⇝p v. The number of dependency distances of path p, d(p),
is the sum of d(e) over all edges e on p. As an example, Figure 1(b) shows the set of edges
of the graph and the number of dependency distances on each edge.
The execution order or execution pattern of a PG is determined by the precedence relations in the
graph. During one iteration of the graph, each vertex in the execution order is computed exactly once.
Multiple iterations are identified by an index i, starting from 0. Inter-iteration dependencies are represented by
weighted edges, or dependency distances. For any iteration j, an edge e from u to v with dependency distance
d(e) = i conveys that the computation of node v at iteration j depends on the execution of node u at iteration
j − i. An edge with no dependency distances represents a data dependency within the same iteration.
A legal data-flow graph must have strictly positive dependency-distance cycles, i.e., the summation of the
dependency distances along any cycle cannot be less than or equal to zero.
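This legality condition is easy to check mechanically: since dependency distances are non-negative, every cycle has strictly positive total distance exactly when the subgraph of zero-distance edges is acyclic. A sketch (Python; helper names are ours):

    def is_legal(nodes, edges):
        # edges: list of (u, v, distance); legal iff the zero-distance
        # subgraph contains no cycle (so every cycle carries a distance).
        succ = {v: [] for v in nodes}
        for (u, v, d) in edges:
            if d == 0:
                succ[u].append(v)
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {v: WHITE for v in nodes}
        def acyclic_from(v):
            color[v] = GRAY
            for w in succ[v]:
                if color[w] == GRAY:
                    return False          # back edge: a zero-distance cycle
                if color[w] == WHITE and not acyclic_from(w):
                    return False
            color[v] = BLACK
            return True
        return all(acyclic_from(v) for v in nodes if color[v] == WHITE)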
2.1 Retiming overview
Retiming operations rearrange registers in a circuit or dependency distances in a data-flow graph in such a
way that the behavior of the circuit is preserved while achieving a faster circuit. Traditionally, retiming [18]
optimizes a synchronous circuit (a graph G = ⟨V, E, d, t⟩) which has non-probabilistic functional elements,
i.e., each of the vertices is associated with a fixed numerical timing value. The optimization goal is
normally to reduce the clock period or cycle period Φ(G) (also known as the longest path computation time).
The cycle period represents the execution time of the longest path (referred to as the critical path) that
has all zero dependency distance edges. It is defined by the equation Φ(G) = max { t(p) : p is a path with d(p) = 0 },
where t(p) denotes the sum of the computation times t(v) of the vertices v along p.
Retiming of a graph G = ⟨V, E, d, t⟩ is a transformation function r from vertices to the set of integers,
Z. The retiming function describes the movement of dependency distances with respect to the
vertices so as to transform G into a new graph G_r = ⟨V, E, d_r, t⟩, where d_r represents the number of
dependency distances on the edges of G_r. The positive (or negative) value of the retiming function determines
the direction of the movement of the dependency distances. During retiming, the same number of dependency distances is
pushed from all incoming (outgoing) edges of a node to all outgoing (incoming) edges. If a single dependency
distance is pushed from all incoming edges of node u ∈ V to all outgoing edges of node u, then r(u) = 1.
Conversely, if one dependency distance is pushed from all outgoing to all incoming edges of u, then r(u) = −1.
The absolute value of the retiming function conveys the number of dependency distances that are pushed.
The algorithm presented in [18] to find a set of retiming functions minimizing the clock period of the graph
is a polynomial-time algorithm which has the time complexity O(|V||E| log |V|).
Consider Figure 5(a), which illustrates a simple graph with four vertices, A, B, C, and D. The numbers
next to the vertices in the figure represent the required computation times. Figure 5(b) represents a retimed
version of Figure 5(a) where r(A) = 2 and r(B) = r(C) = 1. In this case, the movement of
dependency distances is as follows: r(A) = 2 is equivalent to removing two dependency distances from
the incoming edge of vertex A, D →e A, and adding them onto edges A →e B and A →e C. The
retiming functions for nodes C and B are 1. This means that one dependency distance from
A →e B is pushed through vertex B to edge B →e D. Similarly, one dependency distance from edge A →e C
is pushed through vertex C to C →e D.
An equivalent set of retimings for Figure 5(b) is r(B) = r(C) = −1 and r(D) = −2 (with r(A) = 0).
This equivalent set of retimings produces the same graph by pushing the
dependency distances backward through nodes D, B and C, instead of forward through nodes A, B and C.
The dotted lines in Figure 5(a) represent the critical path of the original graph; after retiming,
the critical path becomes the one illustrated by the dotted line in Figure 5(b).
The following summarizes some essential properties of the retiming transformation.
1. r is a legal retiming if d_r(e) ≥ 0 for all e ∈ E.
2. For an edge u →e v, d_r(e) = d(e) + r(u) − r(v).
3. For a path u ⇝p v, d_r(p) = d(p) + r(u) − r(v).
4. In any directed cycle l of G and G_r, d_r(l) = d(l).

Figure 5: Retiming transformations (before and after retiming), where dotted edges represent the critical path.

Property 1 guarantees that the retimed graph will not have any edge containing a negative number of
dependency distances. Properties 2 and 3 explain the movement of such distances. If r(v), v ∈ V, has a
positive value, the distances are deleted from the incoming edge(s) of v and inserted onto the outgoing
edge(s), and vice versa if r(v) has a negative value. Finally, Property 4 ensures that the number of
dependency distances in any loop of the graph remains constant; recall that all cycles must have at least
one dependency distance. Since retiming is an optimization technique that assumes an unlimited number
of target resources, the resulting longest path computation time after the transformation is the underlying
schedule length. Consider only the DAG part of the retimed graph, where edges with non-zero dependency
distances are ignored. The iteration boundaries of this schedule will be at the root
nodes (beginning of the iteration) and at the leaf nodes (end of the iteration).
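As an illustration of Properties 1 and 2, the following sketch (Python; the edge distances are hypothetical, since the actual numbers of Figure 5 are not reproduced here) applies a retiming function to the edge distances and checks legality:

    def retime(edges, r):
        # Apply retiming r (dict: node -> int) using d_r(u -> v) = d + r(u) - r(v);
        # raise an error if any edge would get a negative distance (Property 1).
        retimed = []
        for (u, v, d) in edges:
            dr = d + r.get(u, 0) - r.get(v, 0)
            if dr < 0:
                raise ValueError("illegal retiming on edge %s -> %s" % (u, v))
            retimed.append((u, v, dr))
        return retimed

    # Forward retiming through A, B, C and its backward equivalent through
    # D, B, C (the two functions differ by a constant) give the same graph.
    edges = [('A','B',0), ('A','C',0), ('B','D',0), ('C','D',0), ('D','A',2)]
    forward  = retime(edges, {'A': 2, 'B': 1, 'C': 1, 'D': 0})
    backward = retime(edges, {'A': 0, 'B': -1, 'C': -1, 'D': -2})
    assert forward == backward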
2.2 Rotation scheduling
In [5], Chao, LaPaugh and Sha proposed an algorithm, called rotation scheduling, which uses the retiming
algorithm to deal with scheduling a cyclic DFG under resource constraints. The input to the rotation
scheduling algorithm is a DFG and its corresponding static schedule, i.e., a synchronized order of the nodes
in the DFG. Rotation scheduling reduces the schedule length (the number of control steps needed to execute
one iteration of the schedule) by exploiting the parallelism between iterations. This is accomplished by
shifting the scope of a static schedule in one iteration, called the iteration window, down by one control
step. Looking at a static iteration, rotation scheduling in effect rotates tasks from the top of the schedule
of each iteration down to the end. This process is equivalent to retiming those tasks (nodes in the DFG):
one dependency distance is deleted from all their incoming edges and added to all their outgoing
edges, resulting in an intermediate retimed graph. Once the parallelism is extracted, the algorithm reassigns
the rotated nodes to new positions so that the schedule length is shorter.
As an example, the cyclic DFG in Figure 6(a) is to be scheduled using two processing elements. Figure
6(b) presents one possible static schedule for such a graph. By using rotation scheduling, this schedule
can be optimized. First, the algorithm takes node A from the next iteration. The original graph is retimed
with r(A) = 1, i.e., one dependency distance from E →e A is moved to all outgoing edges of A (see Figure 6(c)).
By doing so, node A now can be executed at any control step in this new iteration window. Assume that
rotation scheduling uses a re-mapping strategy that places node A immediately after node C in PE_1. The
resulting static schedule length is then reduced by one control step, as shown in Figure 6(d). In Section 4,
the concept of the schedule length and the re-mapping strategy will be extended to handle probabilistic
inputs.
Figure 6: An example of how rotation scheduling optimizes the underlying schedule length. [(a) cyclic DFG; (b) static schedule; (c) retimed graph; (d) resulting schedule]
3 Non-resource constrained scheduling
Assuming there are infinite available resources, a PG can be optimized with respect to a desired longest path
computation time and confidence level. In effect, this is an attempt to reduce the longest path computation
time of the graph. The distribution of dependency distances in the PG is done according to a probabilistic
timing constraint, where the probability of the timing result (longest path computation time) being
less than or equal to a given value c must be greater than some confidence probability value θ. This resulting
timing information is essentially the schedule length of the non-resource constrained problem. This section
presents an efficient algorithm for optimizing a probabilistic graph with respect to a desired computation
time (c) and its corresponding confidence probability (θ). In order to evaluate the modified graph, we
need to know the probability distribution associated with its computation time. The remaining subsections
discuss these issues.
3.1 Computing maximum reaching time
Let G_dag be the DAG portion (the subgraph that has only edges with no dependency distances) of a
probabilistic graph G. Assume that two dummy nodes, v_s and v_d, are added to G_dag, where v_s connects
to all source nodes (roots) and v_d is connected by all sink nodes (leaves). Traditionally, the longest path
computation time of a graph is computed by maximizing the summation of the computation times of the
nodes along the critical (longest) paths between these dummy nodes. Likewise, for a probabilistic graph we
can compute the summation of the computation times for each path from v_s to v_d in the graph. In this case
the largest summation value is called the maximum reaching time, or mrt, of the graph. The mrt of a PG
describes the possible longest path computation times of the graph and their associated probabilities. Therefore,
unlike the traditional approach, the summation and maximum functions of the computation times along the
paths in a PG become functions of multiple random variables.
To compute the mrt of a PG, we need to modify the graph so that v_s and v_d are connected to the
DAG portion of the original graph. Formally, a set of zero dependency distance edges is used to connect
vertex v_s to all roots, and to connect all leaves to vertex v_d. Since it is non-trivial to efficiently compute
a function of dependent random variables, Algorithm 1 computes mrt(G) assuming that the random
variables are independent. This algorithm traverses the input graph in a breadth-first fashion starting from
v_s and ending at v_d. In general, the algorithm accumulates the probabilistic computation times along each
traversed path. When it reaches a node that has more than one parent, all the values associated with its
parents are maximized.
Algorithm 1 Calculate maximum reaching time of graph G
Require: probabilistic graph G
Ensure: mrt(G) = temp_mrt(v_s, v_d)
1: G' := DAG portion of G (the edges e ∈ E with d(e) = 0)
2: connect v_s to every root of G' and every leaf of G' to v_d with zero-distance edges
3: ∀ node u, temp_mrt(v_s, u) := 0; T_{v_s} := T_{v_d} := 0; insert v_s into the Queue
4: while Queue ≠ ∅ do
5:   get node u from top of the Queue
6:   temp_mrt(v_s, u) := temp_mrt(v_s, u) + T_u
7:   for all u →e v do
8:     decrement the incoming degree of node v by one
9:     temp_mrt(v_s, v) := max(temp_mrt(v_s, v), temp_mrt(v_s, u))
10:    if the incoming degree of node v becomes 0 then
11:      insert node v into the Queue
12:    end if
13:  end for
14: end while
Lines 1 and 2 produce DAG G' from G containing only edges e ∈ E with d(e) = 0, and the additional
zero dependency distance edges connecting v_s to every root node of G and connecting every leaf node
of G to v_d. Line 3 initializes the temp_mrt(v_s, u) value for each vertex u in the new graph and sets the
computation times T_{v_s} and T_{v_d} to zero. Lines 4-14 traverse the graph in topological order and compute
the mrt of each v with respect to v_s (temp_mrt(v_s, v)). Note that the temp_mrt for node v with respect to v_s
is originally set to zero. It stores the current maximum computation time over all of node v's visited parents.
When the first parent of v is dequeued, v has its indegree reduced by one (Line 8) and temp_mrt is updated
(Line 9). Vertex v's other parents are in turn dequeued, and the process is repeated. Eventually, the last
parent of node v will be dequeued and maximized. At this point, node v will be inserted into the queue,
since all parents have been considered, i.e., the indegree of v equals zero (Line 10). Node v will eventually be
dequeued at Line 5. Line 6 will then add T_v to the temp_mrt of node v, producing the final mrt with respect
to all paths reaching node v.
Note that the initial computation times are integers and that the probabilities associated with times
greater than the given value c are accumulated as one value in the algorithm, so only O(c) values
need to be stored for each vertex. Therefore, the time complexity for calculating the summation (Line 6) or
the maximum (Line 9) over two vertices is O(c^2). Since the algorithm computes the result in a breadth-first
fashion, the running time of Algorithm 1 is O(c^2 |V||E|), while the space complexity is bounded by O(c|V|).
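The distribution arithmetic used by Algorithm 1 can be sketched as follows (Python; function names are ours). Sums of independent discrete random variables are convolutions, the maximum combines the probabilities of each pair of outcomes, and the traversal mirrors Lines 4-14:

    from collections import defaultdict, deque

    def dist_add(p, q):
        # distribution of X + Y for independent discrete X, Y
        out = defaultdict(float)
        for x, px in p.items():
            for y, qy in q.items():
                out[x + y] += px * qy
        return dict(out)

    def dist_max(p, q):
        # distribution of max(X, Y) for independent discrete X, Y
        out = defaultdict(float)
        for x, px in p.items():
            for y, qy in q.items():
                out[max(x, y)] += px * qy
        return dict(out)

    def mrt(nodes, dag_edges):
        # nodes: node -> {time: prob}; dag_edges: zero-distance edges (u, v).
        # Assumes independence, as Algorithm 1 does.
        indeg, succ = defaultdict(int), defaultdict(list)
        for u, v in dag_edges:
            indeg[v] += 1
            succ[u].append(v)
        temp = {v: {0: 1.0} for v in nodes}           # temp_mrt(v_s, v)
        queue = deque(v for v in nodes if indeg[v] == 0)
        done = {}
        while queue:
            u = queue.popleft()
            done[u] = dist_add(temp[u], nodes[u])     # Line 6: add T_u
            for v in succ[u]:
                temp[v] = dist_max(temp[v], done[u])  # Line 9: maximize
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
        result = {0: 1.0}                             # maximize over the leaves (v_d)
        for v in nodes:
            if not succ[v]:
                result = dist_max(result, done[v])
        return result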
3.2 Probabilistic retiming
Using the concept of the mrt, Algorithm 2 presents the probabilistic retiming algorithm, which reduces the
longest path computation time of the given PG to meet a timing constraint. Such a constraint is
Pr(mrt(G) ≤ c) ≥ θ, where c is the desired longest path computation time of the graph and θ is the
confidence probability. This requirement can be rewritten as Pr(mrt(v_s, v_d) > c) ≤ 1 − θ.
The algorithm retimes vertices whose probability of a computation time greater than c is larger than
the acceptable probability value. Initially, the retiming value for each node is set to zero and non-zero
dependency distance edges are eliminated. Then, v_s is connected to the root vertices of the resulting DAG
and v_d is connected by the leaf vertices of the DAG. Lines 11-24 traverse the DAG in a breadth-first
manner and update the temp_mrt for each node as in Algorithm 1. Once a vertex's temp_mrt has been
finalized, it is tested to see whether the requirement Pr(temp_mrt > c) ≤ 1 − θ is met (Line 17). Line 18
then decreases the retiming value of any vertex that violates the requirement, unless the vertex has previously been retimed
in the current iteration. The algorithm then repeats the above process using the retimed graph obtained
from the previous iteration. If the algorithm finds a solution for a given clock period, the final retimed
graph implies the number of resources required to achieve such a schedule length.
Line 18 pushes a dependency distance onto all incoming edges of a node that violates the timing constraint.
Since all descendants of this node will also be retimed, Line 18 in essence moves a dependency
distance from below v_d to above this node. In other words, all nodes from u to v_d are retimed by −1; hence
only the incoming edges of vertex u will have an additional dependency distance. Once no nodes are retimed
in the current iteration, the requirement Pr(mrt(v_s, v_d) > c) ≤ 1 − θ is met. The algorithm stops and reports
the resulting retiming functions associated with the nodes in the graph. If this requirement is not yet met, the
algorithm repeats, at most |V| times. Since the computation of the maximum reaching time is performed in
every iteration, the time complexity of this algorithm is O(c^2 |V||E|) per iteration, while the space complexity remains the
same as in the maximum reaching time algorithm. The retiming function returned by Algorithm 2
guarantees (as a necessary condition) the following:
Theorem 3.1 Given a desired cycle period c and a confidence probability θ,
if Algorithm 2 (the probabilistic retiming algorithm) finds a solution, then the resulting retimed graph G_r
satisfies the requirement Pr(mrt(G_r) ≤ c) ≥ θ.
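The violation test and the retiming update at the heart of Algorithm 2 can be sketched as follows (Python; compute_temp_mrt is a hypothetical helper returning temp_mrt(v_s, v) for every v, e.g., adapted from the mrt sketch above):

    def violates(dist, c, theta):
        # the requirement is Pr(temp_mrt > c) <= 1 - theta
        return sum(p for t, p in dist.items() if t > c) > 1.0 - theta

    def retiming_pass(nodes, edges, r, c, theta):
        # One outer iteration of Algorithm 2 (sketch). edges describe the
        # currently retimed graph; r is updated in place. Returns True if
        # any node was retimed, i.e., another iteration is needed.
        dag = [(u, v) for (u, v, d) in edges if d == 0]
        changed = False
        for v, dist in compute_temp_mrt(nodes, dag).items():
            if violates(dist, c, theta):
                r[v] = r.get(v, 0) - 1   # push a distance onto v's incoming edges
                changed = True
        return changed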
3.3 Example
Consider the PG and the probability distributions associated with the nodes of the graph in Figure 7. For
this experiment, let 6 be the desired longest path computation time and 0.2 be the acceptable
probability (i.e., θ = 0.8). Algorithm 2 works by first checking and computing the mrt from v_s to A and E. Then, it
topologically calculates the mrt of the adjacent nodes of A and E. After it computes the mrt of node I,
the mrt of v_d is obtained.
Three iterations of Algorithm 2, which compute the maximum reaching time from v_s to v, ∀v ∈ V,
including v_d, are tabulated in Tables 1-3. After the first iteration, the retiming values associated with nodes
D, F, H, and I are shown in Column r(v) of Table 1. The values in Columns 2-8 show the probability
Algorithm 2 Probabilistic retiming
Require: probabilistic graph G, a requirement Pr(temp_mrt(G) > c) ≤ 1 − θ
Ensure: retiming function r for each node to meet the requirement
1: ∀ node v ∈ V, initialize the retiming function r(v) to 0
2: for i := 1 to |V| do
3:   retime graph G with the retiming function r, obtaining G_r
4:   G' := directed acyclic portion (DAG) of G_r
5:   prepend dummy node v_s to G' {connects to all root nodes}
6:   append dummy node v_d to G' {connected by all leaf nodes}
7:   for all nodes u in G' do
8:     temp_mrt(v_s, u) := 0
9:   end for
10:  insert v_s into the Queue; T_{v_s} := T_{v_d} := 0 {set the timing of the two dummies to zero}
11:  while Queue ≠ ∅ do
12:    get node u from the Queue
13:    temp_mrt(v_s, u) := temp_mrt(v_s, u) + T_u {adding two random variables}
14:    for all u →e v do
15:      decrement the number of incoming edges of node v by one
16:      temp_mrt(v_s, v) := max(temp_mrt(v_s, v), temp_mrt(v_s, u)) {maximizing two random variables}
17:      if Pr(temp_mrt(v_s, u) > c) > 1 − θ and u has not been retimed in this iteration then
18:        r(u) := r(u) − 1 {move one dependency distance from all outgoing edges to all incoming edges}
19:      end if
20:      if the number of incoming edges of node v is 0 then
21:        insert node v into the ready Queue
22:      end if
23:    end for
24:  end while
25: end for
Figure 7: An example of a 9-node graph and its corresponding probabilistic timing information. [(a) the graph; (b) the timing table]
that the mrt(v_s, v), ∀v ∈ V, ranges from 1 to 6 or is greater than 6 (> 6), respectively. The retimed graph
associated with the retiming values in Table 1 after the first iteration is presented in Figure 8(a). Table 2
presents the maximum reaching time from the dummy node v_s to each node v ∈ V, as well as the retiming
function for each vertex after the second iteration. Figure 8(b) presents the retimed graph corresponding to
the retiming function presented in Table 2. By computing the mrt(v_s, v) of the retimed graph in Figure 8(b),
it becomes apparent that nodes B and C need to be retimed. Figure 8(c) illustrates the final retimed graph
in accordance with the retiming function presented in Table 3. Note that Table 3 also presents the final
maximum reaching time and retiming value for each vertex, which satisfy the required configuration. From
this final retimed graph, one could therefore allocate a minimum of five processing elements in order to
compute the graph in six time units with 80% confidence.
Figure 8: Retimed graphs corresponding to Tables 1-3. [(a) after the first iteration; (b) after the second iteration; (c) after the third iteration]

Table 1: First iteration, showing the probability distributions of mrt(v_s, v) and the retiming value r(v) for each node.
4 Resource-constrained scheduling
In this section, we present a probabilistic scheduling algorithm which considers an environment with
a limited number of resources. The traditional rotation scheduling framework is extended to handle the
probabilistic environment. We call this algorithm probabilistic rotation scheduling (PRS). Given a PG,
the algorithm iteratively optimizes the PG with respect to the confidence probability θ and the number of
resources.

Table 2: Second iteration, showing the probability distribution of mrt(v_s, v).

Table 3: Third iteration, showing the probability distribution of mrt(v_s, v).

Before presenting this algorithm, we first discuss two important concepts that make scheduling in the
probabilistic environment different from traditional scheduling problems. First, in the probabilistic model a
synchronization control step is not available. A node can begin its execution if all of its parents have already
been executed. This is similar to the asynchronous model, where data request and handshaking signals are
used to communicate between nodes. The schedule can be viewed as a directed graph where edges show
either the data required to execute a node or the order in which nodes are executed in a particular
functional unit. Note that a synchronization will be applied at the end of each iteration. Second, the task
re-mapping strategy for PRS should take the probabilistic nature of the problem into account. The following
subsections discuss these concepts in more detail.
4.1 Schedule length subject to the confidence probability
The concept of mrt can be used to compute the underlying schedule length. Hence, the conventional way
of calculating the schedule length has to be redefined to include the mrt notion. In order to do so, we augment
the probabilistic data-flow graph by adding the resource information and extra edges between two nodes
that are executed consecutively in the same functional unit and have no data dependencies between them. This
graph, called the probabilistic task-assignment graph (PTG), represents a schedule under the probabilistic
model.
Definition 4.1 A probabilistic task-assignment graph (PTG) G = ⟨V, E, w, T, b⟩ is a vertex-weighted, edge-weighted,
directed acyclic graph, where V is the set of vertices representing tasks, E is the set of edges
representing the data dependencies between vertices, w is an edge-type function from e ∈ E to {0, 1},
where 0 represents the dependency-edge type and 1 represents the flow-control-edge type, T_v is
a random variable representing the computation time of a node v ∈ V, and b is a processor-binding
function from v ∈ V to {PE_1, ..., PE_n}, where PE_i is the i-th processing element and n is the total number
of processing elements.
Figure 9: An example of a probabilistic task-assignment graph (PTG) where the nodes are assigned to PE_0 and PE_1.
As an example, Figure 9 shows a PTG with two functional units, PE_0 and PE_1. Nodes
B and D are assigned to PE_0; that is, b(B) = b(D) = PE_0. The edge set includes the flow-control edge
C →e1 A. Note that if, along a chain of such edges, all of the nodes are scheduled to the same processor, a
true dependence edge between the chain's endpoints, such as the edge A → D here, can be ignored.
Note also that removing redundant edges is simple and should
be utilized to speed up the calculation of the mrt. In Figure 9, edge C →e1 A is control-typed, since A has
no data dependency on C but has to execute after C due to resource constraints. Other edges represent data
dependencies. Applying the mrt algorithm to the PTG, we can define the probabilistic schedule length.
This length is expressed in terms of the confidence probability as follows.
Definition 4.2 The probabilistic schedule length of a PTG G with respect to a confidence level θ,
psl(G, θ), is the smallest longest path computation time c such that Pr(mrt(G) > c) < 1 − θ.
For example, consider a probability distribution of mrt(G) over its possible computation times and their
probabilities. Given a confidence probability θ = 0.8, the probabilistic schedule length psl(G, 0.8) is 14.
This is because 14 is the smallest possible computation time where Pr(mrt(G) > 14) < 0.2; i.e., 0.04365 +
… = 0.07818 < 0.2. Therefore, with above 80% confidence, the computation time of G is at most 14.
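Definition 4.2 translates directly into a small routine (sketch, Python; names are ours):

    def psl(mrt_dist, theta):
        # smallest c with Pr(mrt > c) < 1 - theta
        tail = sum(mrt_dist.values())   # Pr(mrt > c) for c below every outcome
        for c in sorted(mrt_dist):
            tail -= mrt_dist[c]         # tail is now Pr(mrt > c)
            if tail < 1.0 - theta:
                return c
        return max(mrt_dist)

    # With a distribution whose tail above 14 sums to 0.07818,
    # psl(dist, 0.8) == 14, matching the example above.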
4.2 Task re-mapping heuristic: template scheduling
In this subsection we propose a heuristic, called template scheduling (TS), to search for a place to re-schedule
a task. This re-mapping phase plays an important role in reducing the probabilistic schedule length in PRS.
Since the computation time is a random variable, there is no fixed control step within an iteration. As long
as a node is placed after its parents, any scheduling location is legal.
In template scheduling, a schedule template is computed using the expectation of the computation time
of each node. This template implies not only the execution order, but also the expected control step at which a
node can start execution. In order to determine an expected control step, each node in a PTG is visited in
topological order and the following is computed.
Definition 4.3 The expected control step of node v of a PTG, Ecs(v), is computed by

Ecs(v) = max_{u ∈ parent(v)} ( Ecs(u) + e(u) ),

where e(u) represents the expected computation time of node u.

This definition assumes node v can start its execution right after all parents finish their execution. By
observing this template, one can ascertain how long (in the number of control steps) each processing element
would be idle. Template scheduling decides where to re-schedule a node using each node's "degree of
flexibility".

Definition 4.4 Given a PTG, the degree of flexibility of node u with respect to the
processing element PE_i, dflex(u, i), is computed by

dflex(u, i) = Ecs(v) − ( Ecs(u) + e(u) ),

where v is the node executed immediately after u, and u and v are assigned to PE_i.
The degree of flexibility conveys the expected size of the available time slot within PE_i. Figure 10 shows a typical
case where node v has more than one parent: u_1, u_2, and u_3 are parents of node v.

Figure 10: An example of how to obtain the expected control step.

These parents have expected computation times 1, 4, and 3, respectively. In the same order, the expected control steps
of these nodes are 3, 4.7, and 3.7, respectively. Therefore, the expected control step Ecs(v) = max(3 + 1, 4.7 + 4, 3.7 + 3) = 8.7.
According to Definition 4.4, the degree of flexibility of u_1 with respect to PE_0 is 8.7 − (3 + 1) = 4.7. This value conveys
how long PE_0 has to wait before v can be executed. Note that the degree of flexibility of a node which is
executed last in any PE is undefined. A small sketch of these computations is given below; the steps of Algorithm 3 then use them to compute the new G after rescheduling node v.
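Definitions 4.3 and 4.4 can be sketched as follows (Python; a topological order and the parent/successor maps are assumed to be given, and the names are ours):

    def compute_ecs(topo_order, parents, e):
        # Ecs(v) = max over parents u of (Ecs(u) + e(u)); 0 with no parents.
        ecs = {}
        for v in topo_order:
            ecs[v] = max((ecs[u] + e[u] for u in parents[v]), default=0.0)
        return ecs

    def dflex(u, next_on_pe, ecs, e):
        # Ecs(v) - (Ecs(u) + e(u)) for the node v right after u on the same
        # PE; undefined (None) if u is the last node on its PE.
        v = next_on_pe.get(u)
        return None if v is None else ecs[v] - (ecs[u] + e[u])

    # Figure 10: parents with e = 1, 4, 3 and Ecs = 3, 4.7, 3.7 give
    # Ecs(v) = max(4, 8.7, 6.7) = 8.7 and dflex(u1) = 8.7 - (3 + 1) = 4.7.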
Algorithm 3 Rescheduling rotated nodes using the template scheduling heuristic
Require: PTG, rescheduled node v, and confidence probability θ
Ensure: resulting new PTG with the shortest psl
1: assume that all nodes in the PTG have their expected computation times pre-computed
2: ∀ node u ∈ V, compute Ecs(u) and dflex(u)
3: for each of the target processors PE_i do
4:   using the maximum dflex, select node x which is scheduled on PE_i
5:   schedule v after x
6:   reconstruct a new PTG (associated with PE_i) with this assignment
7:   compare it with the other PTGs and keep the one that has the best psl
8: end for
This rescheduling policy hopes that placing a node in the processor with the biggest expected idle time
slot results in the least potential for increasing the total execution time. If the computation time of the node is
much smaller than the expected time slot, this approach may allow the next rescheduled node to be placed
there as well. This is similar to the worst-fit policy, where the scheduler strives to schedule a node into the biggest
slot. In Section 5, we demonstrate the effectiveness of this heuristic over the method that exhaustively finds
the best place for a node. Note that this exhaustive search is not performed globally; rather, the search is
done locally in each re-mapping iteration. We call this heuristic local search (LS).
4.3 Rotation phase
Having discussed the rescheduling heuristic, we now present the probabilistic rotation scheduling
(PRS) algorithm. Note that the previous heuristic, or any rescheduling heuristic, can be used as the rescheduling part of
the PRS algorithm. The experiments in Section 5 show the efficacy of the PRS framework with different
rescheduling heuristics.
Algorithm 4 Probabilistic rotation scheduling
Require: PG and the designer's confidence probability θ
Ensure: PTG with the shortest psl
1: pre-compute the expected computation time of each node
2: G_s ← find_initial_schedule {finding an initial schedule for the PG and keeping it in G_s}
3: for i := 1 to 2|V| do
4:   R ← all roots of the DAG portion of G_s {these are the nodes to be rotated}
5:   retime each of the nodes in R
6:   reschedule these nodes one by one using the heuristic previously presented
7:   compute the psl of the new graph with respect to θ
8:   if psl(G_s, θ) < psl(G_best, θ) then
9:     G_best ← G_s {G_best is initialized to G_s first}
10:  end if
11: end for
In order to use template scheduling, the expected computation time of each task is precomputed.
After that, an initial schedule is constructed by find_initial_schedule. Note that the algorithm for creating the
initial schedule can be any DAG scheduling algorithm, e.g., the probabilistic list scheduling discussed previously. Rotation
scheduling loops 2|V| times in order to reschedule all nodes in the graph at least once. Like traditional rotation
scheduling, only nodes that have all their incoming edges with non-zero dependency distances are selected
to be rescheduled. One dependency distance is drawn from each of these edges and placed on their
outgoing edges. Then these rotated nodes are rescheduled one by one using the template scheduling
technique. After all rotated nodes are scheduled, if the resulting PTG is better than the current best one,
Algorithm 4 saves the better PTG.
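The rotation step (Lines 4-5 of Algorithm 4) can be sketched as follows (Python; the edge representation matches the earlier sketches, and the convention d_r(u -> v) = d + r(u) - r(v) is assumed):

    def rotate(orig_edges, all_nodes, r):
        # Find the roots of the DAG portion of the currently retimed graph
        # and retime each by +1, drawing one dependency distance from each
        # incoming edge and pushing it onto each outgoing edge.
        current = [(u, v, d + r.get(u, 0) - r.get(v, 0))
                   for (u, v, d) in orig_edges]
        has_zero_in = {v for (u, v, d) in current if d == 0}
        roots = [v for v in all_nodes if v not in has_zero_in]
        for v in roots:
            r[v] = r.get(v, 0) + 1
        retimed = [(u, v, d + r.get(u, 0) - r.get(v, 0))
                   for (u, v, d) in orig_edges]
        return retimed, roots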
4.4 Example
Let us revisit the PG example from Section 3.3, shown in Figure 11(a), with the corresponding computation
times in Figure 11(b). The confidence probability is given as θ = 0.8. After probabilistic list scheduling is applied, the
initial execution order is determined as shown in Figure 12(a). The corresponding PTG is presented in
Figure 12(b). Nodes A, B, H and I are assigned to PE_0, nodes E and F are scheduled on PE_1, and nodes C, G
and D are assigned to PE_2. Edges between consecutively executed nodes on the same processing element,
such as G →e D, are flow-control edges.
Figure 11: An example of the computation times of the graph in Figure 1(b). [(a) the graph; (b) the timing table]
For this assignment, the mrt of such a PTG is computed as a distribution over the possible computation
times, from which, with higher than 80% confidence probability, psl(G, 0.8) is obtained.
According to the structure of the PTG, either A or E can be rescheduled. In the first rotation, PRS
selects A to be rescheduled. One dependency distance is removed from all incoming edges of A and pushed
to all outgoing edges of A. The resulting retimed PG is shown in Figure 13(a). In this graph, node
A has no direct data dependency on any node. Therefore, A can be placed at any position in the
schedule. Figure 13(b) shows the expected computation time, the expected control step, and the degree of
flexibility of each node in this PTG.
Based on the values in the table from Figure 13(b), it is apparent that the expected waiting time between B
and H in PE_0 would be 2.2 units. Template scheduling therefore decides to place A in the position between
B and H in PE_0, where the psl can be reduced. The resulting PTG and its execution order are shown in
Figure 14, where the psl(G, 0.8) is reduced. After running PRS for 2|V| iterations, the shortest possible schedule length
was found in the 15th iteration. In Figure 15, we present the resulting schedule of this trial, whose length
is at most 9 with probability greater than 80% (psl(G, 0.8) = 9).
Figure 12: The initial assignment and the corresponding execution order. [(a) static execution order; (b) PTG]
Figure 13: The probabilistic graph after A is rotated, and the template values. [(a) the new PG; (b) Ecs and dflex]
Figure 14: The PTG, execution order, and mrt after the first rotation. [(a) PTG; (b) execution order]
Figure 15: The final PG, PTG, and execution order, where psl(G, 0.8) = 9. [(a) PG; (b) PTG; (c) final execution order]
5 Experimental Results
In this section we perform experiments using both non-resource and resource constrained scheduling on
two general classes of problems. The first class consists of real applications, which may have a combination of
nodes with probabilistic computation times and nodes with fixed computation times. The second consists of well-known
DSP filter benchmarks. Since these benchmarks contain two uniform types of nodes, namely multiplication
and addition, basic timing information consisting of three probability distributions is assigned to each
benchmark graph. In order to show the usability of the proposed algorithm, three applications are profiled
to get their probabilistic timing information. The profiler reports the processing time requirements in these
applications and the corresponding frequency of each time value. The frequency of timing occurrences is
used to obtain the node probability distributions. A node in these graphs may represent a large number
of operations, which causes the uncertain computation time, as well as operations which have fixed timing
information. Each timing value is discretized to a smaller unit, such as nanoseconds.
The DSP filter benchmarks used in these experiments include a biquadratic IIR filter, a 3-stage IIR filter,
a 4th-order Jaumann wave digital filter, a 5th-order elliptic filter, an unfolded 5th-order elliptic filter with
an unfolding factor equal to 4 (uf = 4), an all-pole lattice filter, an unfolded all-pole lattice filter (uf = 2),
an unfolded all-pole lattice filter (uf = 6), a differential equation solver, and a Volterra filter. The rest of
the benchmarks are an image processing application (Floyd-Steinberg), an application that searches for a
solution maximizing some unknown function using a genetic algorithm, and the famous example of an
application in the fuzzy logic area, the inverted pendulum problem. All of the experiments were performed
on a SUN UltraSparc(TM).
5.1 Non-resource constrained experiments
In each experiment, for a given confidence level θ, Algorithm 2 is used to search for the best longest
path computation time. In order to do this, the current desired longest path computation time (c) is varied
based on whether or not a feasible solution is found. For instance, if c is too small, the algorithm will report
that no feasible solution exists. In this case, c is increased and Algorithm 2 is re-applied. This process is
repeated until the smallest feasible c is found.
Table 4 shows the results for traditional retiming using worst-case computation time assumptions (column
c_worst) and for the probabilistic model with two high confidence probabilities, θ = 0.9 and θ = 0.8. The average
running time for these experiments was 53.10 seconds, including the input/output
interfaces. The algorithms are implemented in a straightforward way, where arrays are used to store the probability
distributions. Column 3 in the table presents the optimized longest path computation times obtained from
applying traditional retiming using the worst-case computation time for each node in the benchmark graphs.
For the columns where θ = 0.9 and θ = 0.8, the probabilistic retiming algorithm is applied to the benchmarks
with each of these confidence probabilities as its input. The numbers shown in both columns are
the given c where Pr(mrt(G) ≤ c) ≥ θ. The value c from this requirement is the smallest input value for which
Algorithm 2 can find a solution satisfying the requirement. Notice that for all benchmarks the longest
path computation times with θ = 0.9 are still smaller than the computation times in Column 3. In order
to quantify the improvement of the probabilistic retiming algorithm, the "%" columns list the percentage of
computation time reduction with respect to the value from Column 3.
Benchmark | #nodes | c_worst | c (θ=0.9) | % | c (θ=0.8) | %
Biquad IIR | 8 | 78 | … | … | … | …
Diff. Equation | 11 | 118 | 81 | 31 | 77 | 35
3-stage direct IIR | 12 | 54 | 44 | 19 | 41 | 24
All-pole Lattice | 15 | 157 | 120 | 24 | 117 | 25
4th order WDF | 17 | 156 | 116 | 26 | 112 | 28
Volterra | … | … | … | … | … | …
5th Elliptic | 34 | 330 | 240 | 28 | 236 | 29
All-pole Lattice (uf=2) | … | … | … | … | … | …
All-pole Lattice (uf=6) | 105 | 1092 | 811 | 26 | 806 | 26
5th Elliptic (uf=4) | 170 | 1633 | 1185 | 27 | 1174 | 28
Genetic application | … | … | … | … | … | …
Fuzzy application | 24 | 19 | 17 | 11 | 17 | 11

Table 4: Probabilistic retiming versus worst-case traditional retiming. Average completion time for running probabilistic retiming against these benchmarks is 53.10 seconds.
Table 5 compares the probabilistic retiming algorithm to the traditional retiming algorithm with average
computation times used for each node in the graphs. First, the probabilistic nodes of each input graph
are converted to fixed-time nodes, resulting in G_avg; i.e., each node assumes its average computation time
rather than a probabilistic computation time. Traditional retiming is then applied to the resulting graph,
producing graph G_r_avg. The purpose of this table is to compare G_r_avg (obtained from running traditional
retiming on G_avg) with the retimed PGs. In order to compare with the results produced by the proposed
algorithm, the placement of dependency distances in each G_r_avg is preserved while the average
computation times are replaced with the original probabilistic computation times. Put another way, we transformed
each G_r_avg back into a probabilistic graph. Algorithm 1 is then used to evaluate these graphs, and only the
expected value of each result is shown in the table. Columns 3 and 4 present the expected values of the
results obtained from running probabilistic retiming on each PG with confidence probabilities of 0.9 and
0.8. Note that these results are consistently better (smaller) than the results obtained
from running traditional retiming on each G_avg. Hence, the approach of using the expected value for
each node is neither a good heuristic in the initial design phase nor does it give any quantitative confidence
in the resulting graphs.
5.2 Resource-constrained experiments
We tested the probabilistic rotation scheduling (PRS) algorithm on the selected filter and application benchmarks:
the 5th-order elliptic filter, 3-stage IIR filter, Volterra filter, and lattice filter, and the Floyd-Steinberg, genetic
algorithm, and fuzzy logic applications. Table 6 demonstrates the effectiveness of our approach on both 2-adder,
1-multiplier and 2-adder, 2-multiplier systems for the filter benchmarks. Specifications of 3 and 4
general-purpose processors (PEs) are adopted for the other three application benchmarks. The performance
of PRS is evaluated by comparing the resulting schedule length with the schedule length obtained from
Benchmark | Traditional: E[mrt(G_r_avg)] | Algorithm 2: E[mrt] (θ=0.9) | Algorithm 2: E[mrt] (θ=0.8)
Biquad IIR | 70.40 | 52.64 | 52.30
Diff. Equation | 76.05 | 73.07 | 72.50
3-stage direct IIR | 41.90 | 37.70 | 38.36
All-pole Lattice | 114.45 | 111.77 | 111.40
4th order WDF | 106.73 | 106.44 | 105.98
Volterra | 204.00 | 202.44 | 202.00
5th Elliptic | 233.30 | 228.41 | 227.59
All-pole Lattice (uf=2) | 342.17 | 338.11 | 337.62
All-pole Lattice (uf=6) | 800.51 | 794.02 | 793.39
5th Elliptic (uf=4) | 800.51 | 794.02 | 793.39
Genetic application | 150.89 | 144.01 | 112.46
Fuzzy application | 18.03 | 16.08 | 16.08

Table 5: Probabilistic retiming versus average-case analysis.
the modified list scheduling technique (capable of handling probabilistic graphs). We also show the
effectiveness of template scheduling (TS) by comparing its results with other heuristics, namely local search
(LS) and as-late-as-possible scheduling (AL). The average execution times of AL and TS are very comparable
(about 12 seconds running on an UltraSparc(TM)), while LS takes a much longer time and does not give
outstanding results compared with those from TS.
In each rescheduling phase of PRS, the LS approach strives to reschedule a node to every possible legal
location (local search) and returns the assignment which yields the minimum psl(G, θ). This method is simple
and gives a good schedule; however, it is time consuming and not practical to try all possible scheduling
places in every iteration of PRS. Furthermore, a PTG needs to be temporarily updated in every trial in
order to compute the possible schedule length. On the contrary, the AL method reduces the number of trials
by attempting to schedule a task only once, at the farthest legal position in each functional unit or processor,
while the TS heuristic re-maps the scheduled node after the node with the highest degree of flexibility in
each functional unit.
Columns 4-11 of Table 6 show the results when considering the probabilistic situations with
confidence probabilities 0.8 and 0.9. Column "PL" presents the probabilistic schedule length (psl) after
modified list scheduling is applied to the benchmarks. Columns "LS", "AL", and "TS" show the resulting
psl after running PRS against those benchmarks using the re-mapping heuristics LS, AL and TS, respectively.
Among these three heuristics, the TS scheme produces better results than AL, which uses the simplest criteria.
Further, it yields results as good as, and sometimes even better than, those given by the LS approach, while TS takes
less time to select a re-scheduling position for a node. This is because in each iteration the LS method finds
the locally optimal place; however, scheduling nodes at these positions does not always result in the globally
optimal schedule length.
In Table 7, based on the system that has 2 adders and 1 multiplier (for the filter benchmarks) and 3 PEs (for
the application benchmarks), we present the comparison of results obtained from applying the following techniques
to the benchmarks: modified list scheduling, traditional rotation scheduling, probabilistic rotation scheduling
Spec. | Benchmark | #nodes | θ=0.9: PL AL LS TS | θ=0.8: PL AL LS TS
2 Adds., 1 Mul.:
Diff. Equation | 11 | 169 152 133 133 | 165 147 131 131
3-stage IIR | 12 | 188 184 151 151 | 184 179 147 147
All-pole Lattice | 15 | 229 225 142 141 | 225 220 138 138
Volterra | … | … | …
5th Elliptic | 34 | 318 298 293 293 | 314 294 289 289
… | … | … 28 28 27 | …
Genetic application | … | … | …
Fuzzy application | 24 | 52 46 43 43 | …
2 Adds., 2 Muls.:
Diff. Equation | 11 | 120 103 83 90 | 117 100 83 91
3-stage IIR | 12 | 124 120 87 87 | 120 110 83 82
All-pole Lattice | 15 | 229 225 140 139 | 225 220 136 136
Volterra | … | … | …
5th Elliptic | 34 | 288 288 274 271 | 284 274 270 267
… | … | … 26 28 38 | 24 24 24
Genetic application | … | … | …
Fuzzy application | 24 | … | …

Table 6: Comparison of the results obtained from applying modified list scheduling and probabilistic rotation scheduling (using different re-mapping heuristics) to the benchmarks. Average completion times for running the AL, LS, and TS heuristics against these benchmarks are 11.96, 42.58, and 12.46 seconds, respectively.
using the template scheduling heuristic, and traditional rotation scheduling considering average computation
times. Columns "L" and "R" show the schedule lengths obtained from applying modified list scheduling and
traditional rotation scheduling, respectively, to the benchmarks where all probabilistic computation times are
converted into their worst-case computation times. Obviously, considering the probabilistic case gives a
significant improvement in the schedule length over the worst-case scenario.
Column "PL" presents the initial schedule lengths obtained from using the modified list scheduling
approach. The results in column "PRS" are obtained from Table 6 (PRS using the template scheduling heuristic).
In column "AVG", the psls are computed by using the graphs (PTGs) retrieved from running traditional
rotation scheduling on the benchmarks where the average computation time is assigned to each node. These results
demonstrate that considering the probabilistic situation while performing rotation scheduling consistently
gives better schedules than considering only worst-case or average-case computation times.
Benchmark | #nodes | worst case: L R | θ=0.9: PL PRS AVG | θ=0.8: PL PRS AVG
Diff. Equation | 11 | 228 180 | 169 133 136 | 165 131 131
All-pole Lattice | 15 | 312 204 | 229 141 153 | 225 138 149
Volterra | … | … | … | …
5th Elliptic | 34 | 438 396 | 318 293 299 | 314 289 294
Genetic application | … | … | … | …
Fuzzy application | 24 | 69 55 | 52 45 66 | 52 43 63

Table 7: Comparing probabilistic rotation with traditional rotation running on graphs with average computation times.
6 Conclusion
We have presented scheduling and optimization algorithms which operate in probabilistic environments.
A probabilistic data-flow graph is used to model an application, taking this probabilistic nature into
account. The probabilistic retiming algorithm is used to optimize the given application when non-resource
constrained environments are assumed. Given an acceptable probability and a desired longest path computation
time, the algorithm reduces the computation time of the given probabilistic graph to the desired
value. The concept of maximum reaching time is used to calculate the timing values of the probabilistic graph.
When a limited number of processing elements is considered, the probabilistic rotation scheduling algorithm
(where the probabilistic concept and loop pipelining are integrated to optimize a task schedule) is proposed.
Based on the maximum reaching time notion, the probabilistic schedule length is used to measure the total
computation time of the tasks scheduled in one iteration. Given a probabilistic graph, the schedule
is constructed by using the probabilistic task-assignment graph, and the probabilistic schedule length is
computed with respect to a given confidence probability θ. Probabilistic rotation scheduling is applied to
the initial schedule in order to optimize it, producing the best optimized schedule with respect to
the confidence probability. The re-mapping heuristic, template scheduling, is incorporated in the algorithm
in order to find the scheduling position for each node.
--R
Development environment for horizontal microcode.
Optimal loop parallelization.
Unimodular transformations of double loops.
Impact: an architectural framework for multiple instruction issue processor.
Rotation scheduling: A loop pipelining algorithm.
Static scheduling for synthesis of DSP algorithms on various models.
Stage scheduling: a technique to reduce the register requirements of a modulo schedule.
Minimum register requirements for a modulo schedule.
Trace scheduling: a technique for global microcode compaction.
Designing and building parallel programs: concepts and tools for parallel software engineering.
Dynamic list-scheduling with finite resources.
Retiming synchronous circuitry with imprecise delays.
A comparison of multiprocessor scheduling heuristics.
Relative scheduling under timing constraints: Algorithm for high-level synthesis
Software pipelining.
Retiming synchronous circuitry.
The art of parallel programming.
A singular loop transformation framework based on non-singular matrices
Static rate-optimal scheduling of iterative data-flow programs via optimum unfolding.
Loop pipelining for scheduling multi-dimensional systems via rotation
Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing.
Iterative modulo scheduling: An algorithm for software pipelining loops.
High Performance Compilers for Parallel Computing
A loop transformation theory and an algorithm to maximize parallelism.
Fuzzy sets as a basis for a theory of possibility.
--TR
--CTR
Meikang Qiu , Zhiping Jia , Chun Xue , Zili Shao , Edwin H.-M. Sha, Voltage Assignment with Guaranteed Probability Satisfying Timing Constraint for Real-time Multiproceesor DSP, Journal of VLSI Signal Processing Systems, v.46 n.1, p.55-73, January 2007
Jose L. Aguilar , Ernst L. Leiss, Data dependent loop scheduling based on genetic algorithms for distributed and shared memory systems, Journal of Parallel and Distributed Computing, v.64 n.5, p.578-590, May 2004
Rehab F. Abdel-Kader, Resource-constrained loop scheduling in high-level synthesis, Proceedings of the 43rd annual southeast regional conference, March 18-20, 2005, Kennesaw, Georgia | retiming;probabilistic approach;loop pipelining;scheduling;rotation scheduling |
330348 | Modules via Concept Analysis. | Abstract: We describe a general technique for identifying modules in legacy code. The method is based on concept analysis, a branch of lattice theory that can be used to identify similarities among a set of objects based on their attributes. We discuss how concept analysis can identify potential modules using both positive and negative information. We present an algorithmic framework to construct a lattice of concepts from a program, where each concept represents a potential module. We define the notion of a concept partition, present an algorithm for discovering all concept partitions of a given concept lattice, and prove the algorithm correct. | Introduction
Many existing software systems were developed using programming languages and paradigms that
do not incorporate object-oriented features and design principles. These systems often have a
monolithic style that makes maintenance and further enhancement an arduous task. The software
engineer's job would be less difficult if there were tools that could transform code that does not
make explicit use of modules into functionally equivalent object-oriented code that does make use
of modules (or classes). Given a tool to (partially) automate such a transformation, legacy systems
could be modernized, making them easier to maintain. The modularization of programs offers the
added benefit of increased opportunity for code reuse.
The major difficulty with software modularization is the accurate identification of potential
modules and classes. This paper describes how a technique known as concept analysis can help
automate modularization. The main contributions of this paper are:
- We show how to apply concept analysis to the modularization problem.
- Previous work on the modularization problem has made use only of "positive" information:
Modules are identified based on properties such as "function f uses variable x" or "f has
an argument of type t". It is sometimes the case that a module can be identified by what
values or types it does not depend upon - for example, "function f uses the fields of struct
queue, but not the fields of struct stack". Concept analysis allows both positive and
negative information to be incorporated into a modularization criterion. (See Section 3.2.)
- Unlike several previously proposed techniques, the concept-analysis approach offers the ability
to "stay within the system" (as opposed to applying ad hoc methods) when the first suggested
modularization is judged to be inadequate:
  - If the proposed modularization is on too fine a scale, the user can "move up" the partition
lattice. (See Section 4.)
  - If the proposed modularization is too coarse, the user can add additional attributes to
identify concepts. (See Section 3.)
- We have implemented a prototype tool that uses concept analysis to propose modularizations
of C programs. The implementation has been tested on several small and medium-sized
examples. The largest example consists of about 28,000 lines of source code. (See Section 5.)
As an example, consider the C implementation of stacks and queues shown in Figure 1. Queues
are represented by two stacks, one for the front and one for the back; information is shifted from the
front stack to the back stack when the back stack is empty. The queue functions only make use of
the stack fields indirectly - by calling the stack functions. Although the stack and queue functions
are written in an interleaved order, we would like to be able to tease the two components apart
and make them separate classes, one a client of the other, as in the C++ code given in Figure 2.
This paper discusses a technique by which modules (in this case C++ classes) can be identified in
code that does not delineate them explicitly. The resulting information can then be supplied to a
suitable transformation tool that maps C code to C++ code, as in the aforementioned example.
Although other modularization algorithms are able to identify the same decomposition [2, 14], they
are unable to handle a variant of this example in which stack and queue are more tightly intertwined
(see Section 3.2). In Section 3.2, we show that concept analysis is able to group the code from the
latter example into separate queue and stack modules.
Section 2 introduces contexts and concept analysis, and an algorithm for building concept
lattices from contexts. Section 3 discusses a process for identifying modules in C programs based
on concept analysis. Section 4 defines the notion of a concept partition and presents an algorithm
for finding the partitions of a concept lattice. Section 5 discusses the implementation and results.
Section 6 concerns related work.
#include <stdlib.h>
#define QUEUE_SIZE 100 /* size value assumed; the original constant was lost in extraction */

struct stack { int* base; int* sp; int size; };
struct queue { struct stack* front; struct stack* back; };

struct stack* initStack(int sz)
{ struct stack* s = (struct stack*)malloc(sizeof(struct stack));
  s->base = (int*)malloc(sz * sizeof(int));
  s->sp = s->base;
  s->size = sz;
  return s; }

struct queue* initQ()
{ struct queue* q = (struct queue*)malloc(sizeof(struct queue));
  q->front = initStack(QUEUE_SIZE);
  q->back = initStack(QUEUE_SIZE);
  return q; }

int isEmptyStack(struct stack* s) { return (s->sp == s->base); }

int isEmptyQ(struct queue* q)
{ return (isEmptyStack(q->front) && isEmptyStack(q->back)); }

void push(struct stack* s, int i)
{ *(s->sp) = i; s->sp++; /* no overflow check */ }

void enq(struct queue* q, int i) { push(q->front, i); }

int pop(struct stack* s)
{ if (isEmptyStack(s))
    return -1;
  s->sp--;
  return (*(s->sp)); }

int deq(struct queue* q)
{ if (isEmptyQ(q))
    return -1;
  if (isEmptyStack(q->back))
    while (!isEmptyStack(q->front))
      push(q->back, pop(q->front));
  return pop(q->back); }

Figure 1: C code to implement a queue using two stacks.
#define QUEUE_SIZE 100 // size value assumed; the original constant was lost in extraction

class stack {
private:
  int* base;
  int* sp;
  int size;
public:
  stack(int sz) { base = new int[sz]; sp = base; size = sz; }
  int isEmpty() { return (sp == base); }
  int pop() {
    if (isEmpty())
      return -1;
    sp--;
    return (*sp);
  }
  void push(int i) { *sp = i; sp++; } // no overflow check
};

class queue {
private:
  stack *front, *back;
public:
  queue() { front = new stack(QUEUE_SIZE); back = new stack(QUEUE_SIZE); }
  int isEmpty() { return (front->isEmpty() && back->isEmpty()); }
  int deq() {
    if (isEmpty())
      return -1;
    if (back->isEmpty())
      while (!front->isEmpty())
        back->push(front->pop());
    return back->pop();
  }
  void enq(int i) { front->push(i); }
};

Figure 2: Queue and stack classes in C++.
2 Concept Analysis
Concept analysis provides a way to identify sensible groupings of objects that have common attributes.
To illustrate concept analysis, we consider the example of a crude classification of a group of
mammals: cats, chimpanzees, dogs, dolphins, humans, and whales. Suppose we consider five
attributes: four-legged, hair-covered, intelligent, marine, and thumbed. Table 1 shows which animals
are considered to have which attributes.
            | four-legged | hair-covered | intelligent | marine | thumbed
cats        |      X      |      X       |             |        |
chimpanzees |             |      X       |      X      |        |    X
dogs        |      X      |      X       |             |        |
dolphins    |             |              |      X      |   X    |
humans      |             |              |      X      |        |    X
whales      |             |              |      X      |   X    |

Table 1: A crude characterization of mammals.
In order to understand the basics of concept analysis, a few definitions are required. We follow
the presentation in [11]. A context is a triple C = (O, A, R), where O and A are finite sets (the
objects and attributes, respectively), and R is a binary relation between O and A. In the mammal
example, the objects are the different kinds of mammals, the attributes are the characteristics
four-legged, hair-covered, etc. The binary relation R is given in Table 1. For example, the tuple
(whales, marine) is in R, but (cats, intelligent) is not.
A. The mappings
attributes of X) and -(Y (the common objects of Y ) form a Galois
connection. That is, the mappings satisfy:
and
(The mappings are antimonotone and extensive.) In the mammal example, oe(fcats;
fhair-coveredg and
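To make these definitions concrete, the following C++ sketch (our own illustration, not code from the paper) represents a context as a boolean matrix and computes the mappings σ and τ for the mammal context of Table 1; all type and function names are our own assumptions:

#include <iostream>
#include <set>
#include <string>
#include <vector>

// A context (O, A, R): objects and attributes are indexed by position,
// and R[o][a] is true iff object o has attribute a.
struct Context {
    std::vector<std::string> objects;
    std::vector<std::string> attributes;
    std::vector<std::vector<bool>> R;
};

// sigma(X): the attributes common to every object in X.
std::set<int> sigma(const Context& c, const std::set<int>& X) {
    std::set<int> Y;
    for (int a = 0; a < (int)c.attributes.size(); ++a) {
        bool common = true;
        for (int o : X) common = common && c.R[o][a];
        if (common) Y.insert(a);
    }
    return Y;
}

// tau(Y): the objects that have every attribute in Y.
std::set<int> tau(const Context& c, const std::set<int>& Y) {
    std::set<int> X;
    for (int o = 0; o < (int)c.objects.size(); ++o) {
        bool common = true;
        for (int a : Y) common = common && c.R[o][a];
        if (common) X.insert(o);
    }
    return X;
}

int main() {
    Context c;
    c.objects    = {"cats", "chimpanzees", "dogs", "dolphins", "humans", "whales"};
    c.attributes = {"four-legged", "hair-covered", "intelligent", "marine", "thumbed"};
    c.R = {{1, 1, 0, 0, 0},   // cats
           {0, 1, 1, 0, 1},   // chimpanzees
           {1, 1, 0, 0, 0},   // dogs
           {0, 0, 1, 1, 0},   // dolphins
           {0, 0, 1, 0, 1},   // humans
           {0, 0, 1, 1, 0}};  // whales
    for (int a : sigma(c, {0, 1}))            // sigma({cats, chimpanzees})
        std::cout << c.attributes[a] << "\n";  // prints: hair-covered
    return 0;
}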
A concept is a pair of sets - a set of objects X (the extent) and a set of attributes Y (the intent) -
such that Y = σ(X) and X = τ(Y). That is, a concept is a maximal collection of objects
sharing common attributes. In the example, ({cats, dogs}, {four-legged, hair-covered}) is a concept,
whereas ({cats, chimpanzees}, {hair-covered}) is not a concept. A concept (X0, Y0) is a subconcept
of a concept (X1, Y1) if X0 ⊆ X1 (or, equivalently, Y1 ⊆ Y0). For instance, ({dolphins, whales},
{intelligent, marine}) is a subconcept of ({chimpanzees, dolphins, humans, whales}, {intelligent}).
The subconcept relation forms a complete lattice (the concept lattice) over the set of concepts.
The concept lattice for the mammal example is shown in Figure 3.
top ({cats, chimpanzees, dogs, dolphins, humans, whales}, ∅)
c5  ({chimpanzees, dolphins, humans, whales}, {intelligent})
c4  ({cats, chimpanzees, dogs}, {hair-covered})
c3  ({chimpanzees, humans}, {intelligent, thumbed})
c2  ({dolphins, whales}, {intelligent, marine})
c1  ({chimpanzees}, {hair-covered, intelligent, thumbed})
c0  ({cats, dogs}, {hair-covered, four-legged})
bot (∅, {four-legged, hair-covered, intelligent, marine, thumbed})

Figure 3: The concept lattice (and accompanying key) for the mammal example.
The fundamental theorem for concept lattices [12] relates subconcepts and superconcepts as
follows: the join (least common superconcept) of a family of concepts (Xi, Yi), i ∈ I, is

  ⊔_{i∈I} (Xi, Yi) = ( τ(σ(∪_{i∈I} Xi)), ∩_{i∈I} Yi )

The significance of the theorem is that the least common superconcept of a set of concepts can
be computed by intersecting their intents, and by taking the union of the extents and then finding
the common objects of the set of common attributes of that resulting union. An example of the
application of the fundamental theorem is as follows:
  c1 ⊔ c2 = ( τ(σ({chimpanzees} ∪ {dolphins, whales})),
              {hair-covered, intelligent, thumbed} ∩ {intelligent, marine} )
          = ( τ({intelligent}), {intelligent} )
          = ( {chimpanzees, dolphins, humans, whales}, {intelligent} )

This computation corresponds to the fact that c1 ⊔ c2 = c5 in the lattice shown in Figure 3.
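A direct, if unofficial, rendering of the theorem for the binary case is sketched below in C++; the encoding of the relation (one attribute set per object) and all names are our own assumptions:

#include <set>
#include <vector>

using Set = std::set<int>;
struct Concept { Set extent, intent; };

// rows[o] is the attribute set of object o; numAttrs is |A|.
struct Relation { std::vector<Set> rows; int numAttrs; };

static Set sigma(const Relation& R, const Set& X) {   // common attributes of X
    Set Y;
    for (int a = 0; a < R.numAttrs; ++a) {
        bool common = true;
        for (int o : X) common = common && R.rows[o].count(a);
        if (common) Y.insert(a);
    }
    return Y;
}

static Set tau(const Relation& R, const Set& Y) {     // common objects of Y
    Set X;
    for (int o = 0; o < (int)R.rows.size(); ++o) {
        bool ok = true;
        for (int a : Y) ok = ok && R.rows[o].count(a);
        if (ok) X.insert(o);
    }
    return X;
}

// Join by the fundamental theorem: intersect the intents, and close the
// union of the extents with tau(sigma(...)).
Concept join(const Relation& R, const Concept& c1, const Concept& c2) {
    Set unionExt = c1.extent;
    unionExt.insert(c2.extent.begin(), c2.extent.end());
    Concept c;
    c.extent = tau(R, sigma(R, unionExt));
    for (int a : c1.intent)
        if (c2.intent.count(a)) c.intent.insert(a);
    return c;
}

With the mammal relation, joining the concepts labelled c1 and c2 above returns exactly the extent and intent of c5.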
There are several algorithms for computing a concept lattice from a given context [11]. We
describe a simple bottom-up algorithm here.

An important fact about concepts and contexts used in the algorithm is that, given a set of
objects X, the smallest concept whose extent contains X is (τ(σ(X)), σ(X)). Thus, the bottom
element of the concept lattice is (τ(σ(∅)), σ(∅)) - the concept whose extent consists of all those
objects that have all the attributes (often the empty set, as in our example).

The initial step of the algorithm is to compute the bottom of the concept lattice. The next
step is to compute the atomic concepts - the smallest concepts whose extents contain each of the
objects treated as a singleton set. The atomic concepts correspond to those nodes in the concept
lattice reachable from the bottom node in one step. For example, the atomic concept for cats in
the mammal example is

  (τ(σ({cats})), σ({cats})) = (τ({four-legged, hair-covered}), {four-legged, hair-covered})
                            = ({cats, dogs}, {four-legged, hair-covered}),

which is concept c0 in Figure 3.
The algorithm then closes the set of atomic concepts under join: Initially, a worklist is formed
containing all pairs of atomic concepts (c, c') such that c ≰ c' and c' ≰ c. While the worklist is not
empty, remove an element (c, c') of the worklist and compute c'' = c ⊔ c' using the fundamental
theorem of concept analysis. If c'' is a concept that is yet to be discovered, then add all pairs of
concepts (c'', c''') such that c''' ≰ c'' and c'' ≰ c''' to the worklist. The process is repeated until the
worklist is empty.
The iterative step of the concept-building algorithm thus repeatedly joins previously discovered concepts until no new concepts emerge.
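As one concrete (and entirely unofficial) rendering of this bottom-up construction, the C++ sketch below closes the bottom and the atomic concepts under pairwise joins with a worklist; the encoding and all names are our own assumptions - the authors' implementation, as Section 5 notes, was written in Standard ML:

#include <set>
#include <utility>
#include <vector>

using Set = std::set<int>;
struct Concept { Set extent, intent; };
using Rows = std::vector<Set>;            // Rows[o] = attribute set of object o

static Set sigma(const Rows& R, const Set& X, int numAttrs) {
    Set Y;
    for (int a = 0; a < numAttrs; ++a) {
        bool common = true;
        for (int o : X) common = common && R[o].count(a);
        if (common) Y.insert(a);
    }
    return Y;
}
static Set tau(const Rows& R, const Set& Y) {
    Set X;
    for (int o = 0; o < (int)R.size(); ++o) {
        bool ok = true;
        for (int a : Y) ok = ok && R[o].count(a);
        if (ok) X.insert(o);
    }
    return X;
}
static bool leq(const Concept& c, const Concept& d) {  // subconcept test on extents
    for (int o : c.extent)
        if (!d.extent.count(o)) return false;
    return true;
}

// Bottom-up construction: start from the bottom and the atomic concepts,
// then close the set under pairwise joins via a worklist.
std::vector<Concept> buildLattice(const Rows& R, int numAttrs) {
    auto close = [&](const Set& X) {      // smallest concept whose extent contains X
        Concept c;
        c.intent = sigma(R, X, numAttrs);
        c.extent = tau(R, c.intent);
        return c;
    };
    std::vector<Concept> found = { close(Set{}) };            // the bottom element
    std::vector<std::pair<Concept, Concept>> worklist;
    auto record = [&](const Concept& c) {
        for (const Concept& d : found)
            if (d.extent == c.extent) return;                 // already discovered
        for (const Concept& d : found)
            if (!leq(c, d) && !leq(d, c))                     // incomparable pair
                worklist.push_back({c, d});
        found.push_back(c);
    };
    for (int o = 0; o < (int)R.size(); ++o)                   // atomic concepts
        record(close(Set{o}));
    while (!worklist.empty()) {
        std::pair<Concept, Concept> p = worklist.back();
        worklist.pop_back();
        Set u = p.first.extent;
        u.insert(p.second.extent.begin(), p.second.extent.end());
        record(close(u));                 // the join, by the fundamental theorem
    }
    return found;
}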
3 Using Concept Analysis to Identify Potential Modules
The main idea of this paper is to apply concept analysis to the problem of identifying potential
modules within monolithic code. An outline of the process is as follows:
1. Build a context, where objects are functions defined in the input program and attributes are
properties of those functions. The attributes could be any of several properties relating the
functions to data structures. Attributes are discussed in more detail below.
2. Construct the concept lattice from the context, as described in Section 2.
3. Identify concept partitions - collections of concepts whose extents partition the set of objects.
Each concept partition corresponds to a possible modularization of the input program.
Concept partitions are discussed in Section 4.
3.1 Applying concept analysis to the stack and queue example
Consider the stack and queue example from the introduction. In this section, we will demonstrate
how concept analysis can be used to identify the module partition indicated by the C++ code in
Figure 2 (page 4).
First, we define a context. Let the objects be φ0, ..., φ7 and the attributes be α0, ..., α5,
where the φi's and αi's correspond to functions and properties of functions as indicated by the
tables below:

  φ0  initStack          α0  return type is struct stack *
  φ1  initQ              α1  return type is struct queue *
  φ2  isEmptyStack       α2  has argument of type struct stack *
  φ3  isEmptyQ           α3  has argument of type struct queue *
  φ4  push               α4  uses fields of struct stack
  φ5  enq                α5  uses fields of struct queue
  φ6  pop
  φ7  deq
The context relation for the stack and queue example is then:

                  α0   α1   α2   α3   α4   α5
  initStack        X                   X
  initQ                 X                   X
  isEmptyStack               X         X
  isEmptyQ                        X         X
  push                       X         X
  enq                             X         X
  pop                        X         X
  deq                             X         X
The next step is to build the concept lattice from the context, as described in Section 2. The
concept lattice for the stack and queue example, together with a key identifying lattice-node labels
with corresponding concepts, is shown below:

  top ({initStack, initQ, isEmptyStack, isEmptyQ, push, enq, pop, deq}, ∅)
  c5  ({initQ, isEmptyQ, enq, deq}, {α5})                queue concept
  c4  ({initStack, isEmptyStack, push, pop}, {α4})       stack concept
  c3  ({isEmptyQ, enq, deq}, {α3, α5})
  c2  ({initQ}, {α1, α5})
  c1  ({isEmptyStack, push, pop}, {α2, α4})
  c0  ({initStack}, {α0, α4})
  bot (∅, {α0, α1, α2, α3, α4, α5})                      empty concept
One of the advantages of using concept analysis is that multiple possibilities for modularization
are offered. In addition, the relationships among concepts in the concept lattice also offer insight
into the structure within proposed modules. For example, at the atomic level, the initialization
functions (concepts c0 and c2) are distinct concepts from the other functions (concepts c1 and c3). The
former two concepts correspond to constructors and the latter two to member functions. Concept
c4 corresponds to a stack module and c5 corresponds to a queue module. The subconcept relationships
indicate that the stack concept consists of a constructor concept and a member-function concept.
3.2 Adding complementary attributes to "untangle" code
The stack and queue example, as considered thus far, has not demonstrated the full power that
concept analysis brings to the modularization problem. It is relatively straightforward to separate
the code shown in Figure 1 into two modules, and techniques such as those described in [2, 14]
will also create the same grouping. In essence, the concept analysis described above emulates
these techniques. This shows that concept analysis encompasses previously defined methods for
modularization. We now show that concept analysis offers the possibility to go beyond previously
defined methods: It offers the ability to tease apart code that is, in some sense, more "tangled".
To illustrate what we mean by more tangled code, consider a slightly modified stack and queue
example. Suppose the functions isEmptyQ and enq have been written so that they modify the stack
fields directly (see Figure 4), rather than calling isEmptyStack and push. While this may be more
efficient, it makes the code more difficult to maintain - simple changes in the stack implementation
may require changes in the queue code. Furthermore, it complicates the process of identifying
separate modules. If we apply concept analysis using the same set of attributes as we did above,
attribute ff 4
("uses fields of struct stack") now applies to isEmptyQ and enq. Table 2 shows the
context relation for the tangled stack and queue code with the original sets of objects and attributes.
The resulting concept lattice is shown in Figure 5. Observe that concept c5 can still be identified
with a queue module, but none of the concepts coincide with a stack module. In particular, even
though the extent of c0 is {initStack} and the extent of c2 is {isEmptyStack, push, pop}, the concept
c7 = c0 ⊔ c2 is not the stack concept: c7 consists of initStack, isEmptyStack, isEmptyQ, push,
enq, and pop, which mixes the stack operations with some, but not all, of the queue operations.
int isEmptyQ(struct queue* q)
{ return (q->front->sp == q->front->base && q->back->sp == q->back->base); }

void enq(struct queue* q, int i) { *(q->front->sp++) = i; }

Figure 4: The queue and stack example revisited: "Tangled" C code.
                  α0   α1   α2   α3   α4   α5
  initStack        X                   X
  initQ                 X                   X
  isEmptyStack               X         X
  isEmptyQ                        X    X    X
  push                       X         X
  enq                             X    X    X
  pop                        X         X
  deq                             X         X

Table 2: The context relation for the "tangled" stack and queue example.
top ({initStack, initQ, isEmptyStack, isEmptyQ, push, enq, pop, deq}, ∅)
c7  ({initStack, isEmptyStack, isEmptyQ, push, enq, pop}, {α4})
c5  ({initQ, isEmptyQ, enq, deq}, {α5})                queue concept
c3  ({isEmptyQ, enq, deq}, {α3, α5})
c6  ({isEmptyQ, enq}, {α3, α4, α5})
c2  ({isEmptyStack, push, pop}, {α2, α4})
c1  ({initQ}, {α1, α5})
c0  ({initStack}, {α0, α4})
bot (∅, {α0, α1, α2, α3, α4, α5})

Figure 5: The concept lattice (and corresponding key) for the "tangled" stack and queue example,
using the attributes listed on page 8.
The problem is that the attributes listed on page 8 reflect only "positive" information. A
distinguishing characteristic of the stack operations is that they depend on the fields of struct
stack but not on the fields of struct queue. To "untangle" these components, we need to augment
the set of attributes with "negative" information - in this case the complement of "uses fields of
struct queue" (i.e., "does not use fields of struct queue"). The revised set of attributes and the
corresponding context relation are shown below:
  α0  return type is struct stack *
  α1  return type is struct queue *
  α2  has argument of type struct stack *
  α3  has argument of type struct queue *
  α4  uses fields of struct stack
  α5  uses fields of struct queue
  α6  does not use fields of struct queue

                  α0   α1   α2   α3   α4   α5   α6
  initStack        X                   X         X
  initQ                 X                   X
  isEmptyStack               X         X         X
  isEmptyQ                        X    X    X
  push                       X         X         X
  enq                             X    X    X
  pop                        X         X         X
  deq                             X         X
The resulting concept lattice (and corresponding key) is now:

top ({initStack, initQ, isEmptyStack, isEmptyQ, push, enq, pop, deq}, ∅)
c7  ({initStack, isEmptyStack, isEmptyQ, push, enq, pop}, {α4})
c5  ({initQ, isEmptyQ, enq, deq}, {α5})                queue concept
c4  ({initStack, isEmptyStack, push, pop}, {α4, α6})   stack concept
c3  ({isEmptyQ, enq, deq}, {α3, α5})
c6  ({isEmptyQ, enq}, {α3, α4, α5})
c2  ({isEmptyStack, push, pop}, {α2, α4, α6})
c1  ({initQ}, {α1, α5})
c0  ({initStack}, {α0, α4, α6})                        initStack concept
bot (∅, {α0, α1, α2, α3, α4, α5, α6})                  empty concept
This concept lattice contains all of the concepts in the concept lattice from Figure 5, as well as an
additional concept, c4, which corresponds to a stack module.
This modularization identifies isEmptyQ and enq as being part of a queue module that is
separate from a stack module, even though these two operations make direct use of stack fields.
This raises some issues for the subsequent C-to-C++ code-transformation phase. Although one
might be able to devise transformations to remove these dependences of queue operations on the
private members of the stack class (e.g., by introducing appropriate calls on member functions
of the stack class), a more straightforward C-to-C++ transformation would simply use the C++
friend mechanism, as shown in Figure 6.
3.3 Other choices for attributes
A concept is a maximal collection of objects having common properties. A cohesive module is a
collection of functions (perhaps along with a data structure) having common properties. Therefore,
when employing concept analysis to the modularization problem, it is reasonable to have objects
#define QUEUE_SIZE 100

class queue;

class stack {
    friend class queue;
private:
    int* base;
    int* sp;
    int size;
public:
    stack(int sz) { base = sp = new int[sz]; size = sz; }
    int isEmpty() { return (sp == base); }
    int pop() {
        if (isEmpty())
            return -1;
        return (*--sp);
    }
    void push(int i) { *sp++ = i; } /* no overflow check */
};

class queue {
private:
    stack *front, *back;
public:
    queue() { front = new stack(QUEUE_SIZE); back = new stack(QUEUE_SIZE); }
    int isEmpty() {
        return (front->sp == front->base && back->sp == back->base);
    }
    void enq(int i) { *(front->sp++) = i; }
    int deq() {
        if (isEmpty())
            return -1;
        if (back->isEmpty())
            while (!front->isEmpty())
                back->push(front->pop());
        return back->pop();
    }
};

Figure 6: Queue and stack classes in C++ with friends.
correspond to functions. However, we have more flexibility when it comes to attributes. There are a
wide variety of attributes we might choose in an effort to identify concepts (modules) in a program.
Our examples have used attributes that reflect the way struct data types are used. But in some
instances, it may be useful to use attributes that capture other properties. Other possibilities for
attributes include the following:
• Variable-usage information: Related functions can sometimes be identified by their use of
common global variables. An attribute capturing this information might be of the form "uses
global variable x".
• Dataflow and slicing information can be useful in identifying modules. Attributes capturing
this information might be of the form "may use a value that flows from statement s" or "is
part of the slice with respect to statement s".
• Information obtained from type inferencing: Type inference can be used to uncover distinctions
between seemingly identical types [10, 9]. For example, if f is a function declared to
be of type int × int → bool, type inference might discover that f's most general type is
of the form α × β → bool. This reveals that the type of f's first argument is distinct from
the type of its second argument (even though they had the same declared type). Attributes
might then be of the form "has argument of type α" rather than simply "has argument of
type int". This would prevent functions from being grouped together merely because of
superficial similarities in the declared types of their arguments.
• Disjunctions of attributes: The user may be aware of certain properties of the input program,
perhaps the similarity of two data structures. Disjunctive attributes allow the user to specify
properties of the form "a1 or a2". For example, "uses fields of stack or uses fields of queue".
Any or all of these attributes could be used together in one context. This highlights one of the
advantages of the concept-analysis approach to modularization: It represents not just a single
algorithm for modularization; rather, it provides a framework for obtaining a collection of different
modularization algorithms.
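One way to realise this framework - ours, not the paper's - is to treat an attribute as a named predicate over whatever facts a front end collects about each function, so that complements, disjunctions and entirely new attribute families plug into the same context-building loop. The C++ sketch below is illustrative only; all names (FunctionInfo, usesFieldsOf, and so on) are assumptions:

#include <functional>
#include <set>
#include <string>
#include <vector>

// Hypothetical per-function facts that a front end might collect.
struct FunctionInfo {
    std::string name;
    std::set<std::string> structFieldsUsed;   // struct types whose fields are accessed
    std::set<std::string> globalsUsed;
    std::set<std::string> argTypes;
};

// An attribute is just a named predicate over those facts, so different
// modularization heuristics become different attribute sets.
struct Attribute {
    std::string name;
    std::function<bool(const FunctionInfo&)> holds;
};

Attribute usesFieldsOf(const std::string& t) {
    return { "uses fields of struct " + t,
             [t](const FunctionInfo& f) { return f.structFieldsUsed.count(t) > 0; } };
}
Attribute complement(const Attribute& a) {   // "negative" information, as in Section 3.2
    return { "not (" + a.name + ")",
             [a](const FunctionInfo& f) { return !a.holds(f); } };
}
Attribute disjunction(const Attribute& a, const Attribute& b) {
    return { a.name + " or " + b.name,
             [a, b](const FunctionInfo& f) { return a.holds(f) || b.holds(f); } };
}

// Building the context relation is then one loop over functions and attributes.
std::vector<std::vector<bool>> buildRelation(const std::vector<FunctionInfo>& fns,
                                             const std::vector<Attribute>& atts) {
    std::vector<std::vector<bool>> R(fns.size(), std::vector<bool>(atts.size()));
    for (size_t o = 0; o < fns.size(); ++o)
        for (size_t a = 0; a < atts.size(); ++a)
            R[o][a] = atts[a].holds(fns[o]);
    return R;
}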
4 Concept and Module Partitions
Thus far, we have discussed how a concept lattice can be built from a program in such a way that
concepts represent potential modules. However, because of overlaps between concepts, not every
group of concepts represents a potential modularization. Feasible modularizations are partitions:
collections of modules that are disjoint, but include all the functions in the input code. To limit
the number of choices that a software engineer would be presented with, it is helpful to identify
such partitions.
We now formalize the notion of a concept partition and present an algorithm to identify such
partitions from a concept lattice.
4.1 Concept partitions
Given a context (O, A, R), a concept partition is a set of concepts whose extents form a partition of
O. That is, a set of concepts {(X1, Y1), ..., (Xk, Yk)} is a concept partition if and only if the extents of the
concepts cover the object set (i.e., X1 ∪ ... ∪ Xk = O) and are pairwise disjoint (Xi ∩ Xj = ∅ for
i ≠ j). In terms of modularizing a program, a concept partition corresponds to a collection
of modules such that every function in the program is associated with exactly one module.
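Checking the partition property is straightforward; below is a minimal C++ sketch (ours, with extents encoded as sets of object indices over O = {0, ..., n-1}):

#include <set>
#include <vector>

using Set = std::set<int>;

// A collection of concept extents is a partition of the object set
// iff the extents are pairwise disjoint and together cover O.
bool isConceptPartition(const std::vector<Set>& extents, int numObjects) {
    std::vector<int> seen(numObjects, 0);
    for (const Set& X : extents)
        for (int o : X)
            if (o < 0 || o >= numObjects || ++seen[o] > 1)
                return false;              // object outside O, or covered twice
    for (int count : seen)
        if (count != 1) return false;      // some object not covered
    return true;
}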
As a simple example, consider the concept lattice shown on page 11. The concept partitions
for that context are listed below:

  P1 = {c0, c1, c2, c3}
  P2 = {c4, c1, c3}
  P3 = {c0, c2, c5}
  P4 = {c4, c5}
  P5 = {top}

P1 is the atomic partition. P2 and P3 are combinations of atomic concepts and larger concepts. P4
consists of one stack module and one queue module. P5 is the trivial partition: All functions are
placed in one module.
By looking at concept partitions, the software engineer can eliminate nonsensical possibilities.
In the preceding example, c7 does not appear in any partition - if it did, then to what module
(i.e., nonoverlapping concept) would deq belong?
An atomic partition of a concept lattice is a concept partition consisting of exactly the atomic
concepts. (Recall that the atomic concepts are the concepts with smallest extent containing each
of the objects treated as a singleton set. For instance, see the atomic concepts in the mammal
example on page 7.) A concept lattice need not have an atomic partition. For example, the lattice
in Figure 3 does not have an atomic partition: The atomic concepts are c0, c1, c2, and c3;
however, c1 and c3 overlap - the object "chimpanzees" is in the extent of both concepts.
The atomic partition of a concept lattice is often a good starting point for choosing a modularization
of a program. In order to develop tools to work with concept partitions, it is useful to be
able to guarantee the existence of atomic partitions. This can be achieved by augmenting a context
with negative information (similar to what we did in Section 3.2).
Given a context (O, A, R), a complement of an attribute a ∈ A is an attribute ā such that
(o, ā) ∈ R if and only if (o, a) ∉ R. That is, ā is an attribute of exactly the objects that do not have
attribute a. For example, in the attribute table on page 11, α5 and α6 are complements.

Given a context C = (O, A, R), the complemented extension of C is the context
C* = (O, A ∪ A*, R ∪ R*), where A* = { ā : a ∈ A } and R* = { (o, ā) : (o, a) ∉ R }. A complemented
extension of a context is the original context with the attribute set augmented by the addition of
a complement for every original attribute.
It is straightforward to see that every context has a complemented extension, and that the
concept lattice of a complemented extension has an atomic partition. Using this fact, we can now
present an algorithm to find all the partitions of a concept lattice.
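A complemented extension is equally mechanical to build. In the boolean-matrix encoding used earlier, a minimal sketch (ours, assuming the fully complemented form in which every attribute gains a complement) is:

#include <vector>

// Context as a boolean matrix: R[o][a] is true iff object o has attribute a.
using Relation = std::vector<std::vector<bool>>;

// The complemented extension doubles the attribute set: for each attribute a,
// a complement is appended that holds exactly when a does not.
Relation complementedExtension(const Relation& R) {
    Relation Rc(R.size());
    for (size_t o = 0; o < R.size(); ++o) {
        Rc[o] = R[o];                                   // the original attributes ...
        for (bool bit : R[o]) Rc[o].push_back(!bit);    // ... then their complements
    }
    return Rc;
}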
4.2 Finding partitions from a concept lattice
Given a concept lattice, we define the following relations on its elements: The set of immediate
suprema of concept x, denoted by sups(x), is the set of lattice elements y such that x < y and there
is no element z for which x < z < y. The set of ancestors of x, denoted by ancs(x), is the set of
lattice elements y such that y ≥ x and y ≠ x.
[1]  A <- sups(bot)                   // the atomic partition
[2]  P <- {A}                         // the partitions found so far
[3]  W <- {A}                         // worklist of partitions
[4]  while W != {} do
[5]    remove some p from W
[6]    for each c in p
[7]      for each c' in sups(c)
[8]        p' <- {c'} + {c'' in p : the extents of c' and c'' are disjoint}
[9]        if the extents of the concepts in p' cover O then   // p' is a partition
[10]         if p' is not already in P then
[11]           P <- P + {p'}
[12]           W <- W + {p'}
[13]         endif
[14]       endif
[15]     endfor
[16]   endfor
[17] endwhile

Figure 7: An algorithm to find the partitions of a concept lattice.
The algorithm builds up a collection of all the partitions of a concept lattice. Let P be the
collection of partitions that we are forming. Let W be a worklist of partitions. We begin with
the atomic partition, which is the set of immediate suprema of the bottom element of the concept
lattice. P and W are both initialized to the singleton set containing the atomic partition.
The algorithm works by considering partitions from worklist W until W is empty. For each
partition removed from W , new partitions are formed (when possible) by selecting a concept of
the partition, choosing a supremum of that concept, adding it to the partition, and removing
overlapping concepts. The algorithm is given in Figure 7.
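A possible C++ transliteration of Figure 7 (ours; the lattice is assumed to be given as a vector of extents plus an immediate-suprema relation, and all names are assumptions) is sketched below:

#include <set>
#include <vector>

using Set = std::set<int>;
using Partition = std::set<int>;          // a partition = a set of concept indices

static bool disjoint(const Set& a, const Set& b) {
    for (int x : a)
        if (b.count(x)) return false;
    return true;
}

// extents[i]: the extent of concept i; sups[i]: the immediate suprema of i;
// atomic: the atomic partition; numObjects = |O|.
std::vector<Partition> findPartitions(const std::vector<Set>& extents,
                                      const std::vector<std::vector<int>>& sups,
                                      const Partition& atomic,
                                      int numObjects) {
    std::vector<Partition> P = { atomic };     // partitions found so far
    std::vector<Partition> W = { atomic };     // worklist
    while (!W.empty()) {
        Partition p = W.back();
        W.pop_back();
        for (int c : p) {
            for (int c2 : sups[c]) {
                // Add the supremum c2 and keep only the members of p whose
                // extents are disjoint from extent(c2).
                Partition q = { c2 };
                std::size_t covered = extents[c2].size();
                for (int c3 : p) {
                    if (disjoint(extents[c3], extents[c2])) {
                        q.insert(c3);
                        covered += extents[c3].size();
                    }
                }
                if ((int)covered != numObjects) continue;  // q is not a partition
                bool known = false;
                for (const Partition& r : P)
                    if (r == q) { known = true; break; }
                if (!known) { P.push_back(q); W.push_back(q); }
            }
        }
    }
    return P;
}

Note that the simple coverage test suffices because the members of each candidate q are pairwise disjoint by construction, so q covers O exactly when the extent sizes sum to |O|.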
In the worst case, the number of partitions can be exponential in the number of concepts.
In such a case (or any case where the number of partitions is large), it is possible to adapt the
algorithm to work interactively. After each new partition is discovered, the algorithm would pause
for the user to consider that partition. If it is on too fine a scale, the user would allow the algorithm
to iterate further to find coarser-grained partitions.
5 Implementation and Results
We have implemented a prototype tool that employs concept analysis to propose modularizations
of C programs. It is written in Standard ML of New Jersey (version 109.24) and runs on a Sun
under Solaris 2.5.1.
The prototype takes a C program as input. The default object set is the set of all functions
defined in the input program. The default attribute set consists of one attribute of the form "uses
the fields of struct t" for each user-defined struct type (or equivalent typedef) in the input
program. The user has the option to include attributes of the form "has a parameter or return
type of type t." The context can be formed as is, or can be formed in fully complemented form,
where for each user-defined struct type, the attributes of the form"does not use fields of struct
t" (or "does not have a parameter or return type of type t") are included in the attribute set of
the context.
The context is then fed into a concept analyzer, which builds the concepts bottom up as described
in Section 2. The user can then view the concept lattice or feed the lattice into the parti-
tioner, which computes (depending on the user's choice) all possible partitions or one partition at
a time.
The examples in this paper were analyzed by the implementation. Preliminary results on
larger examples appear promising. In particular, we have used the prototype tool on the SPEC 95
benchmark go ("The Many Faces of Go"). The program consists of roughly 28,000 lines of C code,
372 functions, and 8 user-defined data types. The concept lattice for the fully complemented context
associated with these functions and data types consists of thirty-four concepts and was constructed
in seconds of user time (on a SPARCstation 10 with 64MB of RAM). The partitioner identified
possible partitions of the lattice in roughly the same amount of time.
5.1 Case study: chull.c
chull.c is a program taken from a computational-geometry library that computes the convex hull
of a set of vertices in the plane. The program consists of roughly one thousand lines of C code. It
has twenty-six functions and three user-defined struct data types: tVertex, tEdge, and tFace,
representing vertices, edges, and faces, respectively. The context fed into the concept analyzer
consisted of the twenty-six functions as the object set, six attributes ("uses fields of tVertex",
"does not use fields of tVertex", etc.), and the binary relation indicating whether or not function
f uses fields of one of the struct types. The concept analyzer built twenty-eight concepts and the
corresponding lattice in roughly one second of user time. The lattice appears in Figure 8. The
partitioner computed the 153 possible partitions of the concept lattice in roughly two seconds.
Figure 8: The concept lattice derived from chull.c.
The atomic partition groups the functions into the eight concepts listed in Table 3. This
partition indicates that the code does not cleanly break into three modules (e.g., one for each struct
type). However, assuming that the goal is to transform chull.c into an equivalent C++ program,
the eight concepts do suggest a possible modularization based on the three types: Concepts 2,
3, and 4 would correspond to three classes, for vertex, edge, and face, respectively; concept 1
would correspond to a "driver" module; and the functions in concepts 5 through 8 would form four
"friend" modules, where each of the functions would be declared to be a friend of the appropriate
classes.
  concept number   user-defined struct types   functions
  1                (none)                      ...
  2                tVertex                     MakeVertex, ReadVertices, Collinear,
                                               ConstructHull, PrintVertices
  3                tEdge                       MakeEdge
  4                tFace                       CleanFaces, MakeFace
  5                tVertex, tEdge              CleanVertices, PrintEdges
  6                tVertex, tFace              Volume6, Volumed, Convexity, PrintFaces
  7                tEdge, tFace                MakeCcw, CleanEdges, Consistency
  8                tVertex, tEdge, tFace       Print, Tetrahedron, AddOne, MakeStructs, Checks

Table 3: The atomic partition of the concept lattice derived for chull.c.
6 Related Work
Although there is a growing body of literature concerning module and abstract-data-type recovery
from non-modular code (e.g., [13, 6]), we are unaware of previous work on the problem involving
the use of concept analysis. Because modularization reflects a design decision that is inherently
subjective, it is unlikely that the modularization process can ever be fully automated. Given that
some user interaction will be required, the concept-analysis approach offers certain advantages over
other previously proposed techniques, namely, the ability to "stay within the system" (as opposed
to applying ad hoc methods) when the user judges that the modularization that the system suggests
is unsatisfactory. If the proposed modularization is on too fine a scale, the user can "move up" the
partition lattice. (See Section 4.) If the proposed modularization is too coarse, the user can add
additional attributes to generate more concepts. (See Section 3.) Furthermore, concept analysis
really provides a family of modularization algorithms: Rather than offering one fixed technique,
different attributes can be chosen for different conditions.
The work most closely related to ours is that of Liu and Wilde [8], which makes use of a table
that is very much like the object-attribute relation of a context. However, whereas our work uses
concept analysis to analyze such tables, Liu and Wilde propose a less powerful analysis. They
also propose that the user intervene with ad hoc adjustments if the results of modularization are
unsatisfactory. As explained above, the concept-analysis approach can naturally generate a variety
of possible decompositions (i.e., different collections of concepts that partition the set of objects).
The concept-analysis approach is more general than that of Canfora et al. [2], which identifies
abstract data types by analyzing a graph that links functions to their argument types and return
types. The same information can be captured using a context, where the objects are the functions,
and the attributes are the possible argument and return types (for example, attributes α0 through α3
in the attribute table on page 8). By adding attributes that indicate whether fields of compound
data types are used in a function, as is done in the example used in this paper, concept-analysis
becomes a more powerful tool for identifying potential modules than the technique described in [2].
The work described in [3] and [4] expands on the abstract-data-type identification technique described
in [2]: Call and dominance information is used to introduce a hierarchical nesting structure
to modules. It may be possible to combine the techniques from [3] and [4] with the concept-analysis
approach of the present paper.
The concept-analysis approach is also more general than the technique used in the OBAD tool [14],
which is designed to identify abstract data types in C programs. OBAD analyzes a graph that
consists of nodes representing functions and struct types, and edges representing the use of internal
fields of a struct type by a function. This recovers similar information to a concept analysis in
which the attributes are exactly those indicating the use of fields of struct types (for example,
α4 and α5 in the attribute table in Section 3.1, page 8). However, OBAD will stumble on tangled code like that in
the example discussed in Section 3.2. The additional discriminatory power of the concept-analysis
approach is due to the fact that it is able to exploit both positive and negative information.
In contrast with the approach to identifying objects described in [1], our technique is aimed at
analyzing relationships among functions and types to identify classes. In [1], the aim is to identify
objects that link functions to specific variables. A similar effect can be achieved via concept analysis
by introducing one attribute for each actual parameter.
There has been a certain amount of work involving the use of cluster analysis to identify potential
modules (e.g., [5, 1, 7]). This work (implicitly or explicitly) involves the identification of
potential modules by determining a similarity measure among pairs of functions. We are currently
investigating the link between concept analysis and cluster analysis.
Concept analysis has previously been applied in a software-engineering tool, albeit for a problem
much different from modularization: the NORA/RECS tool uses concept analysis to identify
conflicts in software-configuration information [11].
Acknowledgements
This work was supported in part by the National Science Foundation under grant CCR-9625667
and by the Defense Advanced Research Projects Agency under ARPA Order No. 8856 (monitored
by the Office of Naval Research under contract N00014-92-J-1937).
The comments of Manuvir Das on the work reported in the paper are greatly appreciated.
--R
A greedy approach to object identification in imperative code.
Experiments in identifying reusable abstract data types in program code.
Program comprehension through the identification of abstract data types.
System structure analysis: Clustering with data bindings.
Evaluating process clusters to support automatic program understanding.
Identifying objects in a conventional procedural language: An example of data design recovery.
Practical program understanding with type inference.
Program generalization for software reuse: From C to C++.
Reengineering of configurations based on mathematical concept analysis.
Restructuring lattice theory: An approach based on hierarchies of concepts.
Second Working Conference on Reverse Engineering.
Recovering abstract data types and object instances from a conventional procedural language.
--TR
--CTR
Peter Wendorff, A formal approach to the assessment and improvement of terminological models used in information systems engineering, ACM SIGSOFT Software Engineering Notes, v.26 n.5, Sept. 2001
Fuh-Gwo Chen , Ting-Wei Hou, Instruction-coated translation: an approach to restructure directly threaded interpreters with low cohesion, ACM SIGPLAN Notices, v.41 n.8, August 2006
Kamran Sartipi , Kostas Kontogiannis, A user-assisted approach to component clustering, Journal of Software Maintenance: Research and Practice, v.15 n.4, p.265-295, July
Andrew Sutton , Jonathan I. Maletic, Recovering UML class models from C++: A detailed explanation, Information and Software Technology, v.49 n.3, p.212-229, March, 2007
M. Di Penta , M. Neteler , G. Antoniol , E. Merlo, A language-independent software renovation framework, Journal of Systems and Software, v.77 n.3, p.225-240, September 2005
Andreas Christl , Rainer Koschke , Margaret-Anne Storey, Automated clustering to support the reflexion method, Information and Software Technology, v.49 n.3, p.255-274, March, 2007
Rainer Koschke , Gerardo Canfora , Jrg Czeranski, Revisiting the IC approach to component recovery, Science of Computer Programming, v.60 n.2, p.171-188, April 2006
Gerardo CanforaHarman , Massimiliano Di Penta, New Frontiers of Reverse Engineering, 2007 Future of Software Engineering, p.326-341, May 23-25, 2007 | concept analysis;software restructuring;design recovery;reverse engineering;software migration;modularization |
330351 | Managing Process Inconsistency Using Viewpoints. | AbstractThis paper discusses the notion of process inconsistency and suggests that inconsistencies in software processes are inevitable and sometimes desirable. We present an approach to process analysis that helps discover different perceptions of a software process and that supports the discovery of process inconsistencies and process improvements stimulated by these inconsistencies. By analogy with viewpoints for requirements engineering that allow multiple perspectives on a software system specification to be managed, we have developed the notion of process viewpoints that provide multiperspective descriptions of software processes. A process viewpoint includes a statement of focus or world-view, a set of sources of process information, a process description and a set of organizational concerns that represent goals or constraints on the process analysis. We present a description and rationale of process viewpoints, discuss the process of applying process viewpoints for process understanding and illustrate the overall approach using part of a case study drawn from industrial processes that are part of a safety-critical system development. | Introduction
Over the past few years, there has been a growing awareness that organisations can leverage
significant productivity improvements by improving their business processes. Software
development processes are one class of business process which have received particular
attention and, since the early 1980s, a significant body of research into software processes
and software process improvement has developed.
Very roughly, this research can be split into two main categories:
1 . Fundamental research concerned with process modelling, enactment and support
technology. This involves developing notations for representing processes and looking at
how some automated support for these processes can be provided. Good summaries of
research in this area are provided by [1] and [2].
2 . Research concerned with process improvement, that is, the introduction of process
changes to improve software productivity, quality, etc. This is related to general work on
business process re-engineering [3] although software process improvement programmes
tend to be evolutionary rather than revolutionary. The best known work in this area is that
of the Software Engineering Institute and their Capability Maturity Model [4, 5] but other
related work on maturity models for process improvement has also been carried out [6].
To a large extent, these different aspects of process R&D have been parallel research streams.
The fundamental process research has not really addressed the issues of how the research
results can be applied to facilitate change and improvement. The process improvement work
has taken a pragmatic approach to process description and is not dependent on structured or
formal process notations. Proponents of this approach suggest that automation is not central
to process improvements. Indeed, the SEI model discourages automation in the early stages
of improvement. Its developers maintain that process enactment support can only be cost-effective
once a disciplined and repeatable process has been put in place.
The work which we describe in this paper falls somewhere in between these areas of
research. It was developed as part of a pragmatic process improvement project which was
specifically aimed at discovering routes to requirements engineering process improvement
[8]. However, to facilitate process elicitation for analysis and improvement, we decided that
we needed a structured framework for managing process descriptions which could cope with
different types of process description and which, furthermore, reflected the organisational
goals for process improvement.
Our studies of requirements engineering processes (and, previously, other types of software
process [9]) revealed remarkable process inconsistency both within and across organisations.
Inconsistencies in processes do not necessarily mean that there are process problems. They
can arise for a number of good reasons:
1 . Many tasks require software engineers to exercise professional judgement and
experience. Different people may do the same thing in different ways, all of which may
be effective.
2 . Different processes with the same objectives may have developed in different parts of the
organisation as a consequence of local factors such as tool and equipment availability,
management style etc.
3 . Depending on their particular interests, different people may focus on different aspects of
a process. For example, a quality manager may be mostly concerned with process
activities which affect product quality whereas a project manager will be concerned with
the scheduling and resources used by activities. They may selectively ignore parts of the
process which are not of interest. This can result in inconsistent models of what the
process actually is.
4 . Processes vary depending on the type of system being developed and the participants in
the process. This is particularly true of processes, such as requirements engineering
processes, which are not solely technical but which involve significant interaction with
customers and others outside the software development team.
The primary goal of our work was to facilitate process understanding and improvement. We
needed a way of describing processes which would allow us to collect different perceptions
of these process and reconcile the differences between them. We wanted to understand
different process perspectives with a view to discovering the best practice and to identify
areas where improvements seemed to be viable. Process inconsistencies were particularly
important for two reasons:
. Areas of inconsistency may suggest process redundancy where activities are unnecessary.
Fully or partially removing this redundancy may lead to process improvements.
. Areas of inconsistency can highlight particularly good practice. Where different processes
for accomplishing some goal have evolved, the most effective of these processes may be
selected for widespread dissemination across the organisation.
Most research on process modelling notations [10] has not really taken potential process
inconsistency into account. Most of this assumes that a single model of the process can be
elicited and agreed by all process participants. However, our experience suggested that this
was very difficult to achieve for many software processes:
. The processes as seen by different process participants and stakeholders were often so
inconsistent that no definitive model could have been produced. Where process
descriptions from different participants could be integrated, the combined description was
complex and virtually impossible to understand or validate.
. There was no single notation which all process participants and stakeholders were
familiar and comfortable with. People wanted to describe 'their' process in their own way
and we felt that imposing some, perhaps formal, notation on this would have been
counter-productive. We believe that there is a lot to be said for Checkland's 'rich pictures'
[11] where processes are described using representations which are meaningful to the
people involved in the process.
Consequently, we decided not to look for a single process representation but to allow the
process to be represented in different ways reflecting different process perceptions. The
process representation framework had to cope with inconsistency and support process
improvement. Our previous experience of viewpoints for requirements engineering [12, 13]
suggested that viewpoints might be an appropriate framework for encapsulating partial
process knowledge. We decided to adapt the approach to requirements viewpoints which we
had developed [14] for process representation.
Other researchers have come to comparable conclusions and have experimented with a multi-perspective
approach to process modelling. The earliest work we are aware of was work
carried out by Rombach in the late 1980s in development of MVP-L [15], a process
modelling language which supported multiple views. This work has continued with a newer
version of the language, MVP-L 2, now available [16]. Verlage [17] has confirmed the need
for a multiple perspective approach to process modelling and presents a set of requirements
for this approach. Turgeon and Madhavji [18] have also developed a multi-view approach to
process elicitation with some automated consistency checking. This is based on their work on process
elicitation and the methodological and tool support which has been developed for this [19, 20].
Previous work on managing process inconsistencies across viewpoints has mostly derived
from comparable work in viewpoint-oriented software engineering. This relies on automatic
or semi-automatic analysis of formal process descriptions to discover inconsistencies across
these descriptions [21]. We have not followed this path because we found that practitioners
were not interested in learning formal process representation languages. Furthermore, the
complexity and subtlety of real processes made it almost impossible to produce formal
process descriptions which were understandable by the people involved in the process.
Rather, we have focused on developing a framework and an associated process which
facilitates elicitation and process analysis by people rather than computers. This framework
does not rely on any formal representation and analysis but rather is a way of structuring
process information to facilitate analysis and process improvement.
In the remainder of this paper, we present the process viewpoint model which we have
developed, discuss the process of applying viewpoints for process analysis and present a
small example of using the approach for the analysis of process descriptions in a safety-critical
system.
Process viewpoints
The PREview approach to requirements engineering which we have developed [14] was
explicitly designed for application in a range of industrial settings [22]. This viewpoint-oriented
approach allows for different types of viewpoint (end-user, stakeholder, domain) to be
accommodated within a single generic framework and provides a mechanism whereby
business goals and constraints drive the requirements engineering process.
Given the need for process improvement to be driven by business goals, we attempted
initially to use an almost identical viewpoint model for process analysis [23]. We found that
this was not entirely appropriate so we derived a simpler model of a process viewpoints with
5 components as follows:
process description, concerns}
We discuss these components in more detail below but briefly the name identifies the
viewpoint, the focus defines the particular viewpoint perspective, the sources document the
source of process information, the process description is a description of the process from
the perspective defined in the viewpoint focus and the concerns reflect business goals and
constraints.
Figure
1 shows the relationships between these components of a process viewpoint. The
shaded boxes are components of a viewpoint.
Figure 1: Process viewpoint relationships. Organisational concerns (goals/constraints) set
questions and help identify the viewpoint focus; questions are put to sources, whose answers lead
to and refine the process description; the focus limits the process description.
The business concerns help identify what the focus of a process viewpoint should be and this
restricts the process description. Questions which are derived from concerns are critical to the
process of eliciting process information from sources and discovering process
inconsistencies. Answers to these questions, which are put to sources, lead to the
formulation of a process description.
A very simple example of a process viewpoint is shown in Figure 2. This is a viewpoint on a
requirements review process. For simplicity, we have not actually included the process
description here.
  Name:                Quality management
  Focus:               The requirements review process and how overall system quality
                       may be influenced by that process.
  Rationale:           Product defects are being introduced as a result of requirements
                       errors. Product development schedules are longer than they
                       should be because of the need to detect and remove these errors.
  Sources:             Project managers, quality managers, company standards
  Concerns:            Time to market, product defects
  Process description: A description of the review process including inputs, outputs,
                       activities, process participants and commentary on the process
                       and its influence on system quality.

Figure 2: A process viewpoint
Now let us look at the different components of a process viewpoint in more detail and discuss
why these have been included.
Viewpoint name
The name of the process viewpoint identifies it and gives the reader an indication of the likely
perspective of the viewpoint. It may therefore be:
. The name of a role or department in an organisation such as 'Configuration management',
'Quality assurance', 'Customer', etc. This implies that the process description will focus
on the process activities, inputs and outputs which are most important to that department
or role.
. The name of a process characteristic which is of particular interest. This can either be a
functional characteristic such as 'Process activities' or 'Roles and actions', or can be a non-functional
process attribute such as 'Repeatability', 'Performance', etc. A particular type
of modelling notation such as a data-flow diagram or a Petri net may be particularly
appropriate for describing the process.
Viewpoint focus
A viewpoint's focus is a statement of the perspective adopted by that viewpoint. This should
normally include a statement of the parts of the overall software process (i.e. the sub-processes
with which the viewpoint is concerned). It may also include a statement of the
organisational functions which are of most concern in the analysis of a process, a statement
of the role of viewpoint sources or a statement of the particular type of model which will be
presented.
Examples of focus descriptions might therefore be:
"Configuration management activities in the requirements engineering process"
"A system architect's view of the requirements engineering process''
"An entity-relational model of the documents used in the requirements engineering
process"
We have found that explicitly defining the focus of a viewpoint is valuable for three reasons:
1 . It helps to identify sources of process information.
2 . It can be used in the development of organisational concerns (see below).
3 . It can be used to help discover overlapping viewpoints (where conflicts are most likely)
and gaps in the viewpoint coverage of the process.
The viewpoint focus may also have an associated rationale which is comparable to the notion
of Weltanschauung or 'world view' in Soft Systems Methodology [11, 24]. This
rationale presents assumptions on which the viewpoint is based and helps the reader
understand why the viewpoint has been included. Examples of rationale which could be
associated with the above focus descriptions are:
"Our current configuration management process is not integrated with our requirements
engineering process"
"System architects are normally consulted after the requirements have been defined and
this can cause serious design problems"
"We need a formal description of the process entities to support improved configuration
management"
Viewpoints need not be completely separate but may have overlapping foci. However, where
there is a significant overlap, we recommend that the different viewpoints should be
integrated into a single viewpoint.
Viewpoint sources
Viewpoint sources are an explicit record of where the information about the process has been
collected. The most important sources of process information are usually:
1 . The participants in the process
2. Management in the organisation where the process is being enacted.
3 . Organisational process charts, responsibility charts, quality manuals, etc.
The list of sources connected with a viewpoint is useful because it provides an explicit trace
to where the process information was derived. This allows the original sources to be
consulted for possible problems when process improvements and process changes are
proposed. Source information may be represented as names, associated roles and contact
information if the sources are people, document identifiers and page references, WWW
URLs, etc.
Process description
We do not mandate any particular notation for process description. Our experience showed
that most people preferred informal process descriptions made up of informal diagrams and
explanatory text. While these are more subject to misinterpretation than formal descriptions,
we believe that this is more than compensated for by their understandability and flexibility in
describing processes where exceptions are common. Of course, for some viewpoints which
are concerned with particular types of process model, such as an entity-relational model, an
appropriate formal or structured notation may be used.
Because of individual differences in process enactment, there may be alternative perceptions
of a process presented by different sources in the same viewpoint. This is particularly likely
where one of the sources is process documentation which defines the organisational
perception of a process (or what a process ought to be) and another source is a process
participant who can explain what really happens. If these differ very radically, they should
really be separate viewpoints but where the differences are in the detail of the enactment, they
can be accommodated within a single process description.
We could support this through a viewpoint inheritance mechanism. However, this leads to a
viewpoint explosion and it is difficult to manage the large number of viewpoints which are
then created. Rather, the differences can be accommodated by including a stable part and a
variable part in the process description:
1 . The stable part of a process description is the part of the description which is shared and
accepted by all of the sources contributing to the process viewpoint.
2. The variable part of the process description highlights those parts of the process which
exhibit variability and documents the different ways in which this variability occurs. In
many cases, the variability manifests itself in the exception handling - different people
cope with problems in different ways.
Describing processes using stable and variable parts is one way of tolerating inconsistency in
process descriptions. As we discuss in the following section which describes the process of
acquiring process descriptions, we try to reconcile inconsistencies as soon as they emerge
but, if this is impossible, we simply leave them in the description. The inconsistency analysis
which we also discuss later is then applied within the viewpoint as well as across process
viewpoints.
The process description may be a hierarchical description with the process described at
different levels of abstraction. At the top level, we recommend that the process description
should fit onto a single page so that it may be understood as a whole. All or some parts of the
process may then be described in more detail as necessary.
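Although we deliberately avoid mandating notations or tools, the viewpoint model described so far maps naturally onto a simple record type. The C++ sketch below is our own illustration of how an elicitation tool might store a viewpoint; all field names are assumptions rather than part of the method:

#include <string>
#include <vector>

// One possible record layout for a process viewpoint; field names are ours.
struct Source {
    std::string description;   // person/role, document and page, URL, ...
    std::string contact;       // how to consult the source again later
};

struct Concern {
    std::string statement;                  // goal or constraint (discussed next)
    std::vector<std::string> questions;     // the question list it decomposes into
    std::vector<Concern> subConcerns;       // the decomposition tree
};

struct ProcessViewpoint {
    std::string name;
    std::string focus;                      // perspective adopted by the viewpoint
    std::string rationale;                  // optional 'world view' behind the focus
    std::vector<Source> sources;
    std::string stableDescription;          // the part all sources agree on
    std::vector<std::string> variants;      // documented variability and exceptions
    std::vector<Concern> concerns;
};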
Concerns and concern decomposition
Process improvement should always be driven by the needs of the organisation enacting the
process. To allow for this, we associate concerns with viewpoints where concerns reflect
organisational goals, needs, priorities, constraints, etc. Concerns help to scope the analysis
of current processes and proposed process improvements.
There are different types of concern which may be associated with process viewpoints:
1 . Understanding concerns. These reflect the organisation's objectives for process
understanding. The organisation may wish to understand a process to discover its
relationships with other organisational processes, to define the process in a quality plan,
to analyse the process for improvements, etc. Where an organisation has immature
processes, understanding these processes is the first step towards process definition and,
ultimately, improvement.
2 . Improvement concerns. These reflect the objectives of the organisation as far as process
improvement is concerned. At a very abstract level, these may be reduced time to process
completion, reduced process costs, etc. However, as we discuss below, these have to be
decomposed into realistically achievable goals.
3. Constraint concerns. These are organisational constraints placed on the process or on the
process improvement activity. They may limit the analysis or possible process
improvements.
Concerns are not just another type of viewpoint. A viewpoint is an encapsulated partial
process description; a concern relates the process description to the business needs of the
organisation enacting the process. The concerns drive the process analysis so that proposed
process changes and improvements contribute to the real needs of the business.
In practice, concerns are decomposed into a set of questions which are put to process
sources. Therefore, if an understanding concern is process definition, then this may be
translated into an abstract question:
"What are our requirements engineering process activities"
If an improvement concern is process cost reduction, the most abstract question becomes:
"How can the costs of the process be reduced"
However, these questions are so general and abstract that they are not particularly useful
when analysing processes. Therefore, concerns are decomposed into sub-concerns and
ultimately, into a specific question list which may be put to viewpoint sources.
As an illustration of this, Figure 3 shows how the above improvement concern might be
decomposed. Notice that improvement concerns will almost always decompose to a mixture
of understanding sub-concerns (what are we doing now?) and improvement sub-concerns
(how can we do it better?). It is artificial to try to separate these as knowledge which can
contribute to improvement emerges naturally as understanding of the process is developed.
Figure 3: Decomposition of a cost reduction concern. The 'Reduce costs' concern decomposes
into sub-concerns (cost breakdown, activity resource usage, activity selection, cost rationale,
improvement costs, improvement savings) and further into types of resource and value for money,
from which the questions Q1-Q4 are derived.
s
Specific questions may be associated with nodes in this decomposition. In this example, we
may have:
Q1. What are the types of resource used in each activity?
Q2. How much of each resource is used in each activity?
Q3. Does the output from the activity justify the resource input?
Q4. Are there comparable activities which use disproportionate amounts of some resource?
The formulation of questions helps identify the level of detail which should be included in a
process description. The description must be constructed so that these questions can be
answered. Therefore, for the above set of questions, the process description should identify
the activities for which resource utilisation information is available. If this information has
not been collected, process improvements cannot be evaluated. It implies that the project
management process should be improved to collect more detailed process data.
As well as understanding and improvement concerns, organisations will also place
constraints on both the process improvement process and the possible improvement
proposals. These constraints may also be expressed as concerns and decomposed into
questions which must be addressed during the stages of process analysis and improvement
suggestion. Examples of concerns which are constraints might be:
1. Budget. The budget available to the process improvement team is $XXXX.
2. Existing tools and standards. Existing design notations such as SADT must be used to
describe the system requirements.
3. Training. Proposed improvements should require no more than Y days of additional
training time per team member (or alternatively Z days across the whole team).
These concerns are process requirements and process improvements which are proposed
should be validated against these global requirements.
We know from experience of goal-oriented process improvement methods such as the ami
method [25] that the decompositions of concerns (or goals) and the formulation of questions
is not a simple process. In some cases, it may be easier to tackle this bottom-up rather than
top-down. That is, a standard list of questions derived from other analyses may give clues to
possible concerns and how these concerns may be decomposed. Once formulated, concerns
and questions should therefore be made available to all subsequent process analyses.
Using process viewpoints in process analysis
The process viewpoint model which we have described is intended to help elicit and analyse
information about processes with a view to subsequent process improvement. To support the
use of the model, we have developed a process for process modelling and improvement
which is shown in outline in Figure 4.
In the notation which we use here and in later figures describing processes, process activities
are denoted in boxes. Dashed arrows linking boxes mean that there is a temporal relationship
between these activities. It is not possible for an activity at the destination of an arrow to be
completed until the activity at the source of the arrow has completed. However, the
destination activity may start before the source activity has completed and the activities may
be interleaved or may run concurrently. In Figure 4, the arrow linking the last box with the
first box means that the process is cyclic and can be re-entered after a set of improvements
have been proposed.
Figure 4: Process improvement with viewpoints. The cycle runs from organisational concern
definition (organisational goal and constraint definition and question formulation), through
viewpoint and source identification and viewpoint data collection and process description
(understanding and analysis), to improvement suggestion and analysis (improvement planning),
and back again.
The overall process has 4 main phases:
1. Concern definition. During this phase, the main business goals and constraints are
identified. The people involved are the process improvement team, senior managers in the
organisation and project managers. Concerns are decomposed into a set of questions as
discussed in Section 3.5. A problem at this stage is ensuring that the concerns are
realistic. Where organisations have immature processes, it is often only possible to set
goals which will bring these processes under control. Process improvement goals such as
reduced costs or elapsed time may not be possible.
During the definition of concerns, it is usually necessary to develop a simple process
description so that a basis for discussion is established. This may either come from
existing documentation or from any process stakeholder.
2. Viewpoint and source identification. Possible viewpoints and associated sources of
process information are identified. We discuss this stage in more detail below.
3. Data collection and process description. Information about the process is collected and
the process is documented. Again, we discuss this in more detail below.
4. Improvement suggestion and analysis. The processes as described in each viewpoint are
compared and overlaps, inconsistencies and conflicts are identified in a review involving
process participants and the process improvement team. Process inconsistencies and
redundancies are the focus for improvement and may point to potential process
modifications to select best practice or to reconcile these inconsistencies. All
improvements are analysed against the identified concerns to ensure that they are
consistent with business goals.
As the theme of this paper is process viewpoints and the support they provide for managing
inconsistency, we will ignore the first and last stages in the process shown in Figure 4 and
will concentrate on the middle two stages where viewpoints are applied.
Figure 5 is a more detailed description of these stages. In this figure, solid arrows between activities indicate data flow.
Figure 5 Viewpoint identification and process elicitation. Viewpoint and source identification comprises identifying viewpoints, reconciling viewpoints and identifying viewpoint sources; viewpoint data collection and process description comprises consulting sources, creating the process description, analysing inconsistencies and refining the process description. Concerns, questions, viewpoints, sources, process descriptions and an inconsistency report flow between these activities.
The whole process is iterative and we recommend that it should begin as soon as some
viewpoints have been identified. Once a viewpoint has been identified, some information
about the process can be collected and this may then be used to help with the identification of
further viewpoints.
Viewpoint and source identification
This stage of the process is concerned with identifying relevant viewpoints and the
information sources associated with these viewpoints. Viewpoints and their sources are
identified in an iterative way. In practice, these activities intermingle so that a complete list of
viewpoints and sources is not available until the end of the activity.
The inputs to this activity are concerns and associated questions. These questions may elicit process details, discover the rationale for process activities, or discover information about the timing, duration and inter-dependencies of activities and the support which is available for the process.
The sub-activities involved in this stage are:
. Identify viewpoints This activity is concerned with identifying the most appropriate
process perspectives which can contribute useful information about the process and
representing these as viewpoints. As a starting point, viewpoints covering organisational
standards, process participants, management and, where appropriate, customers should be
identified. There should be no restrictions on the numbers of viewpoints identified at this
stage.
. Reconcile viewpoints To make the analysis practical, it is important not to have too many
viewpoints. We have found that more than 5 viewpoints inevitably cause information
management problems. In this activity, the focus descriptions of the different viewpoints
are used to identify overlaps and areas where viewpoints may be merged. Where the
focus descriptions are too broad and encompass extensive processes (e.g. all of software
testing), we recommend that the scope of the viewpoint should be reduced and the focus
description should be rewritten to be less ambitious.
. Identify viewpoint sources Viewpoint sources are information sources which can adopt
the viewpoint focus. They may be people, documents, standards, domain knowledge,
etc. These are usually identified by consultation with managers and engineers involved in
the process.
The process of viewpoint identification may bring concerns to light which have not already
been considered. Therefore, it may be necessary to revisit the previous stage to refine these
concerns before moving on to elicit information about the process description.
Viewpoint data collection and process description
This stage of the process improvement process is concerned with understanding, analysing
and describing the current process which is used. The general approach which we
recommend is an incremental application of the steps described below for each identified
viewpoint, refining the questions and descriptions as process information from viewpoint
sources is elicited. That is, an initial set of questions to elicit process information is derived,
viewpoint sources are consulted and a process description is proposed. The analyst then
refines the questions and repeats this consultation and refinement process until all viewpoints
have been covered.
The stages in this process are:
. Consult process viewpoint sources The analyst puts the questions derived from concerns
to the viewpoint sources to discover process information. These questions may need to be
refined for the specific viewpoint (e.g. by changing the terms used) depending on the
background of the source. As well as the questions, of course, sources should be asked
to comment on their processes. We have found that the best way to elicit information is to
ask them to critique an existing process model which may be derived when concerns are
established. The process description focuses the elicitation as stakeholders can point out
where it is incomplete and differs from their actual process. Once a more detailed process
description has been elicited from one viewpoint, it may then serve as a basis for
discussion about the process in other viewpoints.
. Create process descriptions Create and document a process description, taking into
account the differences as seen by different viewpoint sources. Any notation may be used
here but we have found that simple block diagrams supplemented by tables and natural
language text are readily understood.
. Analyse inconsistencies This activity is concerned with analysing the process
descriptions to discover redundancy and inconsistencies. We discuss this in more detail
in Section 4.3.
. Refine process descriptions The results of the inconsistency analysis are fed back to the
process sources and, where appropriate, the process descriptions are modified. For
example, where different people use different names for the same process, a single term
may be agreed. Where inconsistencies cannot be reconciled, they are documented in an
inconsistency report which is an input to the next phase of the process concerned with
process improvements.
This activity is also likely to reveal problems with the identified concerns and questions.
Some iteration may be required to redefine the concerns and the associated questions.
Inconsistency analysis
The process of inconsistency analysis is intended to discover and classify inconsistencies in
processes. The processes are reviewed by a team including process participants and
members of the organisational process improvement group and inconsistencies are
highlighted. However, no decisions about process modifications are made at this stage. This
happens in a subsequent process where the concerns are used to decide which process
changes are the most effective way of contributing to the business goals.
This has been shown as a separate process stage in Figure 5 but, in fact, much of the work
actually takes place during the elicitation of process information. Process elicitation for the
different viewpoints is a sequential process. Once a process description is available from one
viewpoint, it may be used as an input to the next elicitation activity. During that activity,
process stakeholders identify inconsistencies by pointing out how their view of the process
differs from the view which is presented to them. In some cases, where inconsistencies are a
result of misunderstandings (for example, where different terminology is used) it may be
possible for the people involved to see immediately how to resolve the problem and the
process descriptions are changed during elicitation to remove the inconsistency. In other
cases, however, the inconsistency reflects a genuine difference and it is documented for
subsequent analysis.
When process descriptions have been documented, the inconsistency analysis process can
get underway. There are two fundamental stages in this process:
1 . Pre-review analysis This is undertaken by a member of the process improvement team.
He or she looks at the process descriptions to identify areas of similarity and difference.
Inconsistencies which were identified during elicitation are an input to this. Terminology
is always a problem and, ideally, a process glossary should be created. In practice,
however, there may not be time to do this. The result of this activity is an agenda for the
process review which lists the inconsistencies and the process fragments which must be
discussed.
2 . Process review meeting This is a review meeting which is comparable to a program
inspection meeting where a process or process fragment is presented to the meeting and
discussed by the meeting participants. Several different views of the same process may be
presented if necessary. Each of the inconsistencies is considered in turn and classified as
discussed below. The outcome of the meeting is an inconsistency report which is passed
on to the next stage of the process improvement process.
Inconsistencies are classified as shown in Figure 6.
Inconsistency type Explanation
Irrelevant There is an inconsistency in processes as seen from different viewpoints but this has no practical effect on process efficiency. An example of this might be where different engineers use judgement to decide how to carry out a particular process and it doesn't matter which approach is used.
Necessary The inconsistency in processes must be maintained because it has arisen as a consequence of some external factors which are outside the influence of the process improvement team. For example, teams in the same organisation working in different countries may not be able to resolve some process inconsistencies because they arise as a consequence of local laws.
Reconcilable These are process inconsistencies which appear to be reconcilable and where some standardised process can be developed. These often reflect process redundancy or omissions. Removal of redundancy may lead to process improvement. An example of this type of inconsistency is where the same process specified in different viewpoints seems to require different inputs. If the process operates efficiently, it may be possible to reduce the number of inputs to a minimum.
Improvable These are inconsistencies where different sub-processes are used and where some of these sub-processes are clearly better than others. Process improvement may be possible by identifying the best practice and adopting this across all viewpoints.
Figure 6 Classes of process inconsistency.
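One simple way to make this classification operational in an inconsistency report is sketched below (our illustration, not part of the method):
from enum import Enum

class InconsistencyClass(Enum):
    # The four classes of process inconsistency from Figure 6.
    IRRELEVANT = "no practical effect on process efficiency"
    NECESSARY = "arises from external factors outside the team's influence"
    RECONCILABLE = "a standardised process can be developed"
    IMPROVABLE = "best practice can be identified and adopted"

# Each inconsistency report entry pairs a description with its class.
report = [("Safety plan review missing from the project plan",
           InconsistencyClass.RECONCILABLE)]
for description, cls in report:
    print(f"{description}: {cls.name.title()} ({cls.value})")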
A simple case study
In this section, we present part of a process analysis case study based on a real industrial
process which illustrates the use of process viewpoints. In this case, our objective was to
discover process inconsistencies and report them to the organisation who 'owned' the
process. It was then up to them to consider what improvements might be possible.
The process concerned was the development process for a safety-related expert system which
was being developed for a large client. The system involved the development of a 'safe'
expert system shell and the instantiation of this shell with specific domain data. The system
was to be formally specified and the implementation validated by correctness arguments (not
a complete proof) against the specification. The organisation developing the system is a
specialist in critical computing systems, is technically mature and is strongly committed to
quality and quality improvement.
Concerns and questions
The first stage in the process analysis is to identify the organisational concerns which
contribute to the process analysis. As this is a safety-related system, the most important of
these concerns was safety. A critical business goal of the organisation was to ensure that the
systems it developed were safe (and were known to be safe) and processes always had to
take this into account. A further business concern in this particular case was customer
relationships. While this should, perhaps, always be a concern, the customer in this case was
a technically mature, large organisation and a likely source of future contracts. It was
particularly important to ensure that good relationships were maintained throughout the
project. The final concern identified was skill utilisation. The organisation developing the
system is very highly skilled in a number of areas and processes had to be designed to utilise
these skills.
To illustrate question derivation, let us consider the customer relationships concern. Some
relevant process questions associated with this concern are:
1 . What process and product documentation does the customer require?
2 . What are the most important concerns (schedule, cost, standards compliance, quality,
etc.) of the customer?
3 . Will the customer participate in project management meetings?
4 . Will a customer engineer be involved in reviews? If so, when will they need documents
to be delivered?
5 . Does the customer mandate the use of specific standards, tools or techniques?
Notice that these questions are not concerned with broad aspects of the process such as
activities, inputs and outputs. Typically, questions derived from concerns ferret out
significant process details which could easily be missed by analysts. By applying these
questions during process elicitation, we can discover if any aspects of the process are
inconsistent with the business goal of customer satisfaction.
Viewpoint identification
The next stage of the process of analysis is to identify relevant viewpoints. The process we
recommend involves identifying as many viewpoints as possible then reconciling them to
fewer than five viewpoints for analysis. In this case, we will skip the first stage and simply
discuss the final viewpoints used in the analysis. Three viewpoints were identified as
significant in this case:
. A project management viewpoint
. A quality management viewpoint
. An engineering viewpoint
The scope of each of these viewpoints is defined by describing their focus. All viewpoints are
concerned with the whole of the development process for the system so this need not be
explicitly set out in the viewpoint focus descriptions:
1 . Project management The project management viewpoint is concerned with the process as
defined by those activities which are identified in the project plan and which are assigned
specific resources and a schedule for completion.
2 . Quality management The quality management viewpoint is concerned with those aspects
of the process where the customer requires explicit evidence of validation activities and
the conformance of process deliverables to requirements and standards.
3 . Engineering The engineering viewpoint is concerned with the process which is actually
enacted by the engineers involved in the system development.
In conjunction with the process of viewpoint identification, sources of information associated
with the viewpoint should also be identified. The focus description may be used as a starting
point for identifying information sources. In this case, once one source was identified, he
helped in the identification of other possible sources.
For brevity, we will focus in the remainder of this discussion on the first two viewpoints
above, namely quality management and project management. Possible sources of information
associated with these viewpoints are:
. project management - project plan, project manager, customer project manager, software
developers
. quality management - project quality plan, organisational quality manual, quality
manager, customer project manager
Of course, it is not always possible to get access to all sources. In this particular case, we had
close links with the development organisation but no access to their customer. We were
therefore unable to get any information from customer-based sources.
Process description
Once the sources have been identified, they are consulted and a process description is
developed. The principal source for the project management and quality management
viewpoint was the very comprehensive process documentation which had been produced to
satisfy the quality requirements of the customer.
From the project management viewpoint, resources and schedules had been drawn up for 21
activities; from the quality management viewpoint, 36 explicit validation activities were
identified. In each of these cases, there were logical groupings of activities so it was relatively
straightforward to produce a more abstract model of the process.
This high-level project management model is shown in Figure 7. Again, the dotted arrows
mean temporal sequence where a destination activity may start but may not finish before the
source activity has finished. Where activities are vertically aligned, this means that they may
(but need not) be carried out in parallel.
Figure 7 Project management model. The development activities are: technology assessment, UI development, shell specification, shell implementation, system specification, system implementation, documentation and acceptance testing.
From the quality management perspective, the high-level model almost matched this project management model. There were validation activities for each of the development activities in Figure 7. This model is illustrated in Figure 8.
Figure 8 Quality management model. The validation activities are: technology assessment review, UI validation, shell specification validation, shell implementation validation, system specification validation, system implementation validation, documentation review and safety plan review.
As you would expect from an organisation developing safety-related systems, at this level
there are no significant process inconsistencies. However, we can see on the left of Figure 8
that there is a required review of the project safety plan with no corresponding activity in the
project management model. This is almost certainly an accidental omission and consistency
can be achieved by adding an additional activity to the project plan.
Further inconsistency analysis requires a more detailed look at the process. Let us look at the
processes, inputs and outputs for the shell specification activity shown in Figure 7. This
more detailed model is shown in Figure 9. Solid arrows linking boxes indicate data-flows.
Note that the System SRS is a software requirements statement, written in English, which
was produced by the customer for the system.
Figure 9 Project management model of shell specification. The activities are: formal specification of shell, consistency validation of shell spec., validation of spec. against SRS, animation of formal spec., and testing using the animated formal spec. Data flows include the System SRS, an existing prototype, the formal spec. (versions v1 to v4), the shell prototype, a test plan and a test coverage report.
The comparable quality management model which identifies validation activities, inputs and
outputs is shown in Figure 10. In this process, each validation activity produces a report and
has a set of associated success criteria. These are not really inputs to the next stage (the inputs
to validation are outputs from development activities) so are not shown as data-flows on this
diagram.
Figure 10 Quality management view of shell specification validation. The validation activities are: static analysis of formal spec., internal consistency validation, validation of formal spec. against SRS, review of shell test plan, review of revised shell test plan, and validation of animated shell by testing. Inputs and outputs include the formal spec. and its revisions, the System SRS, the shell prototype, and the test plan and its revisions.
Inconsistency analysis
In analysing these more detailed models for inconsistency, we put the models side-by-side
and ask a number of questions; a sketch of how such checks might be automated follows the list:
1 . What is the correspondence between tasks for which resources have been allocated and
identified validation activities? Are there any mismatches, i.e. development activities with
no corresponding validation activity or validation activities with no allocated resources?
2 . Are all of the required inputs for validation specified as outputs in the project management
viewpoint?
3 . Is the information provided complete, i.e. is it clear which results from the development
activities are inputs to the validation activity?
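A minimal sketch of such a cross-check (ours, not from the paper; the activity and artifact names follow Figures 9 and 10 loosely):
def find_mismatches(pm_model, qm_model):
    # pm_model maps each development activity to the set of outputs it
    # produces (project management viewpoint); qm_model maps each
    # validation activity to the set of inputs it requires (quality
    # management viewpoint).
    produced = set().union(*pm_model.values())
    required = set().union(*qm_model.values())
    # Validation inputs which no development activity produces.
    missing_inputs = {act: inputs - produced
                      for act, inputs in qm_model.items() if inputs - produced}
    # Development activities none of whose outputs are ever validated.
    unvalidated = [act for act, outputs in pm_model.items()
                   if not outputs & required]
    return missing_inputs, unvalidated

pm_model = {"Formal specification of shell": {"Formal spec. (v1)"},
            "Animation of formal spec.": {"Shell prototype"}}
qm_model = {"Static analysis of formal spec.": {"Formal spec. (v1)"},
            "Review of shell test plan": {"Test plan"}}
missing, unvalidated = find_mismatches(pm_model, qm_model)
print(missing)       # {'Review of shell test plan': {'Test plan'}}
print(unvalidated)   # ['Animation of formal spec.']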
When we examined the process descriptions in the project management and quality
management viewpoints and put these questions to the models, we found inconsistencies in
the process. We could not, in fact, classify these according to the inconsistency classification
because our remit was process analysis and not process improvement in this case. All we
could do was to highlight them in an inconsistency report. It is up to the organisation which
'owns' the process to decide if they are significant and what, if anything, to do about them.
When we put the process fragments shown in Figures 9 and 10 together and examine them,
we can see that three validation activities have resources allocated to them and (somehow)
these must map onto the six activities identified in the quality plan. When we looked at the
processes in detail using the above questions, we found three possible inconsistencies:
1 . The quality management view requires a static analysis of the formal specification
(Activity 1) which is not part of the project management view. This could be
accommodated in either activity 1 or activity 2 in the project management model or may be
an omission.
It is not clear whether or not it is essential that the specification should 'pass' the static
analysis with no significant errors or how the results of the static analysis are used in the
internal consistency check of the specification. Is the static analysis report delivered to the
customer or not? Resources for this static analysis do not appear to be explicitly allocated
in the project plan.
2 . There is no activity in the project management model where test planning is explicit and
the test plan is not specified as an output of any activity. It is, however, an input to
validation activities 4 and 5 which review the plan. As there are two test plan reviews, it
is clear that, from a quality management viewpoint, significant effort should be devoted to
test planning. Where will the resources for this come from?
3 . The quality management viewpoint identifies inputs as 'Revised formal spec.' but it is not
clear which version of these inputs should be used. That is, the project management
model assumes a number of revisions of this specification (identified in the process as
V1, V2, V3 and V4) but the quality management viewpoint does not explicitly state which
of these are inputs to which validation activity.
The process of process analysis is continued for the other process fragments associated with
the activities identified in Figures 7 and 8. We will not show this here as it simply involves
applying the same process as we have described for the shell specification.
Overall, we identified a total of 14 inconsistencies between the processes as seen from the
project management and quality management perspective. The majority of these seemed to
be omissions and should be fairly easy to resolve. As we were not looking at different teams
doing the same thing, we obviously did not find any examples of 'best practice' which could
be disseminated across the organisation.
Conclusions
This paper has described an approach to process analysis and improvement based on
viewpoints where each viewpoint manages partial process knowledge. It allows inconsistent
models of processes to be managed and provides a framework for analysing the
inconsistencies with a view to subsequent process improvement.
The approach was developed as part of an industry-university collaboration and was carried
out in conjunction with other work concerned with human and social influences on process
reliability [26]. All the work has been strongly influenced by practical industrial
requirements. We do not require the use of specialised notations for process description and
a novel feature of the approach is the use of concerns to relate process improvement and
analysis to overall business goals. The process viewpoints approach has been applied to the
case study described here and to some aspects of the requirements engineering process used
by another of our industrial collaborators in the project. These revealed process
inconsistencies and highlighted distinctions between the assumptions made by different
process participants.
The case study described here demonstrated that it was realistic to apply our approach to real
industrial processes although we must admit that this was a relatively small project with a
limited number of stakeholders. We learned that it was important to use only a small number
of viewpoints so that the elicitation process did not become too expensive and that concerns
were a very valuable way of ensuring that process information relevant to business goals was
not ignored. We do not yet know how the approach will scale up to the processes used in
large software engineering projects.
Our current work with the process viewpoints approach is not, in fact, concerned with
software processes but with more general business processes in the financial sector. We are
investigating how this approach to process analysis can be used in conjunction with
structured requirements engineering techniques to help understand an organisation's
requirements for business process support.
Acknowledgements
The work described in this paper was partially supported by the European Commission's
ESPRIT programme under project REAIMS (8649). Particular thanks are due to Robin
Bloomfield at Adelard for case study information.
References
Software Process Modelling and Technology.
Trends in Software Process.
"Characterizing the Software Process"
"Capability Maturity Model, Version 1.1"
"Bootstrap: Fine Tuning Process Assessment"
"An Overview of SPICE's Model for Process Management"
A Good Practice Guide.
"Human, Social and Organisational Influences on the Software Process"
"Process Modelling Languages: One or Many"
Systems Thinking
"A Framework for Expressing the Relationships between Multiple Views in Requirements Specifications"
"Requirements engineering with viewpoints"
"Viewpoints: principles, problems and a practical approach to requirements engineering"
"Multi-View Modeling of Software Processes"
"A Systematic View-based Approach to Eliciting Process Models"
"Elicit: A Method for Eliciting Process Models"
"Emerging technologies that support a software process life cycle"
"Using ViewPoints for inconsistency management"
"Viewpoints for requirements elicitation: a practical approach"
"Process Viewpoints"
Systems Methodology in Action.
A Quantitative Approach to Software Management.
"PERE: Evaluation and Improvement of Dependable Processes"
| viewpoints;software process;process improvement
330354 | Managing Requirements Inconsistency with Development Goal Monitors. | Managing the development of software requirements can be a complex and difficult task. The environment is often chaotic. As analysts and customers leave the project, they are replaced by others who drive development in new directions. As a result, inconsistencies arise. Newer requirements introduce inconsistencies with older requirements. The introduction of such requirements inconsistencies may violate stated goals of development. In this article, techniques are presented that manage requirements document inconsistency by managing inconsistencies that arise between requirement development goals and requirements development enactment. A specialized development model, called a requirements dialog meta-model, is presented. This meta-model defines a conceptual framework for dialog goal definition, monitoring, and in the case of goal failure, dialog goal reestablishment. The requirements dialog meta-model is supported in an automated multiuser World Wide Web environment, called DealScribe. An exploratory case study of its use is reported. This research supports the conclusions that: 1) an automated tool that supports the dialog meta-model can automate the monitoring and reestablishment of formal development goals, 2) development goal monitoring can be used to determine statements of a development dialog that fail to satisfy development goals, and 3) development goal monitoring can be used to manage inconsistencies in a developing requirements document. The application of DealScribe demonstrates that a dialog meta-model can enable a powerful environment for managing development and document inconsistencies. | INTRODUCTION
Requirements engineering can be characterized as an iterative process of discovery and analysis
designed to produce an agreed-upon set of clear, complete, and consistent system requirements.
The process is complex and difficult to manage, involving the surfacing of stakeholder views,
developing shared understanding, and building consensus. A key challenge facing the analyst is
the management and analysis of a dynamic set of requirements as it evolves throughout this process. Although a variety of techniques have been developed to support aspects of this process, support
of requirements monitoring has been lacking. In this article, we describe our requirements
dialog meta-model, and DEALSCRIBE tool, which were developed to address this need. A key feature
of DEALSCRIBE is its ability to monitor the state of requirements development and alert analysts
when policy violations arise during the development process.
A. Managing Requirements Dialog
Stakeholder dialog is a pillar of the requirements development process. Techniques have been
developed to facilitate dialog (e.g., JAD, prototyping, serial interviews) and to document and track
requirements as they evolve (e.g., CASE). A requirements dialog can be viewed as a series of conversations
among analysts, customers, and other stakeholders to develop a shared understanding
and agreement on the system requirements. Typically the analyst converses with the customers
about their needs; in turn, the analyst may raise questions about the requirements, which lead to
further conversations. Within the development team, analysts will also converse among themselves
about questions that arose during their analysis of the requirements-sometimes the result of
sophisticated analysis techniques; other times, the result of simply reading two different paragraphs
in the same requirements document.
Like many dialogs, requirements development can be difficult to manage. Empirical studies have
documented the difficulties and communication breakdowns that are frequently experienced by
project teams during requirements determination as group members acquire, share, and integrate
project-relevant knowledge[21][55]. Requirements or their analyses are forgotten. Different
requirements concerning the same objects arise at different times. Inconsistency, ambiguity, and
incompleteness are often the result.
The research described in this article focuses on a specific, critical issue in tracking and managing
the requirements dialog: the monitoring of requirements development goals of consistency. By
using a requirements dialog meta-model, as we describe, analysts can benefit from development
goal failure alerts, which facilitate the development of a requirements document free from conflict.
In this article, we describe our requirements dialog meta-model (§II). Automated support of the dialog meta-model is presented in section III. Next, instantiations of operations which support the analysis are presented (§IV). The final sections present our case study summaries (§V), observations (§VI), and conclusions (§VII). However, before we begin, the remainder of this introduction
motivates this research and places it in context.
B. Inconsistency: A Driver of Requirements Dialog
Inconsistency is a central driver of the requirements dialog. By understanding and monitoring
inconsistency, one can support the management of requirements inconsistency during develop-
ment. Two basic drivers of requirements inconsistency are: 1) technical and 2) social-political.
Technical drivers of requirements inconsistency generally arise from errors of description; they
include:
. Voluminous requirements The sheer size of a requirements document set can lead to inconsistency, such as varied use of terminology. This is especially true as the requirements are modified: one change request can lead to a cascade of other change requests until the requirements
reach a more consistent state.
. Complex requirements The complexity of the domain or software specification can make it
difficult to understand exactly what has been specified or how components interact.
. Changing requirements As the requirements document is developed, new requirements are
added, older ones are updated. As a result, the document is typically in a transitory state where
many semantic conflicts exist, most of which are expected to be resolved simply by bringing them
to the current state as (implicitly) understood by the analyst.
Social-political drivers of requirements inconsistency arise from differences in goals held by
various system stakeholders; they include:
. Changing and unidentified stakeholders Analysts report that the initial stakeholder set,
defined at project inception, changes as the project progresses. For example, analysts felt that
they could understand the system requirements when interacting with actual users; however,
such access was often difficult to come by[24]. Moreover, one department of an organization
may claim to be "the" customer; however, when it comes to the final purchase decision, it may
be another department[24]. Such organizational interactions can lead to drastic changes in the
requirements.
. Changing requirements In addition to the technical problem of tracking changed requirements, there is the social problem of informing stakeholders of the consequences of changes, as
well as managing stakeholders requests and their expectations of change.
. Changing analysts Over the life of the project, the composition of team members will change.
Consequently, requirements concepts and their expressions will vary over time.
Such drivers are similar to the factors found to be the cause of failures in many information systems[25]. That study concluded that many information system failures could be attributed to poor analysis of important stakeholders. This has been supported in other MIS research[27][42]. Additionally, industry and market-oriented analysts have recognized a need to address multiple, often
hidden stakeholders, and their interacting requirements[24].
C. A Need to Support Analysts in Inconsistency Management
Requirements analysts need tools to assist them in reasoning about requirements. To some
degree, Computer Aided Software Engineering tools have been successful in providing support for
modeling and code generation[5][23][36]; however, they have been less successful in supporting
requirements analysis[23]. In fact, the downstream life-cycle successes of these tools may be one
of the reasons that systems analysts spend a greater percentage of their time on requirements analysis
than ever before[17]. Thus, analysts will benefit from techniques and tools which directly address
requirements analysis.
A significant part of requirements analysis concerns the identification and resolution of requirements
faults. Faults include: incorrect facts, omissions, inconsistencies, and ambiguities [33].
Many current research projects are aimed at identifying such faults from requirements. These
include: model checkers, terminological consistency checkers, and knowledge-based scenario checkers; additionally, more generic tools, such as simulation and visualization, are available to requirements analysts. For the most part, these tools are cousins of similar tools applied to programming languages which check for syntactic errors or perform checks of program inputs and path execution. However, requirements faults have not been related back to the original stakeholders, nor has
there been much support for resolving such faults. Yet, there is still a belief that conflict identification
and resolution are key in systems development[25][42].
Empirical studies of software development projects have identified a need for issue tracking
tools[8][56]. Typical problems include: 1) unresolved issues that do not become obvious until integration
testing, and 2) a tendency for specific conflicts to remain unresolved for a period of time.
Inadequate tools for tracking issue status (e.g., conflicting, resolved) were identified as a great concern
to practicing system engineers.
D. Research Addressing Requirements Management
There is a growing literature on requirements inconsistency management. Fickas and Feather
proposed requirements monitoring to track the achievement of requirements during system execution
as part of an architecture to allow the dynamic reconfiguration of component software[14].
Feather has produced a working system, called FLEA, that allows one to monitor interesting
events defined in a requirements monitoring language[12]. Finkelstein has since illustrated how
the technique may be used to monitor process compliance[11]; for example, organizational compliance
to ISO 9000 or IEEE process descriptions[28]. Our work on dialog monitoring is derived
from these works, but also includes an element of dialog structuring.
Two projects explicitly address requirements dialog structures. First, Chen and Nunamaker have
proposed a collaborative CASE environment, tailoring GroupSystems decision room software, to
requirements development[7]. Using C-CASE, one can track and develop requirements
consensus. Second, Potts et al. have defined the Inquiry Cycle Model of development to instill
some order into requirements dialogs[40]. Requirements are developed in response to discussions
consisting of questions, answers, and assumptions. By tracking these types of dialog elements
(and their refinements), dialog is maintained, but inconsistency, ambiguity, and incompleteness are
kept in check through specific development operations and requirements analysis (e.g., scenario
analysis).
Workflow and process modeling provide some solutions for the management of requirements
development[50]. It is possible, for example, to generate a work environment from a hierarchical
multi-agent process specification[30]. There has been some attempt to incorporate such process
models into CASE tools[29]. However, these tools generally aid process enactment through constraint enforcement. Yet, as Leo Osterweil notes:
Experience in studying actual processes, and in attempting to define them, has convinced us that
much of the sequencing of tasks in processes consists of reactions to contingencies, both foreseen
and unexpected.[38]
In support of a reactionary approach, the dialog meta-model eschews process enforcement and
supports the expression and monitoring of process goals.
There are a variety of other projects that indirectly address the management of requirements
inconsistency. These include: 1) an ontological approach, in which conflict surfacing is assisted by
providing a set of meaningful terms, or ontology, by which one can specify conflict relationships
between requirements[6][41][58]; 2) a methodological approach, in which the application of a
system development method surfaces conflicts-for example, CORE[31], ETHICS[32], Soft Systems
Method[4], ViewPoints[37], and CORA[43]; and 3) a technological approach in which a specific
technique, or automation, which can be used to surface requirements conflicts-for example,
conflict detection through a collaborative messaging environment[3][18][22], structure-based conflict
detection[51], scenario-based conflict surfacing[2][26][40], and conflict classification[10][20].
The dialog meta-model, by virtue of being a meta-model, is neutral to the above approaches. 1 To
use the dialog model, a methodology, conflict ontology, and automated techniques can be instantiated
as elements of the dialog model. For example, the Inquiry Cycle Model is defined by instantiating
the information subtypes of Requirement, Question, Answer, Reason, Decision, and
ChangeRequest, as specified in the Inquiry Cycle Model[40]. The dialog meta-model provides the
framework by which to instantiate such elements; its implementation in DEALSCRIBE, provides
some automation for the definition, execution, monitoring of the dialog,
E. Requirements Development Needs of a Dialog Meta-Model
The design of the requirements dialog meta-model, and its implementation in DEALSCRIBE, were
driven by the following requirements development needs:
. The need to represent multiple stakeholders requirements, even if initially conflicting.
. The need to identify and understand requirements interactions.
. The need to track and report on development issues.
. The need to support the dynamic, dialog-driven requirements development.
. The need to develop shared understanding and consensus through requirements analysis and
negotiation.
We will show how each of these needs can be supported by the requirements dialog meta-model,
and its implementation in DEALSCRIBE.
II. A DIALOG META-MODEL
To support experiments with the automated assistance of dialogs, we have defined a dialog meta-model
(DMM) as depicted in figure 1. There are three basic components of the meta-model:
. Statement Model
Statements are added to the dialog by the people, or agents, involved in the dialog.
1 We choose the term dialog meta-model rather than the common term process model, due to our more specific modeling of dialog processes and our use of the meta-model to define other models.
Fig 1. An illustration of the dialog meta-model: a Dialog Statement Model and a Dialog Goal Model arranged along a time axis.
In the dialog
statement model, there are two subtypes of the statement hierarchy:
. Information. A passive statement which adds new information to the dialog by referencing, or copying, some external information source.
. Operation. An active statement which adds new information derived through some
computation based on some, or all of, the prior dialog statements.
. Statement History
The dialog statement history is simply the recorded set of statements which are part of a particular
dialog at some point in time. When statements are created, they are said to be asserted into
the statement history.
. Goal Model
The dialog goal model is a declarative prescription of the "dialog rules", indicating such things
as the relative order of statements, as well as their content. Examples of dialog goal models
include: Roberts Rules of Order, and the software development life-cycle. Enforcement of the
dialog goal model may be carried out through statement pre-conditions which restrict the addition
of statements to the dialog. Conversely, statements may be unrestricted, but operations can
analyze the statement history to determine the degree of compliance to a dialog goal model. In
either case, an information or operation statement (sub)type is said to support a specific dialog
goal if: (a) its pre-condition maintains the goal, or (b) its operation (partially) determines the
state of the goal.
The dialog meta-model regards a dialog as a stream of statements which fall into passive information
and active operations and have some correspondence to the dialog goal model. This kind of
meta-model has proven to be quite useful. For example, the meta-model can be refined to define a
typical process model, with a distinction of process and product. First, consider each information
statement to be a product. Second, consider basic operation statements to be actions within a process
model. Third, consider the dialog goal model as the explicit definition of a process model. In
fact, the dialog meta-model is a process model with an explicit representation of the process goals
and enactment history. As such, we find the dialog meta-model to be suitable for modeling
requirements development.
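As an illustration only (ours, with assumed names), the three components can be rendered directly as data structures:
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Statement:                   # root of the statement hierarchy
    author: str
    title: str

@dataclass
class Information(Statement):      # passive: copies or references information
    content: str = ""

@dataclass
class Operation(Statement):        # active: derives information from the dialog
    def run(self, history: List[Statement]) -> List[Statement]:
        return []                  # a concrete operation would compute results

# The statement history is the recorded stream of asserted statements;
# a dialog goal is a condition over that history, and the goal model is
# simply a set of such conditions.
history: List[Statement] = []
DialogGoal = Callable[[List[Statement]], bool]
goal_model: List[DialogGoal] = []

def assert_statement(s: Statement) -> None:
    history.append(s)              # statements are asserted into the history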
In this article, we use an adaptation of the CORA meta-model as it is supported within the dialog
meta-model. The aim of the Conflict-Oriented Requirements Analysis (CORA) meta-model is to
provide representations and operations useful in analyzing and resolving requirements inconsistencies[43]. The basic entities include: Requirement, Interaction, and Transformation. Using these
entities, and their subtypes, one can represent requirements inconsistencies (as interactions) and
resolve them through the application of transformations.
To address the management of requirements inconsistencies, we have adapted CORA's original
model to include entities useful in a dialog style of development. These include basic entity refinements, such as the new subtypes of Requirement: InformalRequirement and StructuredRequirement. Additionally, we have added information "mark up" subtypes, including: Note, Question, and Request. These new statement types aid analysts in their dialog about the requirements, as well as their development of the requirements. Finally, we have added new operation statements which provide feedback on the current state of requirements; section IV presents these operations. The application and monitoring of these operations provide a key capability for managing requirements inconsistency.
III. TOOL SUPPORT FOR THE DIALOG META-MODEL
We have developed a tool, called DEALSCRIBE, which supports digital interactions which can be
characterized using the dialog meta-model. 2 DEALSCRIBE was created by building upon two existing
tools: HyperNews and ConceptBase.
. HyperNews provides a discussion system similar to Usenet News, but it has a World Wide Web
interface. In each forum a user can post typed text messages. A message may be posted to the
forum, or in response to a particular message. A WWW view of the forum can provide an overview of the discussion, where messages are laid out in a tree format that shows replies to a message indented under it (see figure 3). HyperNews provides: various views of a forum, user
notification of new responses, an email interface, security, and administrative functions.
. ConceptBase is a deductive database which provides concurrent multi-user access to O-Telos
objects[19]. All classes, meta classes, instances, attributes, rules, constraints, and queries are
uniformly represented as objects. ConceptBase itself operates as a server, while clients, such as ConceptBase's graphical browser, communicate via internet protocols. ConceptBase has been shown to be a powerful tool for systems development, partly because of its ability to simultaneously represent and query instances, classes, and meta-classes[16][34].
2 DEALSCRIBE is a member of the DEALMAKER suite of tools aimed at assisting collaboration through negotiation in requirements analysis[47] and electronic commerce[45].
Fig 2. A portion of a DEALSCRIBE WWW page is shown on the left. Each named radio button indicates a statement type. On the right, a portion of the ConceptBase database shows the corresponding statement types (as viewed from ConceptBase's graphical browser). Due to space limitations, the OperationType hierarchy was not expanded in the ConceptBase pane.
In building DEALSCRIBE, we used ConceptBase to define the dialog meta-model and refinements,
such as our adaptation of the CORA meta-model. ConceptBase also stores the dialog history as
instances of a DMM. The actual interface to the dialog history is managed by an adaptation of
HyperNews. It generates statement input and output forms from the definitions of the DMM stored
in ConceptBase. Thus, DEALSCRIBE statements can be simple text (as in HyperNews), input forms
of typed attributes, or even the result of an operation (e.g., program, or ConceptBase query).
Figure 2 shows screen portions of DEALSCRIBE and ConceptBase's graphical browser.
DEALSCRIBE "Add Message" button types are defined from the corresponding ConceptBase definitions. As statements are added to a particular dialog, they are asserted into the ConceptBase as
instances of the statement types shown in the figure.
A. Defining a Dialog Goal Model
To define the "rules of the dialog", an analyst specifies a set of logical conditions, called dialog
goals, about dialog statements. A dialog goal defines desired properties of statements, or their
interconnections, possibly over time. For example, consider the goal of having all requirements
have a defined user priority. (This could be used to support standard PSS05, which specifies that
under incremental development, all requirements will have a user defined priority[28].) The following
ConceptBase definition specifies the HasUserPriority goal.
QueryClass HasUserPriority in DialogModel isA Requirement, DialogGoal with
constraint
c : $ exists p/Priority (this userPriority p) $
end
The above definition specifies HasUserPriority as a DialogGoal. The goal is defined as a query about requirements. When run as a query, it will retrieve all requirements which have a userPriority attribute which is filled with any value of type Priority. (In the ConceptBase query notation, "this" refers to the instance retrieved from the database before the constraint is applied; in this case, a Requirement.)
Complex dialog goals can be created through the constraint language provided by ConceptBase. For example, consider the case where requirements have an associated degree of inconsistency, called contention. We may want to resolve interactions among the most contentious requirements first. We can specify such a goal as follows:
Class MostContentiousUnresolvedRequirements isA StructuredRequirement with
constraint
c : $ not exists gr1/GenerateResolution (gr1 requirements this) and
exists thisCon/Integer (this Contention thisCon) and
not exists otherReq/StructuredRequirement otherCon/Integer
((otherReq Contention otherCon) and
(otherCon > thisCon) and
not exists gr2/GenerateResolution (gr2 requirements otherReq)) $
end
QueryClass ResolveHighestContentionFirst in DialogGoal isA RequirementInteraction with
checkModes
violation : Violation
constraint
c : $ exists req1,req2/MostContentiousUnresolvedRequirements
((this requirements req1) and (this requirements req2)) $
end
The above dialog goal definition, ResolveHighestContentionFirst, makes use of a derived class,
MostContentiousUnresolvedRequirements. This class is defined to be those requirements: 1) for
which there has not been a resolution generated, and 2) there does not exist another requirement
with a higher contention for which there has not been a resolution generated. Once MostContentiousUnresolvedRequirements is defined, specifying the goal ResolveHighestContentionFirst is easy. It is simply those requirements that are both: 1) in the MostContentiousUnresolvedRequirements, and 2) interact with each other, as denoted by both being in the requirements of the same RequirementInteraction. Thus, ResolveHighestContentionFirst makes use of the statement history (as captured in ConceptBase) to specify the goal of always selecting unresolved interactions among requirements with
the highest contention.
B. Checking the Dialog Goal Model
The dialog goal model can be used to automatically check the statement history for compliance.
The dialog goal model consists of a set of goals as specified above. To check compliance, statements
need to be compared against the constraints expressed in the goal. Two types of goal modes
can be checked: 1) has the goal been achieved, and 2) has the goal been violated. Failures of either
type are called a goal failure. The first is checked by simply running the goal query. The second is
checked by finding statements in the statement history which do not meet a goal's constraints. As
shown above in the definition of ResolveHighestContentionFirst, the modes of checking can be specified using checkModes. (Typically, goal violations are of greater concern.) A goal violation query
can be automatically constructed by negating a goal's constraint. 3 Such a query can be defined as
indicated below:
QueryClass CheckGoalViolation_G isA <class-list> , GoalViolationCheck with
constraint
c : $ not (this in G) $
end
3 In DEALSCRIBE, violation queries are automatically defined as part of initialization after the dialog model is loaded. However, if a goal constraint is null, then a violation query is not defined because the resulting query would be the same as the goal.
Given a goal G, the above shows how one can construct a violation query which is of the same
types as specified in goal G (i.e., if G isA Requirement, then CheckGoalViolation_G isA Requirement).
However, the constraint indicates that the query should return those instances which do not meet
the constraints in the goal G.
Violation checking queries, as defined above, not only determine if a goal is met, but also which
statements fail the constraint. It is possible to place a goal's constraints into the constraints of
statement definitions. Such integrity constraints would ensure that the dialog goal model is maintained at all times by rejecting statements that do not conform. (DEALSCRIBE allows this.) However, when a statement assertion failed, it would not be possible to determine which of multiple
goals the statement violated-when using most database technology. Moreover, no deviations
from the dialog goal model would be allowed. So, to enable a more flexible administration of dialog goal models, DEALSCRIBE runs goal checking queries to determine dialog compliance.
C. Defining Statements
Checking the statement history for dialog compliance can itself be considered a dialog operation. In fact, defining the dialog goal or statement models can also be considered a dialog operation. Currently, DEALSCRIBE is not used to define the dialog goal or statement model. Instead,
models are defined outside of DEALSCRIBE (using a text editor and ConceptBase tools).
Information statements are simply defined as ConceptBase objects. For example, a StructuredRequirement with a perspective, mode, and description could be defined as follows:
Class StructuredRequirement isA Requirement, InformationStatement with
attribute
perspective : Perspective;
mode : Mode;
description : String
end
From this definition, DEALSCRIBE generates an input form. A user can then fill in, or select, values for attributes of the object. Operation statements are similarly defined. For example, RunAnalysis is (partially) defined as follows:
Class RunAnalysis isA OperationStatement with
attribute
query : QueryClass;
result : Statement
end
Like information statements, the object attributes of operation statements may serve as input
fields; however, some may serve as output. All operation statements have an associated (Perl) subroutine
which is called. After a user fills in the input attributes, statement assertion begins. The
subroutine associated with the statement type is executed. It carries out the operation (typically a
ConceptBase query) and fills in the output attributes of the object and the statement is asserted. In
the above RunAnalysis, the program executes the selected queries and places the returned objects in
the result attribute.
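The form-generation step can be pictured as follows (a sketch under assumed attribute metadata, not DEALSCRIBE's actual code):
def generate_form(statement_type, attributes):
    # Render a crude HTML input form from a statement type's attribute
    # definitions, in the spirit of DEALSCRIBE's generated input forms.
    rows = [f'<label>{name}</label> <input name="{name}" data-type="{typ}">'
            for name, typ in attributes.items()]
    return f"<form><h3>{statement_type}</h3>{''.join(rows)}</form>"

print(generate_form("StructuredRequirement",
                    {"perspective": "Perspective",
                     "mode": "Mode",
                     "description": "String"}))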
D. Defining Monitors
A monitor can be used to continually check dialog compliance against a dialog goal model. In
fact, in DEALSCRIBE, any operation statement can be used to monitor the statement history. To do
so, 1) a user asserts an operation statement, S1, then 2) a user asserts a StartMonitor statement as a response to S1. The original assertion of S1 allows for the input parameters of S1 to be filled in;
optionally, the operation S1 may execute and assert its results. The assertion of the StartMonitor defines the conditions under which operation S1 will be invoked. DEALSCRIBE will run the operation, according to the monitor parameters, until a StopMonitor is asserted for S1. The statement history, as depicted by DEALSCRIBE in figure 3, indicates: 1) the initial assertion of ModelCheck, 2) the subsequent StartMonitor, 3) the subsequent monitor results, and finally, 4) the StopMonitor statement. Thus, monitoring is divided into two parts: 1) the condition under which the operation will
be invoked, and 2) the operation itself. Additionally, the operation may have its own conditions
which must be met before results are asserted.
The definition of a monitor specifies under what conditions an operation will be invoked. Com-
monly, a monitor specifies that an operation shall be invoked after every transaction. In the case of
monitoring a goal, this will ensure that a goal violation is immediately detected. However, some
operations may be computationally expensive, in either checking applicability conditions or
asserting results. In such cases, the monitor can be used to more selectively invoke the operation.
Monitors may be run periodically, for example, modulo the statement history count or by chronological time. They may assert new statements every time they are activated, only when they have results, or only when their results are new.
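As a rough illustration of this split between invocation condition and operation, consider the following Python sketch; the policy names and helper functions are ours, not DEALSCRIBE's.

def every_n_transactions(n):
    # Invoke the monitored operation modulo the statement history count.
    return lambda history: len(history) % n == 0

def monitor_step(history, condition, operation, only_new_results=True):
    if not condition(history):
        return None                      # condition not met: do nothing
    result = operation(history)
    if result is None:
        return None                      # operation has no results
    if only_new_results and result in history:
        return None                      # suppress repeated results
    history.append(result)               # assert the monitored result
    return result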
DEALSCRIBE's use of monitoring can be quite useful. First, basic operations can be run selectively and automatically. In addition to simply keeping analyses current, this can include automated synthesis. For example, if a resolution procedure were defined, it could be activated to assert resolution
alternatives each time an inconsistency were asserted. Second, goal models can be monitored to
alert (or remedy) when compliance lapses. Finally, the goal model itself can include the use of
monitoring. For example, it can be specified that requirement contention should be monitored periodically, as illustrated below:
QueryClass TransactionMonitorContention in DialogGoal isA StartMonitor with
checkModes
achievement : Achievement
constraint
exists thisOp/RootRequirementAnalysis (this statement thisOp) and
exists tranPeriod/Integer (this TransactionInterval tranPeriod) and (tranPeriod < 30) and
not exists stopMon/StopMonitor (stopMon statement thisOp)
Fig 3. A portion of a DEALSCRIBE WWW page showing statement headings: <number, icon, title, author, date>. The initial CheckModel statement is at the top, followed by a StartMonitor response, and the subsequent monitored responses of three types: CheckModel, RootRequirementsAnalysis, and StopMonitor. The final StopMonitor response ends monitoring of CheckModel. (Responses are shown indented, below, and with the newer statements toward the top.)
The definition of TransactionMonitorContention indicates that it is a StartMonitor operation. The check mode is set to Achievement, indicating that a lack of application of a StartMonitor fulfilling the associated constraints will result in a monitor-invoked operation. The constraints indicate that a StartMonitor statement should be asserted which monitors RootRequirementsAnalysis (an operation) with a transaction period of less than 30, and that a StopMonitor for RootRequirementsAnalysis should not have been asserted.
Monitoring of "monitor goals" is accomplished like all monitoring. Consider, for example, monitoring
of TransactionMonitorContention. First, an operation statement which analyzes goals achievement
and violation, GoalCheck, must be asserted with TransactionMonitorContention as an input.
Next, a StartMonitor response to the GoalCheck must be asserted. Whenever the monitor's condition
holds, GoalCheck will be invoked. If TransactionMonitorContention is not achieved, then a GoalCheck
statement, indicating the failure, will be asserted. Thus, monitoring itself can be monitored as part
of the dialog goal model.
IV. MANAGING INCONSISTENCY
We have developed and applied techniques aimed at assisting the management of requirements
inconsistencies. These techniques fall into two approaches based on their basic objective: 1) inconsistency understanding and 2) inconsistency removal. To aid inconsistency understanding,
we have developed Root Requirements Analysis[49]. This technique uncovers requirements
inconsistencies, analyzes the inconsistencies as a group, and directs analysis to key requirement
conflicts. It addresses inconsistency in the sense of requirements incompatibility or requirements
conflict. Such conflicts should be resolved prior to construction-even if the resolution is to use an
interactive resolver as part of the run-time system. Our second basic approach, called Requirements
Restructuring, generates resolutions to requirement conflicts[43][47].
In keeping with the theme of this article, this section shows how Root Requirements Analysis
can be incorporated into DEALSCRIBE to assist the management of requirements inconsistency.
Root Requirements Analysis is summarized and a DEALSCRIBE dialog goal model for it is defined
(-A). Next, we briefly indicate how other related requirements analysis techniques and Requirements
Restructuring can also be applied within DEALSCRIBE (-B). These techniques are then illustrated
in the following section V, "Case Studies".
A. Root Requirements Analysis
Two objectives of Root Requirements Analysis are: (1) understanding the relationships among
the requirements, and (2) ordering requirements by their degree of conflicting relationships. This
information can be used to guide other analyses, such as Requirements Restructuring.
The overall procedure of Root Requirements Analysis is:
1. Identify root requirements that cover all other requirements in the requirement document.
2. Identify interactions among root requirements.
3. Analyze the root requirement interactions.
More specifically: (1) requirements are (manually) generalized to derive root requirements, (2)
root requirements are (manually) pairwise compared to derive root requirements interactions, and
(3) requirements metrics are automatically derived from the root requirements interactions. This
technique is important in that it provides a systematic method by which requirements conflicts can
be surfaced and then systematically selected for efficient resolution. The following subsections
summarize each step.
Root Requirements
The objective of root requirement identification is to determine key requirements whose interaction
analysis leads to the discovery of significant requirements relationships. While one could
exhaustively compare every requirement with every other, in practice, such analysis is not feasible
for non-trivial requirements documents. Instead, we seek to identify root requirements which represent
key concepts from which other requirements are derived through elaboration. While the
binary comparison of such root requirements will not uncover every requirement relationship, it
will narrow analysis to key requirements to which further analysis can be applied.
The overall procedure of identifying root requirements is as follows: (1) group requirements into
sets by the concepts they reference, (2) order requirements by generality, (3) generate or select the
most general requirements for each concept, and (4) repeat steps 1-3 until concept generalizations
are not meaningful. 4 The resulting requirements are the root requirements. While it is desirable
that the root requirements be a minimal set which cover all other requirements through some set of
development relationships, such as elaboration, it is not necessary. In our application, root requirement
identification was an informal process aimed at identifying key requirements from which key
analyses can be derived.
Root Requirements Interactions
As Peter G. Neumann notes in his book on Computer Related Risks,
"The satisfaction of a single requirement is difficult enough, but the simultaneous and continued satisfaction
of diverse and possibly conflicting requirements is typically much more difficult." - Peter G.
Neumann[35].
The objective of identifying root requirements interactions is to surface any such conflicts which
can lead to failures during development or operation of the system. For example, individually two
requirements may be achieved on a single processor, but simultaneously achieving both can lead
to processor thrashing and the achievement of neither. More generally, a requirement may: (1)
deplete a shared resource, (2) remove a pre-condition of another requirement, (3) remove the
achieved effect of another requirement, or have other interfering actions. We refer to such negative interactions between requirements as a requirements conflict.
To identify root requirements interactions, each root requirement is exhaustively compared with every other root requirement. For every binary comparison, an analyst subjectively specifies: 1) the relationship type, and 2) the probability of conflict occurrence during system operation. While such relationships are both subjective and approximate, they have provided a good characterization of requirements relationships. In general, such subjective relationships are commonplace among informal requirement techniques[22][39], as well as some formal techniques[6].
4 Note that if requirement generalization is not a selective process, then a single requirement (e.g., Thing) would result. Thus, we apply generalization only when we subjectively deem it conceptually meaningful.
The relationship type consists of five qualitative descriptors indicating how two requirements are related to each other; the types are: Very Conflicting, Conflicting, Neutral, Supporting, and Very Supporting. While such requirement interrelationships can be defined formally[1][6] and even automatically derived from formal requirements[9][44], currently Root Requirements Analysis relies on a subjective determination.
Conflict potential is the subjective assessment of the probability that the requirements conflict
will occur in the running system. Consider two requirements, R 1 and R 2 . If one-third of all system
executions result in the achievement of both R 1 and R 2, and the other two-thirds result in a system failure, then the probability of conflict occurrence is two-thirds.
Analyzing Root Requirements Interactions
Once the root requirements interactions are identified, they can be used to derive useful metrics.
Three that are particularly helpful are: relationship count, requirement contention, and average
potential conflict. Relationship count is simply a count, for all root requirements, of the number of
interactions a root requirement has with other root requirements, for each of the five types of rela-
tionships. A completely independent root requirement will have n-1 Neutral relationships, for n
root requirements. More typically, a root requirement has a mix of conflicting, neutral, and supporting
relationships. Requirement contention is the percentage of all relationships the requirement
participates in which are conflicting; thus, if a requirement's contention is 1, then it conflicts
with every other requirement in the requirements document. Finally, average potential conflict is
the conflict potential of a requirement averaged across all of its conflicting relationships.
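These metrics are simple to compute mechanically. The following Python sketch is our own illustration of the three computations, with an invented input format; it is not the tool's code.

from collections import defaultdict

CONFLICT_TYPES = {"Very Conflicting", "Conflicting"}

def rra_metrics(interactions):
    # `interactions` maps a pair of root requirements to a tuple
    # (relationship type, conflict potential).
    counts = defaultdict(lambda: defaultdict(int))  # relationship count
    potentials = defaultdict(list)                  # conflict potentials
    total = defaultdict(int)
    for (r1, r2), (rel, potential) in interactions.items():
        for r in (r1, r2):
            counts[r][rel] += 1
            total[r] += 1
            if rel in CONFLICT_TYPES:
                potentials[r].append(potential)
    # Contention: fraction of a requirement's relationships that conflict.
    contention = {r: len(potentials[r]) / total[r] for r in total}
    # Average potential conflict over the conflicting relationships only.
    avg_conflict = {r: (sum(potentials[r]) / len(potentials[r])
                        if potentials[r] else 0.0) for r in total}
    return counts, contention, avg_conflict

example = {("R8", "R13"): ("Very Conflicting", 0.6),
           ("R8", "R3"): ("Conflicting", 0.4),
           ("R3", "R13"): ("Neutral", 0.0)}
counts, contention, avg_conflict = rra_metrics(example)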
While other metrics can be derived, we have found relationship count, requirement contention,
and average potential conflict to be the most useful. Using these simple computations, the requirements can be rank-ordered to guide their efficient resolution. For example, we have found that resolving the most contentious requirement first not only directly resolves one conflict, but often indirectly resolves others[49]. Thus, resolving high-contention requirements first is one of our dialog goals for Root Requirements Analysis (see ResolveHighestContentionFirst in -II.A).
A Root Requirements Dialog Goal Model
A dialog goal model can be defined for a technique such as Root Requirements Analysis. The goal model indicates the desired characteristics of processes and products which occur as
the statement history is constructed. As such, it can be construed as a methodology prescription for
the application of the technique. However, the following Root Requirements Analysis dialog goals
are only a part of a methodology. Yet, such partial models are appropriate for the monitoring style
of compliance analysis.
The following are four dialog goals for Root Requirements Analysis:
DeriveRoots
Do not have more than 20 requirements which do not have an associated root requirement.
DeriveInteractions
Do not have more than 10 root requirements which do not have an associated interaction.
DeriveContention
Do not have more than 3 new interactions which do not have an associated requirements contention
analysis. If so, do contention analysis.
ResolveHighestContentionFirst
Resolve requirements inconsistencies with highest contention first.
Each of the above goals depends on the previous goal in the sequence. The last goal, ResolveHighestContentionFirst, is a follow-up of the first three basic goals of RRA; it was defined in section II.A. The first three goals are actually simpler to define than ResolveHighestContentionFirst; however, due to the lack of arithmetic in ConceptBase, their definition is slightly baroque.
The first goal simply states that the root analysis should take place before too many (20) requirements are left without an associated root requirement. The definition would simply involve counting RequirementsWithNoRoot, where RequirementsWithNoRoot indicates those requirements which have not been analyzed. However, using DEALSCRIBE, we must introduce an intermediate ConceptBase "counting goal" which is appropriately interpreted by DEALSCRIBE. Doing so leads to the following definitions:
QueryClass RequirementsWithNoRoot isA Requirement with
constraint
not exists r/RootRequirement (r requirements this)
GenericQueryClass CountGoal isA DialogGoal with
parameter
query
count
compare
DialogGoal DeriveRoots isA CountGoal[RequirementsWithNoRoot/query, 20/count, Lesser/compare]
checkModes
violation : Violation
In the above definitions, RequirementsWithNoRoot finds those requirements for which there is no corresponding root requirement. The query CountGoal is a special parameterized query whose results are interpreted by DEALSCRIBE. The goal DeriveRoots fills in the parameters of CountGoal. The net result is that requirements without associated roots are counted. If the count is less than 20, the goal is achieved; otherwise, it is violated. If monitored, DEALSCRIBE will assert a CheckGoal monitor message (according to the parameters of the monitor) should the goal become violated. The definition of DeriveInteractions is quite similar.
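The counting-goal semantics can be paraphrased in a few lines of Python; this paraphrase is ours, and the callable standing in for ConceptBase is hypothetical.

def check_count_goal(run_query, query, count, compare):
    # Run the parameterized query, count its results, and compare the
    # count against the goal's threshold.
    results = run_query(query)
    satisfied = {"Lesser": len(results) < count,
                 "Greater": len(results) > count}[compare]
    return ("achieved" if satisfied else "violated"), results

status, offending = check_count_goal(
    lambda q: ["Req-7", "Req-19"],   # stand-in for a ConceptBase call
    "RequirementsWithNoRoot", 20, "Lesser")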
The definition of DeriveContention again takes a similar form, as shown below.
QueryClass InteractionWithNoAnalysis isA Interaction with
constraint
exists req/Requirement (this requirements req) and not exists con/Integer (req contention con)
DialogGoal DeriveContention isA CountGoal[InteractionWithNoAnalysis/query, 3/count, Lesser/compare]
checkModes
violation : Violation
violationRemedy
RRA: RootRequirementsAnalysis
However, there is one additional attribute that is part of the goal. The violationRemedy attribute indicates an operation statement that should be invoked automatically if a violation is observed as part of monitoring. Upon violation, a remedy operation is passed the dialog goal and the results of the violation checking query. In the case of RootRequirementsAnalysis, it simply ignores the input, updates the contention attribute of all requirements, and asserts its RootRequirementsAnalysis statement (see figure 3).
Finally, a Root Requirements model can be defined to consist of the above four goals, as illustrated below:
DialogModel RRA_GoalModel with
goals
DeriveRoots; DeriveInteractions; DeriveContention; ResolveHighestContentionFirst
Once so defined, this goal model may be selected as part of the input to run the ModelCheck opera-
tion. Thus, multiple goal models can co-exist and can be monitored at different times.
B. Other Requirements Analyses and Operations
As illustrated through this section, the dialog meta-model, as supported in DEALSCRIBE, provides
a convenient means of experimenting with monitoring of requirements development goals. Goals are expressed as logical formulas over the assertion of the information and operation statements into the statement history. Once the O-Telos logic is understood, it is relatively simple to
define new statements, goals, and monitors. For example, to incorporate aspects of the PSS05
standard in a working goal model, the following was done[28]:
Add userPriority as an attribute to the Requirement information statement type.
Add LimitEmptyPriorities as a dialog goal which seeks to limit the percentage of requirements without a user priority to less than 20 percent. Its definition is similar to that of DeriveRoots, but it uses a goal type that computes percentages.
Based on such small steps, we have found it relatively simple to experiment with different goal
models.
The monitoring facility of DEALSCRIBE provides a means to incorporate active monitors. Such monitors do more than signal that a goal has been violated. As illustrated in the DeriveContention goal, active monitors can initiate operations. For example, a DeriveResolution goal can be defined which, upon violation of a goal of consistency, invokes a resolution generation operation which asserts alternative resolutions for an inconsistency. Such monitors, judiciously asserting suggestions in the background, may provide a means to automated development where analysts are opposed to more direct assistance.
V. CASE STUDIES
Case studies have been conducted to assess the utility of the DEALSCRIBE implementation of the
dialog meta-model. Specifically, Root Requirements Analysis (-IV.A) was applied to one case
both without (-B) and with (-C) the use of DEALSCRIBE. Thus, the case studies help assess the
utility of DEALSCRIBE, as well as the dependence of Root Requirements Analysis on a particular
tool set. But, before discussing the case studies, the distributed meeting scheduler case is summarized.
A. Requirements for a Distributed Meeting Scheduler
To assess requirements analysis techniques and their tool support, we have repeatedly analyzed
the case of the distributed meeting scheduler requirements. The meeting scheduler case is useful
because of: (1) the complex requirements interactions which, depending on how they are
addressed, lead to considerable variation in the resulting implementations; (2) the availability of a
widely circulated compact, yet rich, requirements document[53]; and (3) the publication of prior
analysis of the case[40][52]-including our own[43][44][47]. Hence, this case allows us, and oth-
ers, to compare analyses[13].
The general problem of the meeting scheduler can be summarized by the introduction to the requirements document[53]:
The purpose of a meeting scheduler is to support the organization of meetings-that is, to deter-
mine, for each meeting request, a meeting date and location so that most of the intended participants
will effectively participate. The meeting date and location should thus be as convenient as possible to
all participants. Information about the meeting should also be made available as early as possible to
all potential participants.
The remaining requirements of the four page description refine the roles of the meeting scheduler
and participants.
B. Root Requirements Analysis of Inquiry Cycle Documents
To assess the utility of Root Requirements Analysis, we applied the method to an established
requirements engineering case, that of the distributed meeting scheduler. The objective of the case-study
was to assess two hypotheses: (1) could Root Requirements Analysis be easily incorporated
into an existing methodology?, and (2) could Root Requirements Analysis add value by uncovering
requirement relationships? Fortunately, we obtained access to analysis documents generated during the Potts et al. application of the Inquiry Cycle to the distributed meeting scheduler
problem[40]. By applying Root Requirements Analysis to the Inquiry Cycle discussion docu-
ments, we were able to assess both hypotheses.
Given the Inquiry Cycle analysis, we considered two ways to apply Root Requirements Analysis. First, the original requirements could be analyzed; such a case study would result in a direct comparison between the Inquiry Cycle and Root Requirements Analysis. Second, the requirements discussion of the Inquiry Cycle could be analyzed. For the initial study, we chose the second approach, as it provided an illustration of how Root Requirements Analysis could augment
another method[49]. However, the subsequent case study included both analyses within
DEALSCRIBE.
Figure 4 illustrates the result of applying the Inquiry Cycle model to the distributed meeting scheduler. The case produced 33 questions of the original requirements; 40 answers to those ques-
tions; 38 changes to the requirements; and reasons for the changes that were made.
Root Requirements Analysis was conducted using a word processor and a spreadsheet program. As
illustrated in figure 4, it led to the discovery of 23 very conflicting and 49 conflicting relationships.
The basic relationships for each root requirement are shown as a percentage of all relationships in
figure 5.
Root Requirements Analysis was useful in managing requirements interactions. As described in
section IV.A, the relationship count, requirement contention, and average potential conflict can be
used to determine which requirement conflict should be resolved first. In particular, we have found
it beneficial to resolve the most contentious requirement first. Thus, figure 5 shows that R 8 and R 3
are among the most contentious of all root requirements. However, of those that directly interact with each other (see ResolveHighestContentionFirst in -III.A), R 8 and R 13 have the highest contention. Thus, their interaction was considered first as part of conflict resolution.
C. Assisted Root Requirements Analysis
To assess the utility of DEALSCRIBE, we applied the same Root Requirements Analysis to the
same requirements. The objective of this second case-study was to determine: 1) if DEALSCRIBE
could automate the basic metric analyses of Root Requirements Analysis, and 2) if dialog goal
monitoring would be useful. The first objective was readily affirmed-DEALSCRIBE can automate
Root Requirements Analysis. The second objective is more subjective and will require empirical
studies. However, based on our use of DEALSCRIBE, we found goal monitoring to be of considerable
assistance in managing requirements development.
The automation study duplicated the manual study, but with the use of DEALSCRIBE. First, monitoring
of the Root Requirements Analysis dialog goal model was started (-IV.A). Second, the
Fig 4. Results of applying Root Requirements Analysis to the Inquiry Cycle discussion. In the Inquiry
Cycle, ovals indicate the number of unique instances of a type, while arcs indicate the flow of analysis
within the Inquiry Cycle. The Root Requirements and graph of relationship counts by type was created
from Root Requirements Analysis.
"++"
"-"
"-"
R-
RR-
A
R-
RR-
A
R-
R-A
RR-C
-D
A
RR-
A
RR-D
R
R-C
F
RRRFig 5. Graph of root requirements interactions. The percentage that each requirement participates with all
other requirements for five relationship types (Very Conflicting, Conflicting, Neutral, Supporting, and Very Sup-
porting) are presented in an additive "stacked" graph; ordered by increasingly negative interactions.
previously identified text requirements of the Inquiry Cycle discussion, root requirements, and interactions were automatically parsed and asserted into DEALSCRIBE. As statements were asserted
into DEALSCRIBE, goal failures were recognized and new information and remedies were automatically
asserted (see figure 3).
The Root Requirements Analysis results were the same as figure 4-as expected. However, the
analyses were automatically asserted in response to goal failures which incrementally occurred
during input. Thus, DEALSCRIBE maintained the Root Requirements Analysis metrics, including
the causes of goal failure.
Another case study using DEALSCRIBE was conducted. Rather than analyze the requirements
derived from the Inquiry Cycle discussion, as was done previously, the original meeting scheduler
requirements were analyzed. As it turns out, both documents have some common root require-
ments. Consequently, DEALSCRIBE could quickly derive the rather surprising results illustrated in
figure 6.
What is surprising is that analyzing the original 53 requirements uncovered nearly as many conflicts
as analyzing the Inquiry Cycle discussion. A priori, we hypothesized that, as a record of
stakeholder interaction, the Inquiry Cycle discussion would be richer in information-especially
conflicting requirements. As it turned out, the root requirements were nearly evenly distributed
across three sets: 10 from the Inquiry Cycle discussion, 11 from the original requirements, and 20
in both. Similarly, the conflicting interactions were nearly evenly distributed across three sets:
from the Inquiry Cycle discussion, 11 from the original requirements, and 55 in both; additionally,
there were 8 derived from interactions between the original and Inquiry Cycle roots.
VI. OBSERVATIONS
From the case studies, we observe that Root Requirements Analysis is useful, independent of tool support. However, automated monitoring of the Root Requirements Analysis development goals can significantly clarify the development status and reduce the effort of analysts.
A. Root Requirements Analysis
Root Requirements Analysis has been valuable in managing the development of requirements.
The technique can be applied to requirements irrespective of their form or refinement. It provides:
. an ordering of the most conflicting and interacting requirements
. requirements dependencies across the whole system
. summary information that is easily understandable through tables and graphs.
The information that Root Requirements Analysis provides gives insight into where the requirements development effort should be applied. For example, if one seeks to: 1) reduce the overall number
Fig 6. A comparison of applying Root Requirements Analysis to: a) the Inquiry Cycle (IC) discussion, and b) the original requirements, showing the relative number of root requirements and requirement conflicts.
of requirements conflicts (i.e., seek monotonically decreasing contention), and 2) reduce the number of prior resolutions that must be reconsidered (i.e., seek to minimize resolution backtracking),
then one should resolve the most contentious interacting requirements first. Root Requirements
Analysis can find such requirements by ordering conflict interactions by the degree of requirements
contention.
Root Requirements Analysis can provide a high level understanding of the requirements interac-
tions. Through the root identification process of generalization and the subsequent interaction
identification, higher-level interaction patterns and issues emerged. For example, in the meeting
scheduler, it became apparent that many root requirements had interactions concerning: 1) accurate
meeting planning data and 2) the need to complete the meeting scheduling process in a timely
way. Issues at this level of abstraction can be brought to the stakeholders for discussion and nego-
tiation, providing the analysts with guidance about relative priorities that can be used in the conflict
resolution process. Working on issues at this higher level requires significantly less time than
reviewing each individual conflict, and promotes consistent decision-making throughout the conflict
resolution process.
B. DealScribe's Dialog Modeling
DEALSCRIBE can be valuable in managing the development of requirements. DEALSCRIBE provides:
. modeling of informational and operational statements
. modeling of dialog goals
. monitoring of goal failures
. monitored analyses
. concurrent multi-user WWW interface to the dialog
DEALSCRIBE can manage dialogs where asserted statements can be represented as hierarchies of
informational and operational statements.
Applying DEALSCRIBE to the meeting scheduler case helped the analysts gain a clearer understanding of the requirements state and helped to focus development. Once the Root Requirements dialog
model was defined, the root analyses (metrics) were automatically derived by DEALSCRIBE.
Thus, rederiving the original analysis in DEALSCRIBE took essentially no effort. In the original
manual Root Requirements Analysis, analysts had to coordinate their work and refer to a common
spreadsheet to prevent a duplication of effort. In contrast, DEALSCRIBE's dialog view and dialog
model monitoring provided a development overview which facilitated multi-user coordination.
Finally, the meta-modeling supported by DEALSCRIBE facilitated a continual refinement of the
Root Requirements dialog model; for example, after an update of the model, DEALSCRIBE derived
a new WWW dialog interface.
VII. CONCLUSIONS
Applying the dialog meta-model, as implemented in DEALSCRIBE, has demonstrated the utility
of actively monitoring development goals-specifically, goals of reducing requirements inconsis-
tency. The success of the dialog meta-model can be partly attributed to the simple WWW interface
and various dialog views provided by DEALSCRIBE. However, the goal-based monitoring of the
dialog is the key feature. By activating dialog monitors, analysts can be assured that they will be
alerted if their process or product goals fail. Moreover, if specified as a goal, remedies can be automatically
applied to goal failures. Such active assessment of development goals helps to overcome
the chaos that emerges from the dynamic environment of multi-stakeholder analysis and volumi-
nous, complex, and changing requirements.
ACKNOWLEDGMENT
We gratefully acknowledge the help and cooperation of Drs. Colin Potts and Annie Antón for
providing documentation of their Inquiry Cycle analysis of the meeting scheduler. We also thank
Dr. Martin Feather for providing a tutorial on his goal monitoring system, FLEA, which greatly
assisted our construction of DEALSCRIBE. Finally, we thank Georgia State University and the College
of Business for funding portions of this research.
--R
Viewing Specification Design as a Planning Problem: A Proposed Perspective Shift
Systems Thinking
CASE: Reliability Engineering for Information Systems
Using Non-functional Requirements to Systematically Support Change
The architecture and design of a collaborative environment for systems definition
A field study of the software design process for large systems
A Qualitative Modeling Tool for Specification Criticism
Domain modeling with hierarchies of alternative viewpoints
"Standards Compliant Software Development"
Requirements and Specification Exemplars
Requirements Monitoring in Dynamic Environments
An Overview of Workflow Management: From Process Modeling to Infrastructure for Automation
On formal requirements modeling languages: RML revisited.
The Changing Roles of the Systems Analyst
Improving Communication and Decision Making within Quality Function Deployment
Supporting conflict resolution in cooperative design systems
Communication breakdowns and boundary spanning activities on large programming projects
SIBYL: a tool for managing group decision rationale
What Productivity Increases to Expect from a CASE Environment: Results of a User Survey
A review of the state of practice in requirements modeling
Information systems failures-a survey and classification of the empirical literature
If We Build It
Process Integration for CASE Environments
WebWork: METEOR's Web-based Workflow Management System
Computer systems in work design-the ETHICS method
On formalism in specifications
Technology to Manage Multiple Requirements Perspectives
Computer Related Risks
A Framework for Expressing the Relationship between Multiple Views in Requirements Specification
Using Software Technology to Define Workflow Processes
Recording the Reasons for Design Decisions
Supporting systems development by capturing deliberations during requirements engi- neering
Group Process and Conflict in Systems Development
GSU CIS Working Paper 96-15
Electronic Brokering for Assisted Contracting of Software Applets
Interactive Decision Support for Requirements Negotiation
Supporting the Negotiation Life-Cycle
Surfacing Root Requirements Interactions from Inquiry Cycle Requirements Documents
Workshop on Workflow
A Similarity Reasoning Approach
CASE Tools as Collaborative Support Technologies
Inside a software design team: Knowledge acquisition
A methodology for studying software design teams: An investigation of conflict behaviors in the requirements definition phase
Experience with the gIBIS model in a corporate setting
A Systematic Tradeoff Analysis for Conflicting Imprecise Requirements
--TR
--CTR
Steve Easterbrook , Marsha Chechik, 2nd international workshop on living with inconsistency, Proceedings of the 23rd International Conference on Software Engineering, p.749-750, May 12-19, 2001, Toronto, Ontario, Canada
Steve Easterbrook , Marsha Chechik, 2nd international workshop on living with inconsistency (IWLWI01), ACM SIGSOFT Software Engineering Notes, v.26 n.6, November 2001
Andrés Silva, Requirements, domain and specifications: a viewpoint-based approach to requirements engineering, Proceedings of the 24th International Conference on Software Engineering, May 19-25, 2002, Orlando, Florida
George Spanoudakis , Hyoseob Kim, Diagnosis of the significance of inconsistencies in object-oriented designs: a framework and its experimental evaluation, Journal of Systems and Software, v.64 n.1, p.3-22, 15 October 2002
Steve Easterbrook , Marsha Chechik, A framework for multi-valued reasoning over inconsistent viewpoints, Proceedings of the 23rd International Conference on Software Engineering, p.411-420, May 12-19, 2001, Toronto, Ontario, Canada
Licia Capra , Wolfgang Emmerich , Cecilia Mascolo, A micro-economic approach to conflict resolution in mobile computing, Proceedings of the 10th ACM SIGSOFT symposium on Foundations of software engineering, November 18-22, 2002, Charleston, South Carolina, USA
Licia Capra , Wolfgang Emmerich , Cecilia Mascolo, A micro-economic approach to conflict resolution in mobile computing, ACM SIGSOFT Software Engineering Notes, v.27 n.6, November 2002
Johan F. Hoorn , Elly A. Konijn , Hans van Vliet , Gerrit van der Veer, Requirements change: Fears dictate the must haves; desires the won't haves, Journal of Systems and Software, v.80 n.3, p.328-355, March, 2007
Javier Andrade , Juan Ares , Rafael García , Juan Pazos , Santiago Rodríguez , Andrés Silva, A Methodological Framework for Viewpoint-Oriented Conceptual Modeling, IEEE Transactions on Software Engineering, v.30 n.5, p.282-294, May 2004
N. Robinson , Suzanne D. Pawlowski , Vecheslav Volkov, Requirements interaction management, ACM Computing Surveys (CSUR), v.35 n.2, p.132-190, June | inconsistency and conflict management;meta-modeling;process modeling and monitoring;requirements engineering;CASE |
330365 | Finding Separator Cuts in Planar Graphs within Twice the Optimal. | A factor 2 approximation algorithm for the problem of finding a minimum-cost b-balanced cut in planar graphs is presented, for $b \leq {1 \over 3}$. We assume that the vertex weights are given in unary; for the case of binary vertex weights, a pseudoapproximation algorithm is presented. This problem is of considerable practical significance, especially in VLSI design.The natural algorithm for this problem accumulates sparsest cuts iteratively. One of our main ideas is to give a definition of sparsity, called net-sparsity, that reflects precisely the cost of the cuts accumulated by this algorithm. However, this definition is too precise: we believe it is NP-hard to compute a minimum--net-sparsity cut, even in planar graphs. The rest of our machinery is built to work with this definition and still make it computationally feasible. Toward this end, we use several ideas from the works of Rao [ Proceedings, 28th Annual IEEE Symposium on Foundations of Computer Science, 1987, pp. 225--237; Proceedings, 24th Annual ACM Symposium on Theory of Computing, 1992, pp. 229--240] and Park and Phillips [ Proceedings, 25th Annual ACM Symposium on Theory of Computing, 1993, pp. 766--775]. | Introduction
Given an undirected graph with edge costs and vertex weights, the balance of a cut is the ratio
of the weight of vertices on the smaller side to the total weight in the graph. For b ≤ 1/2, a cut having a balance of at least b is called a b-balanced cut; a 1/3-balanced cut is given the special name of a separator. In this paper, we present a factor 2 approximation algorithm for finding a minimum-cost b-balanced cut in planar graphs, for b ≤ 1/3, assuming that vertex weights are given in unary. We also give examples to show that our analysis is tight. For the case of binary vertex weights, we use scaling to give a pseudo-approximation algorithm: for each α > 2/b, it finds a (b − 2/α)-balanced cut of cost within twice the cost of an optimal b-balanced cut, for b ≤ 1/3, in time polynomial in n and α. The previous best approximation guarantee known for b-balanced cuts
in planar graphs was O(log n), due to Rao [8, 9]; for general graphs, no approximation algorithms
are known.
The problem of breaking a graph into "small" sized pieces by removal of a "small" set of edges or
vertices has attracted much attention since the seminal work of Lipton and Tarjan [5], because this
opens up the possibility of a divide-and-conquer strategy for the solution of several problems on
the graph. Small balanced cuts have numerous applications, see for example, [1, 3, 4, 6]. Several
* Max-Planck-Institut für Informatik, Im Stadtwald, 66123 Saarbrücken, Germany.
† Department of Computer Science & Engineering, Indian Institute of Technology, New Delhi 110016, India.
‡ College of Computing, Georgia Institute of Technology, Atlanta, GA 30332. Supported by NSF Grant CCR-
of these applications pertain to planar graphs, the most important one being circuit partitioning
in VLSI design.
The sparsity of a cut is defined to be the ratio of the cost of the cut and the weight on its smaller side, and a cut having minimum sparsity in the graph is called the sparsest cut. Rao [9] gave a 3/2-approximation algorithm for the problem of finding a sparsest cut in planar graphs, and recently Park and Phillips [7] showed that this problem is polynomial time solvable. A sparsest cut limits multicommodity flow in the same way that a min-cut limits max-flow. Leighton and Rao [3] derived an approximate max-flow min-cut theorem for uniform multicommodity flow, and in the process gave an O(log n)-approximation algorithm for finding a sparsest cut in general graphs. By finding and removing these cuts iteratively, one can show how to find in planar (general) graphs a b-balanced cut that is within an O(1) factor (an O(log n) factor) of the optimal b'-balanced cut for b < b' ≤ 1/3 [8, 9, 3]. For instance, using the Park-Phillips algorithm for sparsest cut in planar graphs, this approach gives a 1/4-balanced cut that is within 7.1 times the cost of the best 1/3-balanced cut in planar graphs. Notice, however, that these are not true approximation algorithms, since the best 1/3-balanced cut may have a much higher cost than the best 1/4-balanced cut.
This iterative algorithm has shortcomings due to which it does not lead to a good true approximation
algorithm; these are illustrated via an example in Section 3. One of our main ideas is to
give a definition of sparsity, called net-sparsity, that overcomes these shortcomings. The notion
of net-cost, on which this definition of net-sparsity is based, reflects precisely the cost of the cuts
accumulated iteratively. Indeed, it is too precise to be directly useful computationally - we believe
that computing the sparsest cut under this definition is NP-hard even in planar graphs. The rest
of our machinery is built to work with this definition and still make it computationally feasible,
and we manage to scrape by narrowly!
Planarity is exploited in several ways: First, a cut in a planar graph corresponds to a set of cycles
in the dual. Secondly, the notion of a transfer function turns out to be very useful. Given a planar
graph with weights on faces, this notion can be used to define a function on the edges of the graph
so that on any cycle it evaluates to the sum of the weights of the faces enclosed by the cycle.
Such an idea has been used in the past by Kasteleyn [2], for computing the number of perfect
matchings in a planar graph in polynomial time. Kasteleyn defined his function over GF [2]. Park
and Phillips [7] first defined the function over reals, thereby demonstrating the full power of this
notion.
Park and Phillips [7] have shown that the problems of finding a sparsest cut and a minimum b-
balanced cut in planar graphs are weakly NP-hard, i.e., these problems are NP-hard if the vertex
weights are given in binary. Indeed, the algorithm they give for finding the sparsest cut in planar
graphs is a pseudo-polynomial time algorithm. As a consequence of this algorithm, it follows that
if P 6= NP, finding sparsest cuts in planar graphs is not strongly NP-hard. On the other hand
it is not known if the b-balanced cut problem in planar graphs is strongly NP-hard or if there
is a pseudo-polynomial time algorithm for it (the present paper only gives a pseudo-polynomial
approximation algorithm). Park and Phillips leave open the question of finding a fully polynomial
approximation scheme for sparsest cuts in planar graphs, i.e., if the vertex weights are given in
binary. We give such an algorithm using a scaling technique.
2 Preliminaries
Let G = (V, E) be a connected undirected graph, with an edge cost function c on the edges and a vertex weight function wt on the vertices. Any function that we define on the elements of a universe,
extends to sets of elements in the obvious manner; the value of the function on a set is the sum
of its values on the elements in the set. Let W be the sum of weights of all vertices in G. A
partition (S; S) of V defines a cut in G; the cut consists of all edges that have one end point in S
and the other in S. A set of vertices, S, is said to be connected when the subgraph induced on it is
connected. If either S or S is connected then cut (S; S) will be called a simple cut and when both
S and S are connected then the cut (S; S) is called a bond.
Given a set of vertices S ⊆ V, we define the cost of this set, cost(S), as the sum of the costs of all
edges in the cut (S; S). The weight of the set S, wt(S), is the sum of the weights of the vertices
included in S.
A cut (S; S) is a separator if the weight of each side lies between W/3 and 2W/3. The cost of a separator is the sum of the costs of the edges in the separator.
Lemma 2.1 For any connected graph G there exists a minimum-cost separator, (S; S), which is a
simple cut. Further, if S is the side that is not connected, then each connected component of S has
weight strictly less than W/3.
Proof: Let (S; S) be a minimum-cost separator in G. Consider the connected components obtained on removing the edges of this separator. Clearly, no component has weight strictly larger than 2W/3.
If all components have weight strictly less than W/3, then both S and S are not connected, and we can arrive at a contradiction as follows. We first pick two components that have an edge between them and then pick the remaining, one by one, in an arbitrary order till we accumulate a weight of at least W/3. The accumulated weight cannot exceed 2W/3 since each component has weight at most W/3. Thus we obtain a separator of cost strictly less than the cost of the separator (S; S); a contradiction.
Hence, at least one component has weight between W/3 and 2W/3. If there are two such components then these are the only components, since by switching the side of a third component we would obtain a cheaper separator. If there is only one component of weight between W/3 and 2W/3 then this separator is optimal only if this component forms one side of the cut and the remaining components the other side. Thus (S; S) is a simple cut, and if some side of the cut is not connected then all components on that side have weight strictly less than W/3.
Hence there always exists a minimum-cost separator that is a simple cut. Let OPT denote the set
of vertices on the side of this separator that is not connected.
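The accumulation argument in the proof above is easy to check mechanically. The following Python sketch (ours, not the paper's) verifies only the weight bookkeeping; it ignores the detail of first picking two adjacent components, which is what makes the resulting separator strictly cheaper.

def accumulate(component_weights):
    W = sum(component_weights)      # the components partition V
    total = 0.0
    for w in component_weights:
        assert w < W / 3            # hypothesis of the contradiction case
        total += w
        if total >= W / 3:
            break
    assert total <= 2 * W / 3       # the accumulated side stays balanced
    return total

accumulate([3, 2, 4, 3, 3, 3])      # W = 18, every component < 6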
3 Overview of the algorithm
Let S be a set of vertices such that wt(S) is at most the weight of the complement of S. The sparsity of S is usually defined as the quotient of the cost and the weight of this set, i.e., cost(S)/wt(S).
Figure 1: Graph with vertex weights and edge costs showing how minimum sparsity increases.
A natural approach to finding good separators is to repeatedly find a set of minimum sparsity and remove it from the graph, eventually reporting the union of the removed vertices. It is easy to concoct "bad" examples for this approach by ensuring that the picked vertices always come from the smaller side of the optimal separator, and thereby ensuring that the minimum sparsity available in the remaining graph keeps increasing. This is illustrated in Figure 1; here the first cut picked has a sparsity of 1/m, whereas the last cut has a much higher sparsity.
This approach has two shortcomings: it removes the vertices picked in each iteration and deals only
with the remaining graph in subsequent iterations, and it assumes that edges of the cuts found in
each iteration are picked permanently, even though they may not be needed in the final cut. One
of our main ideas is to give a definition of "sparsity" under which this algorithm does not suffer
from either of these shortcomings.
Let S and T be two sets of vertices. Define the net-cost of S with respect to T as
net-cost_T(S) = cost(S ∪ T) − cost(T),
and the net-weight of S with respect to T as
net-weight_T(S) = wt(S ∪ T) − wt(T).
Thus, if we have already picked the set of vertices T, then net-cost_T(S) measures the extra cost incurred and net-weight_T(S) the weight added in picking the set S ∪ T. Finally, define the net-sparsity of S with respect to T as
net-sparsity_T(S) = net-cost_T(S) / net-weight_T(S).
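A direct computation from these definitions is straightforward; the following Python sketch is our own illustration (note that net-cost, and hence net-sparsity, can be negative).

def cut_cost(edges, S):
    # `edges` is a list of (u, v, cost) triples.
    return sum(c for u, v, c in edges if (u in S) != (v in S))

def net_cost(edges, S, T):
    return cut_cost(edges, S | T) - cut_cost(edges, T)

def net_weight(weight, S, T):
    return sum(weight[v] for v in S - T)

def net_sparsity(edges, weight, S, T):
    return net_cost(edges, S, T) / net_weight(weight, S, T)

edges = [(1, 2, 5), (2, 3, 1), (3, 4, 5), (1, 4, 1)]
weight = {1: 1, 2: 1, 3: 1, 4: 1}
print(net_sparsity(edges, weight, {2}, {1}))   # prints -4.0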
For any algorithm that picks a cut by accumulating sets of vertices, the notion of net-cost gives
precisely the extra cost incurred in each iteration. But is it so precise that computing the sparsest
cut under this definition turns out to be NP-hard even in planar graphs? Although we do not have
an answer to this question, we believe that it is "yes"! Indeed, the rest of our machinery is built to
work with this definition and still make it computationally feasible, and we manage to scrape by
Let us first show that it is not sufficient to just keep picking sets of minimum net-sparsity. Consider
the following example: Suppose very sparse set of weight W\Gamma ffl, and
S 2 is a set of high sparsity and weight ffl, for a small ffl. Having picked S 1 , we might pick another
set, S 3 of sparsity almost that of S 2 , and weight W\Gamma ffl, and hence, the cost incurred would be
arbitrarily high compared to the optimum.
We get around this difficulty by ensuring that in each iteration the set of vertices we pick is such that the total weight accumulated is strictly under W/3. More formally: let T_{i-1} be the set of vertices picked by the end of the (i−1)th iteration (T_0 = ∅). In the ith iteration we pick a set D_i such that:
[weight] wt(T_{i-1} ∪ D_i) < W/3;
[sparsity] D_i has minimum net-sparsity with respect to T_{i-1} among all sets satisfying the weight condition;
[minimality] D_i has minimum net-weight among all sets satisfying the above conditions.
We call the set D_i a dot and denote it by •. Thus, at the end of the ith iteration the set of vertices we have picked is given by T_i = T_{i-1} ∪ D_i. This is how we augment the "partial solution" in each iteration.
How do we ever obtain a "complete solution" (a separator)? In the ith iteration, besides augmenting the partial solution T_{i-1} to the partial solution T_i, we also augment it to a complete solution, i.e., we pick a set of vertices B_i such that W/3 ≤ wt(T_{i-1} ∪ B_i) ≤ 2W/3.
For i = 1, finding the set B_1 corresponds to finding the minimum-cost separator. To avoid this circularity in the argument we restrict B_i to a smaller class of sets: the cut defined by T_{i-1} ∪ B_i is a bond and W/3 ≤ wt(T_{i-1} ∪ B_i) ≤ 2W/3.
We call the set B_i a box and denote it by □. Notice that a box set need not be a bond, and that we count a □ at its cost rather than its net-cost. This is done only to simplify the algorithm and its analysis. The example which shows that the analysis of our algorithm is tight also shows that counting the □ at its net-cost would not have led to any improvement in the approximation guarantee.
So, in each iteration we obtain a separator. The solution reported by the algorithm is the one of
minimum cost from among all these separators. The algorithm, which we call the Dot-Box Algorithm, is the following.
Algorithm Dot-Box Algorithm;
1. minsol ← ∞, i ← 0, T_0 ← ∅
2. while wt(T_i) < W/3 do
2.1. i ← i + 1
2.2. Find • and □ sets, D_i and B_i respectively.
If there is no • set, exit.
2.3. minsol ← min(minsol, cost(T_{i-1} ∪ B_i))
2.4. T_i ← T_{i-1} ∪ D_i
end.
We make two remarks regarding step (2.2): First, we conjecture that finding the • set is NP-hard. Our procedure to find • sets may not always succeed; however, we will prove that if it fails, then the □ set found in the current iteration gives a separator within twice OPT. Second, at some iteration it might be the case that no subset of vertices satisfies the weight criterion for a •, since each set takes the total weight accumulated to W/3 or more. In this case, the Dot-Box Algorithm halts and outputs the best separator found so far.
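In code, the outer loop of the algorithm is short. The following Python sketch is our schematic rendering; find_dot and find_box are placeholder oracles for the procedures developed in the remaining sections, each returning a vertex set or None.

def dot_box(vertices, wt, cost, find_dot, find_box):
    W = sum(wt[v] for v in vertices)
    T = set()                                   # T_0
    best_cost, best_side = float("inf"), None   # minsol
    while sum(wt[v] for v in T) < W / 3:
        B = find_box(T)                         # completes T to a separator side
        if B is not None:
            side = T | B
            if cost(side) < best_cost:
                best_cost, best_side = cost(side), side
        D = find_dot(T)                         # min net-sparsity, weight-capped
        if D is None:                           # no dot set: exit with best so far
            break
        T = T | D                               # T_i = T_{i-1} union D_i
    return best_cost, best_side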
4 Analysis of the Dot-Box Algorithm
We first prove some properties of net-cost and net-weight which will be useful for the analysis.
From the definition of net-cost and net-weight we have that net-cost_T(S) = net-cost_T(S − T) and net-weight_T(S) = net-weight_T(S − T) = wt(S − T). The following property also follows from the definitions.
Property 4.1 Let S_1, S_2 be two sets of vertices, not necessarily disjoint. Then
net-cost_T(S_1 ∪ S_2) = net-cost_T(S_1) + net-cost_{T∪S_1}(S_2), and
net-weight_T(S_1 ∪ S_2) = net-weight_T(S_1) + net-weight_{T∪S_1}(S_2).
Property 4.2 net-cost_T(S) ≤ net-cost_{S∩T}(S).
Proof: Figure 2 shows the edges between the sets, from which the above property is immediate. The net-cost of S with respect to S ∩ T may be higher because it includes the cost of edges from S − T to T − S.
The following property is immediate from Figure 3.
Figure 2: Computation of net-cost_T(S) and net-cost_{S∩T}(S). A +/− on an edge signifies that the edge is counted in the positive/negative term in the net-cost computation.
Figure 3: Computation of net-cost_T(S_1 ∪ S_2) = net-cost_T(S_1) + net-cost_T(S_2).
Property 4.3 Let S_1, S_2 be two disjoint sets of vertices with no edges between them. Then
net-cost_T(S_1 ∪ S_2) = net-cost_T(S_1) + net-cost_T(S_2).
Remark 4.1 For positive real numbers a, b, c, d,
min(a/b, c/d) ≤ (a + c)/(b + d) ≤ max(a/b, c/d).
Further, let a_1, ..., a_n and b_1, ..., b_n be positive real numbers. Then there exists an index i such that a_i/b_i ≤ (a_1 + ... + a_n)/(b_1 + ... + b_n).
Lemma 4.1 The net-sparsity of the •'s is increasing, i.e., net-sparsity_{T_{i-1}}(D_i) ≤ net-sparsity_{T_i}(D_{i+1}).
Proof: Since the set D_i ∪ D_{i+1} satisfies the weight requirement for a • at the ith iteration,
net-sparsity_{T_{i-1}}(D_i) ≤ net-sparsity_{T_{i-1}}(D_i ∪ D_{i+1}).
By Property 4.1,
net-cost_{T_{i-1}}(D_i ∪ D_{i+1}) = net-cost_{T_{i-1}}(D_i) + net-cost_{T_i}(D_{i+1}), and
net-weight_{T_{i-1}}(D_i ∪ D_{i+1}) = net-weight_{T_{i-1}}(D_i) + net-weight_{T_i}(D_{i+1}),
which using Remark 4.1 gives us:
min(net-sparsity_{T_{i-1}}(D_i), net-sparsity_{T_i}(D_{i+1})) ≤ net-sparsity_{T_{i-1}}(D_i ∪ D_{i+1}) ≤ max(net-sparsity_{T_{i-1}}(D_i), net-sparsity_{T_i}(D_{i+1})).
Now, by the first inequality, it must be the case that
max(net-sparsity_{T_{i-1}}(D_i), net-sparsity_{T_i}(D_{i+1})) = net-sparsity_{T_i}(D_{i+1}).
The lemma follows.
Let k be the first iteration at which some connected component of OPT meets the weight requirement of a □.
Lemma 4.2 For i < k, net-sparsity_{T_{i-1}}(D_i) ≤ net-sparsity_{T_{i-1}}(OPT).
Proof: Since OPT ⊈ T_{i-1}, there are connected components of OPT which are not completely contained in T_{i-1}. By assumption, none of these components satisfies the weight requirement for a □; hence each of these components meets the weight requirement for a •. Hence the • picked in this iteration should have net-sparsity at most that of any of these components.
By Property 4.3, the net-cost of OPT is the sum of the net-costs of these components of OPT. The same is true for net-weight, and hence the component of OPT with minimum net-sparsity has net-sparsity less than that of OPT. The lemma follows.
The above two lemmas imply that the net-sparsity at which the •'s are picked is increasing and that for any iteration before the kth, this net-sparsity is less than the net-sparsity of OPT in that iteration.
Lemma 4.3 For i < k, cost(T_i) < cost(OPT).
Proof: To establish this inequality for i we consider two processes:
1. The first process is our algorithm, which picks the set of vertices D_j at the jth step, 1 ≤ j ≤ i.
2. The second process picks the vertices D_j ∩ OPT at the jth step, 1 ≤ j ≤ i−1. At the ith step it picks the remaining vertices of OPT.
Let P_j be the set of vertices picked by the second process in the first j steps. Then P_j = T_j ∩ OPT for j < i, and P_i = OPT. At the jth step the second process picks an additional weight of net-weight_{P_{j-1}}(P_j) at a cost of net-cost_{P_{j-1}}(P_j). By the fact that the second process picks a subset of what the first process picks at each step we have:
Claim 4.1 For 1 ≤ j ≤ i−1, net-weight_{T_{j-1}}(D_j) ≥ net-weight_{P_{j-1}}(P_j).
Claim 4.2 For 1 ≤ j ≤ i, net-sparsity_{T_{j-1}}(D_j) ≤ net-cost_{P_{j-1}}(P_j)/net-weight_{P_{j-1}}(P_j).
To prove Claim 4.2, by Property 4.2 we have
net-cost_{T_{j-1}}(P_j) ≤ net-cost_{P_{j-1}}(P_j).
Further,
net-weight_{T_{j-1}}(P_j) = net-weight_{P_{j-1}}(P_j),
and hence net-sparsity_{T_{j-1}}(P_j) is at most the right-hand side above. For j < i the claim follows since P_j satisfies the weight requirement for a • and D_j was picked as the •. For j = i the claim follows from Lemma 4.2.
The above claims imply that in each iteration (1 through i) the first process picks vertices at a
lower net-sparsity than the second process. If both these processes were picking the same additional
weight in each iteration then this fact alone would have implied that the cost of the vertices picked
by the first process is less than the cost of the vertices picked by the second. But this is not the case.
What is true, however, is the fact that in iterations 1 through i-1 the first process picks a larger
additional weight than the second process. In the i th iteration, the second process picks enough
additional weight so that it now has accumulated a total weight strictly larger than that picked
by the first process (since wt(OPT) ≥ W/3 > wt(T_i)). But the net-sparsity at which the second
process picks vertices in the i th iteration is more than the maximum (over iterations 1 through
i) net-sparsity at which the first process picks vertices. So it follows that the cost of the vertices
picked by the first process is strictly less than the cost of the vertices picked by the second, i.e., cost(T_i) < cost(OPT).
Consider the separator found in the kth iteration, i.e., the cut defined by T_{k-1} ∪ B_k. This solution is formed by picking •'s in the first k−1 steps and a □ in the kth step.
Lemma 4.4 cost(T_{k-1} ∪ B_k) ≤ 2 cost(OPT).
Figure 4: A tight example for our analysis. Vertex weights and edge costs are given.
Proof: Any connected component of OPT is a bond. In the kth iteration there exists a connected component of OPT, say OPT_j, such that W/3 ≤ wt(T_{k-1} ∪ OPT_j) ≤ 2W/3. Hence the □ at the kth step should have cost at most cost(OPT_j), i.e., cost(B_k) ≤ cost(OPT).
From Lemma 4.3 we know that cost(T_{k-1}) < cost(OPT). Hence cost(T_{k-1} ∪ B_k) ≤ cost(T_{k-1}) + cost(B_k) < 2 cost(OPT).
Since the Dot-Box Algorithm outputs the best separator found, we have:
Theorem 4.5 The cost of the separator found by the Dot-Box Algorithm is at most twice the
cost of OPT.
Our analysis of the Dot-Box Algorithm is tight; when run on the example in Figure 4, it picks a separator of cost almost twice the optimum. In this example, the construction is parameterized by ε. For ε > 3/n, the • in the first iteration is the set C and the □ is the set A. This separator, which is also the one returned by the Dot-Box Algorithm, has cost almost twice that of the optimal separator; hence the approximation ratio tends to 2.
5 Structural properties of our solution, and a computationally
easier definition of net-cost
In this section we prove some structural properties of the solution found by the Dot-Box Algorithm.
This allows us to redefine net-cost in such a manner that it becomes computationally easier and
yet the analysis from the previous section continues to hold.
Lemma 5.1 For 1 ≤ i ≤ k−1, the complement of T_i is connected.
Proof: For contradiction assume that the complement of T_i is not connected. Let A be a connected component of this complement. There are three cases:
wt(T_i ∪ A) < W/3: The set A satisfies the weight requirement for a • at the (i+1)th iteration. Since A has edges only to vertices in T_i, the net-cost of A with respect to T_i is negative. Hence net-sparsity_{T_i}(A) is negative, contradicting Lemma 4.1.
W/3 ≤ wt(T_i ∪ A) ≤ 2W/3: The cut defined by T_i ∪ A is a separator of cost cost(T_i ∪ A) < cost(T_i) < cost(OPT); a contradiction.
wt(T_i ∪ A) > 2W/3: Since wt(T_i) < W/3, the condition of this case implies that wt(A) > W/3. If wt(A) ≤ 2W/3 then once again we have a contradiction, since now the cut defined by A is a separator of cost at most cost(T_i) < cost(OPT). Thus it must be the case that wt(A) > 2W/3.
Since the above argument implies that each connected component of the complement of T_i has weight greater than 2W/3, the complement must have only one connected component.
Lemma 5.2 For 1 ≤ i ≤ k−1, T_i is connected.
Proof: For contradiction assume that T_i is not connected. Let A be a connected component of T_i and let B be the rest of T_i (we also denote by A, B the corresponding sets of vertices).
If D_i is the • at the ith iteration then net-cost_{T_{i-1}}(D_i) = net-cost_{T_{i-1}}(T_i). Since A, B are disjoint sets of vertices with no edges between them, by Property 4.3
net-cost_{T_{i-1}}(T_i) = net-cost_{T_{i-1}}(A) + net-cost_{T_{i-1}}(B).
Further,
net-weight_{T_{i-1}}(T_i) = net-weight_{T_{i-1}}(A) + net-weight_{T_{i-1}}(B).
Thus either it is the case that one of A, B has smaller net-sparsity than D_i, which contradicts the assumption that D_i is a •, or else both A, B have the same net-sparsity as D_i, but this contradicts the minimality requirement on D_i.
Lemma 5.3 For every iteration i there exists a •, D_i, satisfying:
1. The cut defined by D_i is a bond.
2. Each connected component of T_{i-1} is contained in D_i or in the complement of D_i, and there is no edge between D_i and the components of T_{i-1} in the complement of D_i.
Proof: The set T_i − T_{i-1} together with any subset of T_{i-1} is also a • for the ith iteration. We form a new •, D_i, by merging T_i − T_{i-1} with the connected components of T_{i-1} which have an edge to it. Since T_i is connected, so is D_i. Further, since the graph is connected, every remaining component of T_{i-1} has an edge to the complement of T_i, so that the complement of D_i is also connected. Thus the cut defined by D_i is a bond. It follows from the definition of D_i that there is no edge between D_i and the components of T_{i-1} in the complement of D_i.
Since every • in iterations 1 through k−1 can be assumed to satisfy the conditions in Lemma 5.3, we can restrict our search for the • at the ith iteration to sets that satisfy these conditions as additional requirements:
[bond] The cut defined by D_i is a bond.
[components] Each connected component of T_{i-1} is contained in D_i or in the complement of D_i, and there is no edge between D_i and the components of T_{i-1} in the complement of D_i.
be the graph obtained by shrinking each connected component of T i\Gamma1 into a
single vertex, removing the self-loops formed and replacing each set of parallel edges by one edge
of cost equal to the sum of the cost of the edges in the set. For finding a ffl at the i th iteration
we consider only such sets, S, such that no connected component of T i\Gamma1 is split across (S; S) and
(S; S) is a bond. Therefore we need to look only at subsets of V i that correspond to bonds in G i .
Let S be a subset of vertices in G i . The trapped cost of S with respect to T i\Gamma1 , denoted by
trapped-cost T i\Gamma1 (S), is the sum of the costs of the components of T i\Gamma1 that are contained in S. We
now redefine the net-cost of S with respect to T i\Gamma1 as
net-cost T trapped-cost T
Note that for any subset of vertices in G i , the net-cost under this new definition is at least as
large as that under the previous definition. However, and this is crucial, the net-cost of the ffl set
remains unchanged. This is so because by Lemma 5.3 there are no edges between D i and the
components of T i\Gamma1 not in D i . Therefore, a ffl under this new definition of net-cost will also be a ffl
under the previous definition, and so our analysis of the Dot-Box Algorithm continues to hold.
6 Onto planar graphs
We do not know the complexity of computing ffl sets; we suspect that it is NP-hard even in planar
graphs. Yet, we can implement the Dot-Box Algorithm for planar graphs - using properties
of cuts in planar graphs, and by showing that if in any iteration, the algorithm does not find the ffl
set, then in fact, the separator found (using the 2 set found in this iteration) is within twice the
optimal (this is proven in Theorem 7.1).
6.1 Associating cycles with sets
Let G D be the planar dual of G and we fix an embedding of G D .
Proposition 6.1 There is a one-to-one correspondence between bonds in G and simple cycles in
G D .
Proof: Let (S; S) be a bond in G. Since S is connected, the faces corresponding to S in G D are
adjacent and so the edges of G D corresponding to (S; S) form a simple cycle.
For the converse, let C be a simple cycle in G D which corresponds to the cut (S; S) in G. Let u; v
be two vertices in G that are on the same side of the cut (S; S). To prove that (S; S) is a bond it
suffices to show a path between u and v in G that does not use any edge of (S; S).
Embed G and G D in R \Theta R, and consider the two faces of G D corresponding to vertices u and v.
Pick an arbitrary point in each face, for instance, the points corresponding to u and v. Since C is a
simple cycle in G D (and hence in R \Theta R) there is a continuous curve (in R \Theta R) that connects the
two points without intersecting C. By considering the faces of G D that this curve visits, and the
edges of G D that the curve intersects, we obtain a path in G that connects vertices u; v without
using any edge of (S; S).
Since for finding a ffl and 2 we only need to consider sets, S, such that (S; S) is a bond, we can
restrict ourselves to simple cycles in G D . Furthermore, the two orientations of a simple cycle can be
used to distinguish between the two sides of the cut that this cycle corresponds to. The notation we
adopt is: with a cycle C directed clockwise we associate the set of faces in G D (and hence vertices
in G) enclosed by C (the side that does not include the infinite face is said to be enclosed by C
and the side containing the infinite face is said to be outside C).
Let ~
G D be the graph obtained from G D by replacing each undirected edge (u; v) by two directed
edges u). By the preceding discussion, there exists a correspondence between
sets of vertices, S, in G such that (S; S) is a bond and directed simple cycles in ~
G D .
6.2 Transfer function
We associate a cost function, c, with the edges of ~
G D in the obvious manner; an edge in ~
G D is
assigned the same cost as the corresponding dual edge in G. Thus, for a directed cycle, C, c(C),
denotes the sum of the costs of the edges along the cycle. We would also like to associate functions,
with the edges of ~
G D so that if S is the set corresponding to a directed simple cycle C,
trapped-cost T We achieve this by means of a
transfer function.
The notion of a transfer function was introduced by Park and Phillips [7], and can be viewed as
an extension of a function given by Kasteleyn [2]. A function g defined on the edges of ~
G D is
u). (Notice that the function c defined above is symmetric.)
R be a function on the vertices of G. The transfer function corresponding to f is an
anti-symmetric function, f t , on the edges of ~
G D such that the sum of the values that f t takes on
the edges of any clockwise (anticlockwise) simple cycle in ~
G D , is equal to the (negative of the) sum
of the values that f takes on the vertices corresponding to the faces enclosed by this cycle.
That a transfer function exists for every function defined on the vertices of G and that it can be
computed efficiently follows from the following simple argument. Pick a spanning tree in G D , and
set f t to zero for the corresponding edges in ~
G D . Now, add the remaining edges of G D in an order
so that with each edge added, one face of this graph is completed. Note that before the edge e is
added, all other edges of the face that e completes have been assigned a value under f t . One of the
two directed edges corresponding to e is used in the clockwise traversal of this face, and the other
in the anti-clockwise traversal. Since the value of f for this face is known and since f t should sum
to this value (the negative of this value) in a clockwise (anti-clockwise) traversal of this face, the
value of f t for the two directed edges corresponding to e can be determined. Note that the function
obtained in this manner is anti-symmetric and this together with the fact that the edges of any
simple cycle in G D can be written as a GF [2] sum of the edges belonging to the faces contained in
the cycle implies that f t has the desired property.
7 Finding ffl sets
Recall that a ffl at the i th iteration is a bond in the graph G hence we can restrict
our search for a ffl at the i th iteration to directed simple cycles in ~
G D
7.1 Obtaining net-weight and net-cost from transfer functions
Let ~
be two functions defined on the vertices of G i as follows. The
vertices in V i obtained by shrinking connected components of T i\Gamma1 have ~
equal to the
cost of the corresponding component of T i\Gamma1 . The remaining vertices have ~
denote the transfer functions corresponding to functions ~
We now relate the values of
the functions c; on a directed simple cycle to the net-cost, net-weight and net-sparsity of the
set corresponding to the cycle.
Let C be a directed simple cycle in ~
G D
i and S ae V the set corresponding to it. If C is clockwise
then the net-weight and trapped-cost of S are given by the values of the transfer functions on C,
i.e.
net-weight T
trapped-cost T
If C is anti-clockwise then the values of the transfer functions w equal the negative of the
net-weight and the trapped-cost of the set enclosed by C (which is S in our notation). Hence
net-weight T
trapped-cost T trapped-cost T
Recalling our new definition of net-cost
net-cost T trapped-cost T
We conclude that if C is clockwise
net-sparsity T
and for anti-clockwise C,
net-sparsity T
Hence, for a simple directed cycle C, once we know the values of the transfer functions w
is easy to determine the net-weight and net-sparsity of the corresponding set S. Note that the
orientation of C can be determined by the sign of w i (C) since w implies that
C is clockwise (anti-clockwise).
7.2 The approach to finding a ffl
For a fixed value of w i (C), net-sparsity T i\Gamma1 (S) is minimized when
suggests the following approach for finding a ffl:
For each w in the range (0 - w - W
compute min-cycle(w): a directed simple cycle with minimum
directed cycles C with w Find the net-sparsity of the set corresponding to
each of these cycles. The set with the minimum net-sparsity is the ffl for this iteration.
However, we can implement only a weaker version of procedure min-cycle. Following [7], we
construct a graph H i whose vertices are 2-tuples of the kind (v; is a vertex in ~
G D
j is an integer between \GammanW and nW . For an edge
G D
i we have, for all possible
choices of j, edge (u; (e). The shortest path between (v;
and (v; w) in H i gives the shortest cycle among all directed cycles in ~
G D
which contain v and for
which w i By doing this computation for all choices of v, we can find the shortest cycle
with
Two questions arise:
1. Is negative cycles? This is essential for computing shortest paths efficiently.
2. Is the cycle obtained in ~
G D
The answer to both questions is "no". Interestingly enough, things still work out. We will first
tackle the second question (in Theorem 7.1), and then the first (in Lemma 7.2).
7.3 Overcoming non-simple cycles
Before we discuss how to get over this problem, we need to have a better understanding of the
structure of a non-simple cycle, C. If C is not a simple cycle in ~
G D
arbitrarily into
a collection of edge-disjoint directed simple cycles, C. Let (S be the cut (in G i ) corresponding
to a cycle C be the side of the cut that has smaller net-weight. Further, let S be
the collection of sets S j , one for each C j 2 C.
trapped-cost T
\Gammatrapped-cost T
trapped-cost T
trapped-cost T
Figure
5: Relationship between net-weight T
(S) and w i (C) for the four cases.
The value of the transfer functions w is the sum of their values over the cycles C j in the
collection C. Also,
For each cycle C we need to relate the net-weight, trapped-cost of S j to the value of the
transfer functions w might either be clockwise or anti-clockwise. Further S j might
either be inside C j or outside C j . This gives us a total of four different cases. The relationship
between net-weight T trapped-cost T Figure 5
and can be captured succinctly as follows
trapped-cost T
\Gamma1g.
Hence we get a decomposition rule relating the value of the functions w on a non-simple cycle
C to the net-weight and trapped-cost of the sets induced by this cycle.
trapped-cost T
is an integer.
7.4 A key theorem
Let D i be a ffl at the i th iteration (i - the directed simple cycle in ~
G D
corresponding
to it. Further, let C be the directed cycle reported by min-cycle(w i (C )).
Theorem 7.1 If C is not simple then the separator found in this iteration has cost at most 2 \Delta
cost(OPT), i.e.
Proof: Since C is the directed cycle for which among all cycles with
claim the following.
If C is clockwise, i.e. w i (C
net-cost T
and if C is anti-clockwise, i.e. w i (C
net-cost T
Substituting for w i (C) and t i (C) by the decomposition rule we get
\Gammaz \Delta cost(T
trapped-cost T net-cost T
where z is x if C is clockwise and x
We now prove that there exists S j 2 S which meets the weight requirement for a 2 and has cost
no more than the cost of T i , i.e,
1. W- wt(T net-cost T
Assume for contradiction that no such exists. The following observations about the cost/net-cost
of a set S j 2 S are immediate.
Observation 7.1 if net-weight T
net-cost T
which implies
net-cost T net-cost T
Observation 7.2 if net-weight T
net-sparsity T
and hence
net-cost T
From the above two observations it follows that
Observation 7.3 All sets S j 2 S have non-negative net-cost, i.e. net-cost T
The idea behind obtaining a contradiction is as follows. For every integral choice of z we use
equation 1 to provide a lower bound on the total net-weight of the sets S j 2 S and equation 2 to
provide an upper bound on the total net-cost of the sets S j 2 S. We then use the above observations
on the cost/net-cost of sets S j 2 S to argue that there is no way of having sets with so large a total
net-weight at so little a total net-cost.
We shall consider 3 cases depending upon whether z is positive/negative/zero.
Equation 2 implies
net-cost T
trapped-cost T
net-cost T
and from equation 1 we have
net-weight T
net-weight T
Since the net-cost of each set is non-negative (Observation 7.3) each set in S has net-cost no
more than net-cost T (D i ). This in turn implies that every set in S has net-weight strictly
less than W=3 \Gamma wt(T Thus every set in S meets the weight requirement
for a ffl. Since the net-cost of every set in S is non-negative, Remark 4.1 applied to the above
two inequalities implies that either there exists S j 2 S of lower net-sparsity than D i or that
every set in S has the same net-sparsity as D i and that the sum of the net-weight of the sets
in S is equal to the net-weight of D i . The first setting leads to a contradiction since every
set in S satisfies the weight requirement for a ffl and D i is the ffl at this iteration. The second
setting in turn contradicts the minimality requirement on D i .
denote the collection of sets S j 2 S with y now yields
net-cost T
trapped-cost T
trapped-cost T
where the second inequality follows from the fact that all sets in S non-negative
net-cost. We shall develop a contradiction by showing that the costs of the sets in S \Gamma is more
than the left hand side of the above inequality. A lower bound on the total net-weight of the
sets in S \Gamma can be obtained from equation 1 as follows
net-weight T
What is the cheapest way of picking sets so that their net-weight is at least z(W \Gamma wt(T
net-weight T 7.2 a set S j such that net-weight T
can be picked only at a net-sparsity of at least net-sparsity T On the other
hand Observation 7.1 says that we could be picking sets with large net-weight for cost little
more than cost(T net-cost T any set in S has net-weight at most
cost of picking these large sets could be as small as
net-cost T
net-cost T
net-cost T
net-weight T
net-sparsity T
where the last inequality follows from the fact that sparsity(T
which in turn is a consequence of Lemma 4.1.
Thus the cheapest possible way of picking sets is to pick sets of net-weight W \Gammawt(T
incurring a cost of little more than cost(T net-cost T for each set picked. Since we
need to pick a net-weight of at least z(W \Gamma wt(T would have to
pick at least 2z \Gamma 1 such sets and so the cost incurred is at least
net-cost T
net-cost T
where the last inequality follows from the fact that z - 1 (hence
This however contradicts the upper bound on the sum of the costs of the sets in S \Gamma , that we
derived at the beginning of this case.
denote the collection of sets S j 2 S with y Equation 2 now yields
net-cost T
trapped-cost T
net-cost T
The total net-weight of the sets in S + can be bounded using equation 1 as follows.
net-weight T
net-weight T
where the last inequality follows from the fact that the net-weight of D i is non-negative.
What is the cheapest way of picking sets so that their net-weight is at least \Gammaz(W \Gamma wt(T
Once again by Observation 7.2 a set S j of net-weight less than W
can be picked
only at a net-sparsity of at least net-sparsity T On the other hand Observation 7.1
says that we could be picking a set of net-weight as large as (W \Gamma wt(T i\Gamma1 ))=2 for a net-cost
that is only strictly larger than net-cost T
net-cost T
net-cost T
net-weight T
net-sparsity T
the cheapest possible way of picking sets is to pick sets of net-weight W \Gammawt(T
2 and incur
a net-cost strictly larger than net-cost T for each set picked. Since we need to pick a
net-weight of at least \Gammaz(W \Gamma wt(T i\Gamma1 )), we should pick at least \Gamma2z such sets. Since z - \Gamma1,
the total net-cost of these sets is strictly larger than net-cost T contradicting the upper
bound derived at the beginning of this case.
We have thus established that there exists a set S j 2 S which meets the weight requirement for a
2 and has cost no more than cost(T i ). Further, S j corresponds to a directed simple cycle in ~
G D
Our procedure for finding a 2 returns a set of cost less than the cost of any set that meets the
weight requirement for a 2 and corresponds to a directed simple cycle in ~
G D . Hence
Therefore
where the last inequality follows from Lemma 4.3 and the fact that i - k \Gamma 1.
For each w in the range [0:: W
suffices to find in G i a
directed cycle (not necessarily simple) with minimum among all directed cycles C
with some w the shortest cycle is not simple we discard the cycle and do not
consider that w for the purpose of computing the ffl. If in the process we discard the cycle with
then by the above theorem the separator found in this iteration is within twice
the optimum. Else, we obtain a simple cycle C with w and the set corresponding to
this cycle is a ffl.
Finally we have to deal with the case that there are negative cycles in H i . A negative cycle in H i
corresponds to a cycle C in ~
G D
i such that w i
Lemma 7.2 If C is a cycle in ~
G D
i such that w i then the separator
found in this iteration has cost at most 2 \Delta cost(OPT).
Proof: The proof of this lemma is along the lines of Theorem 7.1. We decompose C into a collection
C of directed simple cycles. For C be the side of the cycle with smaller net-weight and
let S be the collection of sets S j , one for each C j 2 C. Using the decomposition rule we have
\Gammaz \Delta cost(T
trapped-cost T
For contradiction we assume that every S j 2 S which satisfies the weight requirement for a 2 has
cost more than cost(T i ). By Observation 7.3 every set S j 2 S has non-negative net-cost. Hence
equation 4 yields
z
trapped-cost T
trapped-cost T
which implies that z ? 0.
A lower bound on the total net-weight of sets in S \Gamma can be obtained using equation 3.
net-weight T
Beyond this point the argument is almost identical to that for the case when z ? 0 in the proof
of Theorem 7.1. This contradicts our assumption that every set S j 2 S which meets the weight
requirement of a 2 has cost more than cost(T i ). As in the proof of Theorem 7.1, the 2 picked
in this iteration has cost at most cost(T i ) and hence the cost of the separator output is at most
By Lemma 7.2, we need to compute shortest paths in graph H i only if it has no negative cycles.
Finding 2 sets
We will use Rao's algorithm [8, 9] to find a 2 set. Let be a weight function on
the vertices of G such that in the i th
iteration, then (B b-balanced bond in G when the weights on the vertices are given by w
to find the 2 we need to find the minimum-cost simple cycle in G D
which corresponds to a b-balanced bond in G.
Rao [8, 9] gives an algorithm for finding a minimum cost b-balanced connected circuit in G D . A
connected circuit in G D is a set of cycles in G D connected by an acyclic set of paths. Intuitively,
a connected circuit can be viewed as a simple cycle with 'pinched' portions corresponding to the
paths. The cost of a connected circuit is defined to be the cost of the closed walk that goes through
each pinched portion twice and each cycle once. A connected circuit in G D defines a simple cut in
G; the vertices corresponding to faces included in the cycles of the connected circuit form one side
of the cut. A connected-circuit is b-balanced if the cut corresponding to it is b-balanced. Note that
the cost of the cut defined by a connected circuit is just the sum of the costs of the cycles in it.
Hence, the definition of cost of a connected circuit is an upper bound on the cost of the underlying
cut; the two are equal if the connected circuit is a simple cycle.
Notice that for a 2 we do not really need to find a minimum-cost b-balanced bond in G; any cut
that is b-balanced and has cost no more than the minimum-cost b-balanced bond will serve our
purpose. Hence we can use Rao's algorithm to find a 2. The total time taken by Rao's algorithm
to obtain an optimal b-balanced connected circuit cut is O(n 2 W ).
9 Running time
Clearly, the algorithm terminates in at most n iterations. The running time of each iteration
is dominated by the time to find a ffl. In each iteration, computing a ffl involves O(n) single
source shortest path computations in a graph with O(n 2 W ) vertices and O(n 2 W ) edges; the edge-
lengths may be negative. This requires O(n 4 W 2 log nW using Johnson's extension of the
all-pairs shortest path algorithm of Floyd and Warshall. Hence, the total running time of the
Dot-Box Algorithm is O(n 5 W 2 log nW ). This is polynomial if W is polynomially bounded.
Theorem 9.1 The Dot-Box Algorithm finds an edge-separator in a planar graph of cost within
twice the optimum, and runs in time O(n 5 W 2 log nW ), where W is the sum of weights of the
vertices.
Dealing with binary weights
The size of the graph in which we compute shortest paths (and hence the running time of the
Dot-Box Algorithm) depends on the sum of the vertex weights. Using scaling we can make
our algorithm strongly polynomial; however, the resulting algorithm is a pseudo-approximation
algorithm in the sense that it compares the cut obtained with an optimal cut having a better
balance. Finally, we use our scaling ideas to extend the algorithm of Park and Phillips into a fully
polynomial approximation scheme for finding sparsest cuts in planar graphs with vertex weights
given in binary, thereby settling their open problem.
10.1 b-balanced cut
Let us scale the vertex weights so that the sum of the weights is no more than ffn (ff ? 1). This
can be done by defining a new weight function -
The process of obtaining the new weights can be viewed as a two step process: first we scale the
weights by a constant factor ffn
W and then truncate. The first step does not affect the balance of any
cut since all vertex weights are scaled by the same factor. However, the second step could affect
the balance of a cut. Thus a cut (S; S) with balance b under the weight function wt might have a
worse balance under -
wt since all vertices on the side with smaller weight might have their weights
truncated. However, the total loss in weight due to truncations is at most n (1 for each vertex).
The balance would be worst when the total weight stays at ffn (not drop by the truncations) and
then the loss of weight of the smaller side is a 1=ff fraction of the total weight. Thus the balance
of the cut (S; S) under -
wt might be b \Gamma 1=ff but no worse.
Similarly a cut (S; S) with balance - b under -
wt might have a worse balance under wt. It is easy to
show by similar a argument that under wt, (S; S) has a balance no worse than -
Let OPT denote the cost of the optimum b-balanced cut under the weight assignment wt. Since this
cut might be (b \Gamma 1=ff)-balanced under -
wt, we use the Dot-Box Algorithm to find a (b \Gamma 1=ff)-
balanced cut of cost within 2OPT. The cut returned by our algorithm, while being (b \Gamma 1=ff)-
balanced under -
wt, might only be (b \Gamma 2=ff)-balanced under wt. Thus we obtain a (b \Gamma 2=ff)-balanced
cut of cost within twice the optimum b-balanced cut.
Theorem 10.1 For ff ? 2=b, the Dot-Box Algorithm, with weight scaling, finds a (b \Gamma 2=ff)-
balanced cut in a planar graph of cost within twice the cost of an optimum b-balanced cut for b - 1
in O(ff 2 n 7 log nff) time.
10.2 Sparsest cut
Assume that vertex weights in planar graph G are given in binary. Let 2 p be the least power of
2 that bounds the weight of each vertex and let W be the sum of weights of all vertices. We
will construct each having the same edge costs as G. In G i ,
vertex weights are assigned as follows: Let ff be a positive integer; ff determines the approximation
guarantee as described below. Vertices having weights in the range [2 are assigned
their original weight, those having weight are assigned weight 0, i.e., they can be deleted from
the graph, and those having weight ? 2 i+2 log n+ff+2 are assigned weight 2 i+2 log n+ff+2 . The sparsest
cut is computed in each of these graphs using the algorithm of Park and Phillips. For the purpose
of this computation, the weights of all vertices in G i are divided by 2 notice that this leaves the
weights integral, and the total weight of vertices is at most O(2 ff n 3 ). The running time of [7] is
O(n 2 w log nw), where w is the total weight of vertices in the graph. So, this computation takes
time O(ff2 ff n 5 log W log n), which is polynomial in the size of the input, for fixed ff. The sparsity
of the cuts so obtained is computed in the original graph, and the sparsest one is chosen.
Let (S; S) be an optimal sparsest cut in G, and let S be its lighter side. Let the weight of S
be t, and let q be the weight of the heaviest vertex in S. Pick the smallest integer i such that
Lemma 10.2 The cut found in G i has cost at most that of (S; S), and weight at least
Therefore this cut has sparsity within a factor of 1
of the sparsity of (S; S).
Proof: The algorithm of Park and Phillips searches for the cheapest cut of each weight, for all
choices of weight between 0 and half the weight of the given graph. It then outputs the sparsest of
these cuts.
First notice that for the choice of i given above, the weight of S in G i , say t
t. Indeed, any set of vertices whose weights are at most 2 i+2 log n+ff+2 in G satisfies that its
weight drops by a factor of at most
. On the other hand, any set of vertices containing
a vertex having weight ? 2 i+2 log n+ff+2 in G has weight exceeding t in G i . Therefore, the cut found
in G i for the target weight of t 0 satisfies the conditions of the lemma.
For a given choice of ffi ? 0, pick the smallest positive integer ff so that 1
. Then, we
get the following:
Theorem 10.3 The above algorithm gives a fully polynomial time approximation scheme for the
minimum-sparsity cut problem in planar graphs. For each ffi ? 0, this algorithm finds a cut of
sparsity within a factor of (1 + ffi) of the optimal in O( 1
log n) time.
Open problems
Several open problems remain:
1. Is the problem of finding the cheapest b-balanced cut in planar graphs strongly NP-hard, or
is there a pseudo-polynomial time algorithm for it?
2. What is the complexity of finding a minimum net-sparsity cut in planar graphs, assuming
that the vertex weights are given in unary?
3. What is the complexity of finding ffl sets in planar graphs, assuming that the vertex weights
are given in unary?
4. Can the algorithm given in this paper be extended to other submodular functions?
5. Can it be extended to other classes of graphs? In particular, can the notion of transfer
function be extended to other classes of graphs?
--R
A framework for solving vlsi graph layout problems.
Dimer statistics and phase transitions.
An approximate max-flow min-cut theorem for uniform multicommodity flow problems with application to approximation algorithms
A separator theorem for planar graphs.
Applications of a planar separator theorem.
Finding minimum-quotient cuts in planar graphs
Finding near optimal separators in planar graphs.
Faster algorithms for finding small edge cuts in planar graphs.
--TR
--CTR
Eyal Amir , Robert Krauthgamer , Satish Rao, Constant factor approximation of vertex-cuts in planar graphs, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, June 09-11, 2003, San Diego, CA, USA | approximation algorithms;planar graphs;separators |
330368 | Achilles, Turtle, and Undecidable Boundedness Problems for Small DATALOG Programs. | DATALOG is the language of logic programs without function symbols. It is considered to be the paradigmatic database query language. If it is possible to eliminate recursion from a DATALOG program then it is bounded. Since bounded programs can be executed in parallel constant time, the possibility of automatized boundedness detecting is believed to be an important issue and has been studied in many papers. Boundedness was proved to be undecidable for different kinds of semantical assumptions and syntactical restrictions. Many different proof techniques were used. In this paper we propose a uniform proof method based on the discovery of, as we call it, the Achilles--Turtle machine, and make strong improvements on most of the known undecidability results. In particular we solve the famous open problem of Kanellakis showing that uniform boundedness is undecidable for single rule programs (called also sirups).This paper is the full version of [J. Marcinkowski, Proc. 13th STACS, Lecture Notes in Computer Science 1046, pp. 427--438], and [J. Marcinkowski, 11th IEEE Symposium on Logic in Computer Science, pp. 13--24]. | Introduction
.
1.1.
Introduction
. The query relation R, that answers, for a given directed
graph (database), if that is possible, for given two nodes, to reach one of them from
the other in an odd number of steps, is not a first order one. That is because of lack
of recursion in the first order logic. This observation led to the study of DATALOG
(DATAbase LOGic) programs which combine existential positive first order logic with
recursion. For example the relation R can be defined by an "odd-distance" DATALOG
program:
(i) R(X;Y ):- E(X;Y
where E is the edge relation of the graph. E is so called extensional predicate
we treat it as an input and are not able to prove new facts about it. R is the
output, or intensional predicate (IDB). The program proves facts about it. The first
rule is an initialization rule: it has only the extensional predicate in the body. But
the second rule contains the intensional predicate among its premises, so it can be
used recursively and deep proofs can be constructed. It is clear that if in some graph
there is a path from an element A to B of an odd length n then to prove R(A; B) for
Supported by the KBN grant 8T11C02913
y jma@tcs.uni.wroc.pl, Institute of Computer Science, Wroc/law University, ul Przesmyckiego 20,
J. MARCINKOWSKI
such elements a proof of a depth about log n may be needed. So in huge databases
arbitrarily deep proofs are necessary to evaluate the program.
On the other hand, consider a program which computes the "has 3-tail" query:
(iii) 1TAIL(Z):- E(Z; X).
(v) 3TAIL(Z):- 2TAIL(Y ); E(Z; Y ).
If 3TAIL(A) is provable for some A then there exists a proof of the fact which
is not deeper than 3, regardless of the number of elements in the database. 1TAIL,
2TAIL and 3TAIL are IDB predicates and the second and third rules are recursive.
But in fact, the recursion can be eliminated at all from the last program. It is possible
to write an equivalent one where only proofs of deep 1 will be necessary:
The recursion can be eliminated from a given program, and the program is equivalent
to a first order query if and only if there is an a priori upper bound on the
depth of the proofs needed to evaluate queries, and so every fact that can be derived
by the program, can be derived in constant time (in parallel, with polynomially many
processors) independent of the size of the database (this equivalence was proved in
[3], the "if" direction is nontrivial). Such programs are called bounded.
1.2. Previous works and our contribution. The problem of distinction
whether a given DATALOG program is bounded or not, is important for DATALOG
queries optimization, but is, in general, undecidable. Sufficient conditions for boundedness
were given in [17], [10], [18] and [19]. The decidability-undecidability border,
for cases of different syntactical restrictions and semantical assumptions has been
studied in [20], [5], [2], [6], [8], [9], [24], [23].
The syntactical restrictions considered were: number of rules or of recursive rules
in the program, maximal arity of the IDB symbols and linearity of rules.
The semantical assumptions concern the status of the IDB relations before the
execution of the program. If they are empty, then we deal with weak (program)
boundedness. While arbitrary relations must be considered as possible IDB inputs
then strong (uniform) boundedness is studied.
Undecidability of uniform boundedness implies undecidability of program boundedness
for fixed syntactical restrictions (with possibly some additional initialization
rules, see Section 1.7 for a discussion). The survey of previously known results ((i)-(v)
below) illustrates the difference in the level of difficulty of undecidability proofs, for
uniform and program boundedness.
Decidability has been proved for monadic programs program boundedness,
(so also for the uniform) [6], [5] and for typed single rule programs [20]. It is also
known that the program (and uniform) boundedness is decidable for programs with
single linear recursive rule if the IDB predicate is binary [24]. Moreover, program
boundedness is decidable for binary programs if each IDB predicate is defined by only
one recursive rule [23].
Undecidability has been proved for
(i) program boundedness of linear binary programs [9].
ACHILLES, TURTLE, AND SMALL DATALOG PROGRAMS 3
(ii) program boundedness of programs with one recursive rule and two initializations
[2],
(iii) Program boundedness of programs consisting of two linear recursive rules
and one initialization [9],
(iv) uniform boundedness of ternary programs [9],
(v) uniform boundedness of arity 5 linear programs [8]
Decidability of the uniform boundedness for programs consisting of only one rule
was stated as an open problem in [11], where NP-hardness of the problem was proved
and then in [2] and [12]. No undecidability results for uniform boundedness of programs
with small number of rules have been proved since then.
In this paper we give strong improvements of the results (ii)-(v) showing that:
(vi) uniform boundedness is undecidable for ternary linear programs (Section 3.1).
This improves the results (iv) and (v).
(vii) uniform boundedness is undecidable for single recursive rule ternary programs
(Section 3.3). This improves (iv).
The additional improvement is, that our program is syntactically simpler: the
recursive rule is quasi-linear, which means that, generally speaking, it has a form:
where I and J are intensional predicates. Since it is the only recursive rule, the
proof from the program is a tree with only one (possibly) long branch.
Notice that in (vi) and (vii) we still allow a number of initializations so the results
hold also for program boundedness.
(viii) uniform and program boundedness are undecidable for programs consisting
of one linear recursive rule and one initialization (Section 4.3).
Since program boundedness is clearly decidable for programs consisting of one
rule the result (viii) closes the number/linearity of rules classification for program
boundedness. It is a strong improvement of (ii) and (iii).
Finally, in Section 4.5 we solve the problem of Kanellakis showing that:
(ix) uniform boundedness of single rule programs is undecidable.
1.3. The Method. While different techniques were used in the proofs of the
results (i)-(v) (reduction to the halting and mortality problems of a Turing Machine,
reduction from the halting problem of a two counters machine, syntactical reduction
of an arbitrary DATALOG program to a single recursive rule program), we develop
for all our results a universal method, based on an encoding of Conway functions. We
have learned about Conway functions from the paper of Philippe Devienne, Patrick
Leb'egue and Jean-Christophe Routier [7] , who used them to prove undecidability of
the, so called, "cycle unification". We feel that our paper would not have been written
without their previous work. Our encoding is nevertheless quite different from the
one in [7]: the first difference is that a language with functions was used there.
We construct, as we call it, Achilles-Turtle machine, a variant of Turing machine.
Next, we use a version of the Conway theorem to prove that what we constructed is
really a universal machine. Then we encode the Achilles-Turtle machine with DATALOG
programs. Due to particular simplicity of Achilles-Turtle machine (one is really
tempted to claim that it is the simplest known universal machine) it is possible to
encode it with syntactically very small DATALOG programs. We believe that this is
not the last time that Achilles-Turtle machine is used in undecidability proofs.
4 J. MARCINKOWSKI
We combine the Conway functions method with the technique of using a binary
EDB relation as an order: if there is a chain long enough in the relation then we can
think that it represents a tape of the machine. If there is no such chain then proofs
are not too long. This method goes back to [9] and [8].
1.4. Open Problems. While the classification is finished for program boundedness
the following syntactical restrictions still give interesting open problems concerning
decidability of uniform boundedness:
(i) binary programs,
(ii) linear binary programs,
(iii) programs consisting of a single linear rule.
We do not know any example of syntactical restrictions for which uniform boundedness
would be decidable and program boundedness not. It seems that the most
likely candidate for the example is the class of linear binary programs. Program
boundedness is known to be undecidable for the class.
1.5. Preliminaries. A DATALOG program is a finite set of Horn clauses (called
rules) in the language of first order logic without equality and without functions.
The predicates, used in the program, but only in the bodies of the rules, are called
extensional predicates or EDB. A predicate which occurs in a head of some rule is
called intensional or IDB. A rule is called recursive if an IDB predicate occurs in its
body. A rule which is not recursive is an initialization rule. A recursive rule is linear
if it has only one occurrence of IDB in the body. A program is linear if each of its
recursive clauses is linear. Arity of a DATALOG program is the highest arity of the
IDB predicates used.
So, for example, in the two programs of Section 1.1 the predicate E is extensional,
and all the other predicates are intensional. The rules (i) and (iii) are initializations.
Rules (ii), (iv) and (v) are recursive. Rules (iv) and (v) are linear and so the "has
3 tail" program is linear. It is also monadic, while the "odd-distance" program is
binary.
A database is a finite set of ground atomic formulas. A derivation (or a proof) of
a ground atomic formula A, from the program P and the database D, is a finite tree
such that: (i) each of its nodes is labeled with a ground atomic formula, (ii) each leaf
is labeled with an atom from D, (iii) for each non-leaf node there exists a rule R in
the program P and a substitution oe, such that oe of the head of R is the label of the
node and the substitutions of the body of R are the labels of its children, (iv) A is
the label of the root of the tree. The depth of the proof is the depth of the derivation
tree.
Instead of writing proof from the program P in the database D we use the expression
simply proof if the context is clear.
Notice that if P is a linear program then a P-proof is a sequence of ground atomic
formulas. In such case we use the word length for the depth of the proof.
In general, a program P is bounded if for every database D, if an atom can be
proved from P and D then it has a proof not deeper than a fixed constant c.
Different conventions concerning the input and output of a DATALOG program
correspond to different definitions of boundedness: predicate, program and uniform
boundedness are studied. A program is predicate bounded , with respect to a fixed
predicate PRE, if there is a constant c such that for every database D, such that there
are no facts about IDB predicates in D, and for every ground atom
if the atom has a proof from P and D then it has a proof not deeper than c. This
definition reflects the situation when the EDB predicates are the input and only one
ACHILLES, TURTLE, AND SMALL DATALOG PROGRAMS 5
predicate is the output of the program. A program P is program bounded if it is
predicate bounded for all IDB predicates.
A program is uniformly bounded if there is a constant c such that for every
database D, (here we do not suppose that the IDB predicates do not occur in D)
and for every ground atom A if the atom has a proof from P and D then it has a
proof not deeper then c. Here all the predicates are viewed as the input and as the
output of a program.
1.6. Example: Program Boundedness vs. Uniform Boundedness. To
make the difference between program boundedness and uniform boundedness clear
for the reader we give an example of a program which is bounded but not uniformly
bounded
The signature of the program consists of one extensional predicate E and one
intensional predicate I . Both the predicates are binary.
The rules are:
It is convenient to think that E is a graph, and I is a kind of a pebble game:
by the initialization rule (i) we can start the game by placing both the pebbles in
any node which has a tail of length at least 2. By the rule (iii) we do not need to
distinguish between the pebbles. By rules (iv) and (v) we can always move one of the
pebbles to a neighbouring node, and finally, if the two pebbles meet in node that is
the end of a tail of length at least two then we can by the rule (ii), move the pebbles
to any two nodes.
We prove that the program is program bounded but not uniformly bounded:
Lemma 1.1. For a database D such that the input predicate I is empty, either
there are no proofs in D or for each pair D;E of elements of D the fact I(D; E) can
be proved in no more than 7 derivation steps
Proof. We consider two cases:
case 1: There are elements in D such that E(A; B) and E(B;C) hold.
Then, in the first step, we use the rule (i) to prove I(A; A). Then, using twice the
rule (v) we get I(C; A). Then use the rule (iii) to get I(A; C) and twice the rule (v)
to get I(C; C). Finally, the rule (ii) can be used to derive I(D; E).
case 2: There are no such elements in the database D. Then, since I is
given as empty, no proofs at all are possible.
The structure of the proof of the Lemma 1.1 above, as well as the structure of the
program itself, is a good illustration of one of the ideas of the proofs in Sections 3 and
4. The program contains some initialization rule (rules) which allows to start a kind
of game, or computation, if only there exists a substructure of required form in the
database. Then, if there is enough of facts in the database we can proceed with the
computation and, when it terminates, use an analogon of the rule (ii) to "flood" the
6 J. MARCINKOWSKI
database. Otherwise, if there is no enough of facts then only short proofs are possible
(or no proofs at all, as in the example).
Lemma 1.2. For each constant c there exist a database D with nonempty input
predicate I, and elements of D such that I(A; B) is P-provable, but the shortest
proof of the fact requires more than c steps.
Proof. The database contains the elements: C and the following
facts: E(C
We will show that the fact I(C 2c ; C 1 ) is provable, and the shortest proof
has exactly steps. First we show that such a proof exists: in the k'th step use
the already proved fact derive I(C The rule used in the k'th
derivation step is (v) if k is odd and (iv) if k is even.
To show that shorter proofs are not possible notice that only the bodies of the
rules (iii)-(v) can be satisfied in D, and so only those rules can be used.
Define the distance between nodes of D as follows: the distance between A and A
is 0, and the distance between A and B is less or equal k if and only if there exists a
node C such that either E(B;C) or E(C; B) and the distance between A and B is at
most 1. The distance between a pair of nodes A; B in D and a pair of nodes C; D
in D is defined as a sum of distances from A to C and from B to D. Now, notice that
if a fact of the form I(A; B) is derived in k steps from the fact I(C; D), and if only the
rules (iii)-(v) were used in the proof then the distance between A; B and C; D is not
greater than k. Finally, observe that the distance in D between C
1.7. Program Boundedness vs. Uniform Boundedness. Discussion.
The notions of uniform and program boundedness formalize, on the technical level,
the informal notion of boundedness. Uniform boundedness is what we need when the
program under consideration is a subprogram of a bigger one. Then, it can happen,
that the predicates that are supposed to be the output of the program are also a part
of the input. Program boundedness, on the other hand, corresponds to the view of an
entire DATALOG program as a definition of, possibly many, output predicates. This
is similar to the distinction between program and uniform equivalence of DATALOG
programs (see [21]), where again the first notion applies to entire programs, while
the second one to subprograms equivalence. It is known that program equivalence is
undecidable, while uniform equivalence is decidable [5],[21],[22]]. We can observe that
also for the case of boundedness, the uniform version, for given syntactical restrictions
is a priori "more decidable": Suppose that program boundedness is decidable
for some syntactical restrictions, and that the restrictions allow arbitrary number of
initializations. Then uniform boundedness is also decidable for the restrictions. To
see that consider a program P over a signature with IDB symbols I i , where 1
Let Q be the program P with its signature enriched with new EDB symbols
and for each i the arity of E i is equal to the arity of I i , and with k new
rules:
I
It is easy to see that Q is program bounded if and only if P is uniformly bounded.
So we reduced the decision problem of the uniform boundedness of P to the problem
of program boundedness of the program Q.
The survey of results gives an evidence that it is more difficult to prove undecidability
of uniform boundedness than undecidability of program boundedness, the
argument above shows that there are reasons for that. But on the other hand we
do not know any example of syntactical restrictions for which uniform boundedness
ACHILLES, TURTLE, AND SMALL DATALOG PROGRAMS 7
would be decidable and program boundedness not. The most likely candidate for the
example is the class of linear binary programs. Program boundedness is undecidable
for the class and decidability of uniform boundedness is open.
2. Achilles-Turtle machine.
2.1. The tool: Conway functions.
Definition 2.0. A Conway function is a function g with natural arguments
defined by a system of equations:
a
a
a
where a i ,q i are natural numbers, q i ji (that means, that i=q i is a natural number),
and q i jp for each i and (a i for each i.
For a Conway function g and given natural number N let C(g; N) be a statement
asserting that there exists a natural number i such that g i
See Section 2.3 to find a nice example giving an idea of what a Conway function.
is. Proof of the following theorem can be found in [4], in [14] or in [7].
Theorem 2.1. (Conway)
The problem:
given a Conway function g, and a natural number N . Does C(g; N) hold ?
is undecidable.
Our main tool is the following refined version of Theorem 2.1 :
Theorem 2.2.
1. There exists a computable sequence fgn g of Conway functions such
2)g is not recursive (is r.e. complete).
(ii) For each g n , if a i and q i are coefficients from the definition of the function
2.
(iii) For each g n , if there are such
2. There exists a universal Conway function g, such that
(i) the set fN : C(g; 2 N )g is not recursive (is r.e. complete).
(ii) if a i and q i are coefficients from the definition of the function g then
(a 2.
(iii) For each N , if there are such
Proof.
1. It is known that the problem: given a finite automaton with 2 counters, does
the computation starting from fixed beginning state, and from empty counters
reach some fixed final state is undecidable, even if we require that the final
state can be reached only if the both counters are empty (read the remark in
the end of this section to see what precisely we mean by a finite automaton
with 2 counters).
For a given automaton A of this kind we will construct a Conway function g A
which satisfies conditions (ii) and (iii) of the theorem and such that C(gA ; 2)
holds if and only if the computation of A reaches the final state. First we
8 J. MARCINKOWSKI
need to modify A a little bit: we construct an automaton B which terminates
if and only if A terminates and which satisfies the following conditions:
(iv) the second counter of B can be increased only if the first counter is
decreased in the same computation step,
(v) the states of B are numbered. If any of the counters is increased in the
computation step when the state s i is being changed into s j then
The details of the construction of B are left as an easy exercise. The hint is
that all what must be done is adding a couple of new states. For example if
there is an instruction of A which increases the second counter and keeps the
first unchanged, it must be substituted by two instructions: first of them only
increases the first counter and changes the state into a new one, the second
increases the second counter and decreases the first.
Now, suppose that the states of the automaton B are s
is the beginning state. Let be an increasing sequence of primes
such that (such a sequence can be found for each k, since
the density of primes around n is c= log n). We encode the configuration of
B:
state is s i , the first counter contains the number n and the second counter
contains m
as the natural number 2 It is easy to notice that, if x and y are codes
of two subsequent configurations of B then y=x depends only of the remainder
x (mod p) where and that y=x - 2. So we can define the
required Conway function. To define the first step properly we put a
and which is the code of the beginning configuration.
We put also a p f
reach 1 in the iteration of the function
next to the one when the code of the final configuration is reached.
2. We use the well known fact that there exists a particular finite automaton with
counters for which the problem does the computation starting from a fixed
beginning state s b , given first counter, and empty second counter, reach the
configuration of some fixed final state s f and empty counters is undecidable.
Then the proof is similar as of (i). To start the computation properly we
put a all such even i that p j ji does not hold for any
for each N it holds that g(2 N . The last is the code
of the beginning configuration.
Remark: Automata with counters. Our notion of a finite automaton with
two counters is similar to the one in Kozen's book [13], with the difference that
we assume that the automaton has no input tape. Since two counter automata (with
read-only input tape) are as powerful as Turing machines the problem whether a given
automaton of this kind will terminate for given input is undecidable. But, for each
input separately, we can hide the input in the finite control of the automaton (in fact
the input tape is a finite object for each input). So also the problem whether a given
automaton without input tape will terminate, when started from a fixed beginning
state and from empty counters is undecidable. Now we show, as it is needed in
the proof of the second claim of Theorem 2.2, that there exists a particular finite
automaton with 2 counters for which the problem does the computation starting from
a fixed beginning state s b , given first counter, and empty second counter, reach the
configuration of some fixed final state s f and empty counters is undecidable. First
observe that there exists an automaton as required but with 3 counters: it is universal
ACHILLES, TURTLE, AND SMALL DATALOG PROGRAMS 9
Turing machine with the contents of the part of the tape left of the head remembered
on one counter, right on the head on the second counter, and with auxiliary third
counter needed for operating the first two. Then use the standard techniques to
encode the three counters as two. See [13] for details.
Convention 2.3. Since now we consider only Conway functions g n where
existence was proved in Theorem 2.2.i. In particular we assume that the
claims (ii) and (iii) from Theorem 2.2.i. hold.
2.2. Achilles-Turtle machine. For a given Conway function g and given input
N we will construct an Achilles-Turtle machine, which will compute the subsequent
iterations of g(N ).
It is a variant of a multi-head Turing Machine, with read-only tape. Each cell
of the tape is coloured with one of the colours K 0 , K (where p is as in
the definition of the function g). If the cell X is coloured with the colour K i (we
denote the fact as K i (X)) and the cell S(X) (S is a successor function on the tape)
is coloured with K j then p). The colour K 0 will be called white and
called red.
There are 3 heads. The first of them symbolizes Achilles. The second is the
Turtle. The third is called Guide. The transition rules will be designed in such a way,
that the heads will never go left. Achilles and Guide will move right in each step of
the computation. Achilles will try to catch the Turtle.
The configuration of the machine is described by the positions of the heads. In the
beginning of the computation Achilles is in some arbitrary white cell X on the tape.
The Turtle and Guide are both is in the cell S N (X). So the beginning configuration is:
CON(X;S N (X); S N (X)).
Where again S is the successor function on the tape.
The idea is, that the computation can reach a configuration of a form
or Achilles can be exactly k cells behind the Turtle, if g i
In each computation step the heads of the machine move according to one of the
following transition rules
(R
since a i =q
Rules (R i ) are run rules and rules (J i ) are jump rules. Configurations of the form
are called special.
See Section 2.3 for a nice example of Achilles-Turtle machine. The following easy
lemma gives an intuition of how the computation of the machine proceeds:
Lemma 2.4.
(i) If, in some configuration of the machine, the Turtle is in the cell X and the
Guide is in Y then
(ii) If, in some configuration of the machine, the Turtle is in the cell X and
Achilles is in some S k (X) where none of the jump rules will be used later
J. MARCINKOWSKI
in the computation.
(iii) Suppose that in some configuration of the machine Achilles is in some cell
X, Turtle is in some S t (X) and the Guide is in S r (X). If one of the jump rules can
be used later, then
(iv) A special configuration can only be a result of a transition done according to
one of the jump rules.
(v) Achilles is always in a white cell.
(vi) If in some configuration of the machine the Guide is in the cell X then in
the next configuration he will be in S r (X) for some
Proof. (i) The claim is true for the beginning configuration and for every configuration
being a result of a use of a jump rule. The run rules move the Guide right
and keep the Turtle in his cell.
(ii) If Achilles is right of the Turtle then the jump rule can not be used. But the
run rules only move Achilles right.
(iii) follows from (i) and (ii)
(iv) By (i) the Guide can never be left of the Turtle. The run rules move him
right, so after the execution of a run rule he is right of the Turtle.
(v) He starts in a white cell and moves p cells right in each step.
(vi) That is since hold for every i (see
Convention 2.3).
Now we will formulate and prove some lemmas about the equivalence between the
behaviour of the Conway function and the result of the computation of the Achilles-
Turtle machine. Our goal is:
Lemma 2.5. The following conditions are equivalent:
(i) C(g; N) holds.
(ii) The Achilles-Turtle machine can reach a configuration of a form
(iii) The Achilles-Turtle machine can reach a configuration of a form
CON(A;S(A);S(A)).
(iv) The Achilles-Turtle machine can reach a configuration of a form
Lemma 2.6. Suppose in some special configuration of the machine Achilles is in
some cell A, and Turtle and Guide are in some
(i) after k steps the configuration will be CON(S \Gammai (T ); T
(ii) there are exactly two configurations that can be reached after k
and
Proof. (i) Each of the k steps will be done according to the rule R i . So, after k
steps Achilles will be in the cell S kp will be in S kp(a i =q i
the Turtle in T .
(ii) By (i), CON(S \Gammai (T reached after k steps. Then the rule
R i may be used once again, what leads to CON(S p\Gammai (T
the rule J i may be used. That leads to
Lemma 2.7. Suppose that in some special configuration of the machine Achilles is
in the cell A, and the Turtle and the Guide are in some T = G = S^m(A), where
m = kp + i with 0 <= i <= p - 1. Then the following two conditions are equivalent:
(i) it is possible to reach a special configuration CON(X, S^l(X), S^l(X)) as the
next special configuration;
(ii) l = g(m).
Proof. By Lemma 2.6 (ii) the configuration after k + 1 steps will be either
CON(S^{p-i}(T), T, S^{(k+1)p(a_i/q_i)}(G))
or
CON(S^{p-i}(T), S^{d_i + kp(a_i/q_i)}(G), S^{d_i + kp(a_i/q_i)}(G)).
In the first case Achilles will already be right of the Turtle and, by Lemma 2.4
(ii),(iv), a special configuration will not be reached any more.
To prove the equivalence we show that the configuration reached in the second
case is just of the form
CON(X, S^{g(m)}(X), S^{g(m)}(X)).
In fact, writing X = S^{p-i}(T) and using G = T, the distance from X to the new
position of the Turtle is
d_i + kp(a_i/q_i) - (p - i) = i(a_i/q_i) + kp(a_i/q_i) = m(a_i/q_i) = g(m).
Lemma 2.8. The following two conditions are equivalent:
(i) The Achilles-Turtle machine can reach a configuration of the form
CON(X, S^l(X), S^l(X)).
(ii) There exists a natural number j such that g^j(N) = l.
Proof. The (i)=>(ii) implication is proved by induction on the number of special
configurations reached during the computation.
The (ii)=>(i) implication is proved by induction on j.
In both cases Lemma 2.7 is used for the induction step.
Proof of Lemma 2.5:
(i), (ii) and (iii) are equivalent by Lemma 2.8 and Convention 2.3 (claim (iii) of
Theorem 2.2.i). Clearly, (ii) implies (iv). Also (iv) implies (ii): if a configuration
CON(A, T, G) is reached after some number of steps, and K_1(T) holds, then consider
the configuration after the last step of the computation which was done according
to a jump rule (the last step when the Turtle was moved). This configuration is
special, and in it the Turtle occupies the same red cell T.
2.3. Achilles-Turtle Machine. An Example. In order to give the reader an
idea of how the machine works we are going to provide a nice example of a Conway
function (or rather Conway-like function) and of the Achilles-Turtle machine built
for this function. The function g that we start from will be the well-known Collatz
function: take a natural number; if it is even then divide it by two, if it is odd then
multiply it by three and add one. The problem whether the iterations of the procedure
finally give the result 1, regardless of the natural number that we start from, is open.
More formally, in the spirit of Definition 2.0, we can define the function g as
g(n) = n/2  if n is even,    g(n) = 3n + 1  if n is odd,
and the open problem is then whether C(g, N) holds for every natural number N
(a small simulation of this predicate is sketched below).
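For readers who want to experiment, here is a minimal Python sketch of the predicate
C(g, N) for this function; the names g and C and the iteration cap are our own, and
the cap is only a safeguard, since whether the iteration always terminates in 1 is
exactly the open question.

def g(n: int) -> int:
    """The Collatz function: n/2 for even n, 3n + 1 for odd n."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def C(N: int, max_steps: int = 10_000) -> bool:
    """Does some iterate g^j(N) equal 1 (within max_steps iterations)?"""
    n = N
    for _ in range(max_steps):
        if n == 1:
            return True
        n = g(n)
    return False  # undetermined within the bound; the general problem is open

# For N = 5 the iterates are 5, 16, 8, 4, 2, 1, so C(5) holds.
assert C(5)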
We not only multiply the number but also add 1, so this is not really a Conway
function in the sense of Definition 2.0; we nevertheless find this example interesting,
and we can, and will, construct our Achilles-Turtle machine for this function:
The rules of the Example Achilles-Turtle machine:
initial configuration: CON(X, S^N(X), S^N(X));
transition rules:
run rules:
CON(S^2(A), T, S(G)) :- CON(A, T, G), K_0(T)
CON(S^2(A), T, S^6(G)) :- CON(A, T, G), K_1(T)
jump rules:
CON(S^2(A), S^2(G), S^2(G)) :- CON(A, A, G), K_0(A)
CON(S^2(A), S^5(G), S^5(G)) :- CON(A, S(A), G), K_1(S(A))
final configuration: CON(X, S(X), S(X)).
The coefficients in the run rules and in the white jump rule are here calculated
according to the definitions of (R_i) and (J_i) from the beginning of Section 2.2.
The left-hand side of the red jump rule is not CON(S^2(A), S^4(G), S^4(G)), as it
would follow from the definition: this is the place where we add the 1 from the 3n + 1.
Now suppose, for concreteness of the example, that N is 5. Then the subsequent
iterates of g are: 5, 16, 8, 4, 2, 1. The beginning configuration of the machine will
then be CON(X, S^5(X), S^5(X)) for some white cell X, and the computation sequence
of the machine is:
CON(X, S^5(X), S^5(X))
CON(S^2(X), S^5(X), S^11(X)) (RR)
CON(S^4(X), S^5(X), S^17(X)) (RR*)
CON(S^6(X), S^22(X), S^22(X)) (RJ*)
CON(S^8(X), S^22(X), S^23(X)) (WR*)
CON(S^10(X), S^22(X), S^24(X)) (WR)
CON(S^12(X), S^22(X), S^25(X)) (WR)
CON(S^14(X), S^22(X), S^26(X)) (WR)
CON(S^16(X), S^22(X), S^27(X)) (WR)
CON(S^18(X), S^22(X), S^28(X)) (WR)
CON(S^20(X), S^22(X), S^29(X)) (WR)
CON(S^22(X), S^22(X), S^30(X)) (WR)
CON(S^24(X), S^32(X), S^32(X)) (WJ)
CON(S^26(X), S^32(X), S^33(X)) (WR)
CON(S^28(X), S^32(X), S^34(X)) (WR)
CON(S^30(X), S^32(X), S^35(X)) (WR)
CON(S^32(X), S^32(X), S^36(X)) (WR)
CON(S^34(X), S^38(X), S^38(X)) (WJ)
CON(S^36(X), S^38(X), S^39(X)) (WR)
CON(S^38(X), S^38(X), S^40(X)) (WR)
CON(S^40(X), S^42(X), S^42(X)) (WJ)
CON(S^42(X), S^42(X), S^43(X)) (WR)
CON(S^44(X), S^45(X), S^45(X)) (WJ)
where RR means that the red run rule was used to obtain the configuration, WR is
the white run rule, RJ the red jump rule, and WJ the white jump rule. The
configurations marked with * are depicted in Fig. 2.1; a small simulation that
reproduces this sequence is sketched below.
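The following Python sketch is our own reconstruction of the example machine (the
function names are ours): a configuration is the triple of absolute positions of
Achilles, the Turtle and the Guide, cells at even positions are white and cells at
odd positions are red, and where both a run and a jump are enabled we always jump,
which is the choice that tracks the iteration of g.

def step(A, T, G):
    """One transition of the example machine; returns (new_config, rule_name)."""
    if T % 2 == 1:                         # the Turtle's cell is red
        if T == A + 1:                     # red jump; +5 instead of +4 encodes 3n + 1
            return (A + 2, G + 5, G + 5), "RJ"
        return (A + 2, T, G + 6), "RR"     # red run: the Guide gains 2 * 3 = 6 cells
    if T == A:                             # white jump: d_0 = p = 2
        return (A + 2, G + 2, G + 2), "WJ"
    return (A + 2, T, G + 1), "WR"         # white run: the Guide gains 2 * (1/2) = 1 cell

def run_machine(N):
    conf = (0, N, N)                       # CON(X, S^N(X), S^N(X))
    trace = [(conf, "start")]
    while not (conf[1] == conf[0] + 1 == conf[2]):   # final CON(A, S(A), S(A))
        conf, rule = step(*conf)
        trace.append((conf, rule))
    return trace

for conf, rule in run_machine(5):
    print(rule, conf)        # ends in (44, 45, 45), i.e. CON(A, S(A), S(A))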
3. Ternary programs.
3.1. The ternary linear program P.
Theorem 3.1. For each Conway function g_n from Theorem 2.2.i there exists,
and can be efficiently constructed, an arity 3 linear DATALOG program P with one
IDB predicate which is uniformly bounded if and only if C(g_n, 2) holds.
The signature of the program contains one binary EDB symbol S, which is going
to serve as a kind of order for us, p monadic EDB symbols which will play the role
of the colours, and a ternary IDB symbol CON. The program P consists of:
Transition rules (for each 0 <= i <= p - 1): the run rule (R_i) and the jump rule
(J_i) of the Achilles-Turtle machine, written as DATALOG rules.
Flooding rule: CON(X, Y, Z) :- CON(S, T, R), K_1(T).
Initialization: CON(X, S^2(X), S^2(X)) :- K_0(X), K_1(S(X)), K_2(S^2(X)),
where K_m is understood as K_i if m = i (mod p). Since S is no longer a true successor
we must explain the meaning of the S^l symbols in the rules:
Notational Convention: an expression S^l(X) occurring in a rule abbreviates l fresh
variables X1, ..., Xl together with the goals S(X, X1), S(X1, X2), ..., S(X(l-1), Xl)
added to the body of the rule, with Xl written in the place of S^l(X). So, for
example, the rule
CON(S^2(X), S^4(Y), S(X)) :- CON(X, S(X), Y)
should be understood as:
CON(X2, Y4, X1) :-
CON(X, X1, Y), S(X, X1), S(X1, X2), S(Y, Y1), S(Y1, Y2), S(Y2, Y3), S(Y3, Y4).
Let us explain the meaning of the rules: the transition rules are the same as in
the Achilles-Turtle machine, with the exception that they check whether the cells
(nodes) that Achilles runs over are painted properly. The flooding rule proves
everything in one step if the Turtle is in a red node. The initialization allows the
computation to start in each (white) node, if there is a properly coloured piece of
tape near the node.
Lemma 3.2. If C(g_n, 2) does not hold, then for each c there exists a database D
and a tuple A, T, G of elements of D such that CON(A, T, G) can be proved in D with
P and the proof of CON(A, T, G) requires more than c steps.
[Fig. 2.1 (figure): subsequent configurations of the example Achilles-Turtle machine.]
Proof. D is just a long enough S-chain (see Definition 3.4 below) with empty IDB
relation. First we prove that the flooding rule cannot be used in such a database,
provided that C(g_n, 2) does not hold. Suppose it can be used. That means that
CON(A, T, G) can be proved for some red T. If we follow the proof of CON(A, T, G)
in D we will notice that it yields a legal computation of the Achilles-Turtle machine
and that the first fact in the proof is the beginning configuration of the machine.
That is a contradiction by Lemma 2.5. Now, take the first element Z of the order.
By the initialization we have CON(Z, S^2(Z), S^2(Z)). Using the run rule R_2 2c
times we get a fact, CON(S^{2pc}(Z), S^2(Z), G) for the appropriate node G, whose
shortest proof has 2c + 1 > c steps.
Now we are going to prove:
Lemma 3.3. If C(g_n, 2) holds then there exists c such that, in any database D,
for every tuple A, B, C of elements of D, if CON(A, B, C) can be proved in D with
the program P then there exists a proof of CON(A, B, C) shorter than c steps.
Proof. Suppose C(g_n, 2) holds. That means that, if we start the computation
of the Achilles-Turtle machine in a configuration CON(X, S^2(X), S^2(X)), then it is
possible to reach a final configuration CON(A, S(A), S(A)).
Notice that during the computation none of the heads will move left of X or
right of S(A). Let K be the distance between X and S(A) and let K' be the number of
steps of the computation necessary to reach the final configuration. Clearly pK' + 1 =
K. We are going to prove that c = K' + 2 is the proper constant.
We will need some definitions:
Definition 3.4. An S-chain of elements of a database D is a set X_0, X_1, ..., X_k
such that S(X_m, X_{m+1}) and K_m(X_m) hold for each 0 <= m < k (here and below,
K_j is understood as K_i if j = i (mod p)). A decreasing S-chain
of elements of the database D is a set X_0, X_1, ..., X_k such that S(X_{m+1}, X_m)
and K_{-m}(X_m) hold. In both cases we say that the chain begins in X_0.
Definition 3.5. Let k be a natural number. We say that a node W of a database
D is not k-founded if there exists a decreasing S-chain which begins in W and consists
of more than k elements. W is k-founded if such a chain does not exist.
Definition 3.6. Let k be a natural number. We say that a database D is not
k-founded if there exists an S-chain consisting of more than k elements.
Obviously, D is k-founded if such a chain does not exist. (A small sketch of these
notions appears below.)
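The following Python sketch is our own illustration of the foundedness definitions;
the encoding of a database as a set of S-facts and the function names are ours, and
the colour conditions K_m(X_m) along the chains are omitted for brevity.

def longest_decreasing_chain(S, W):
    """Number of S-edges on the longest walk taken backwards from W, i.e. the
    longest X_0 = W, X_1, X_2, ... with S(X_{m+1}, X_m) for each m."""
    preds = {}
    for (x, y) in S:                 # the fact S(x, y): y is the S-successor of x
        preds.setdefault(y, []).append(x)
    best, stack = 0, [(W, 0, frozenset([W]))]
    while stack:
        node, depth, path = stack.pop()
        best = max(best, depth)
        for p in preds.get(node, []):
            if p not in path:        # guard against S-cycles in the database
                stack.append((p, depth + 1, path | {p}))
    return best

def is_k_founded(S, W, k):
    """W is k-founded iff no decreasing S-chain from W has more than k elements."""
    return longest_decreasing_chain(S, W) + 1 <= k

S = {(0, 1), (1, 2), (2, 3)}         # a chain 0 -> 1 -> 2 -> 3
assert not is_k_founded(S, 3, 3) and is_k_founded(S, 3, 4)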
Now we consider 2 cases:
Lemma 3.7. If D is not K-founded then for each tuple A, B, C of elements of
D the fact CON(A, B, C) can be proved and the proof requires no more than K' + 2
steps.
Proof. Take X such that there exists an S-chain of length K beginning in X.
Thanks to the initialization rule, CON(X, S^2(X), S^2(X)) is provable in D and has a
proof of length 1. Now we can pretend that the chain from X to S^K(X) is a tape and
start a computation of the Achilles-Turtle machine. Since the transition rules of the
machine are rules of the program P, each step of the computation can be encoded by
one step of the proof. So there exists an element T of the chain such that S(T) is red,
and CON(T, S(T), S(T)) can be proved after K' + 1 steps. One more step (using
flooding) is needed to prove CON of every tuple after that.
Lemma 3.8. Let D be a K-founded database.
(i) Suppose (CON(A_i, B_i, C_i)) for i = 0, ..., m is a P-proof in D. If the flooding
rule is not used in the proof then m <= K'.
(ii) Let (CON(A_i, B_i, C_i)) for i = 0, ..., m be a P-proof in D, and suppose it is
the shortest possible proof of CON(A_m, B_m, C_m). Then the flooding rule is used at
most once, for the last step of the proof.
(iii) If CON(A, B, C) can be P-proved in D for some tuple A, B, C, then some
proof requires no more than K' + 2 steps.
Proof. (i) The set of the elements A_i (0 <= i <= m) is a subsequence of an S-chain
of length pm; since D is K-founded, pm + 1 <= K = pK' + 1, so m <= K'.
(ii) Suppose that the step from CON(A_j, B_j, C_j) to CON(A_{j+1}, B_{j+1}, C_{j+1}),
with j + 1 < m, is done according to the flooding rule. Then the sequence
CON(A_0, B_0, C_0), ..., CON(A_j, B_j, C_j), CON(A_m, B_m, C_m)
is a shorter proof of CON(A_m, B_m, C_m), since the flooding rule proves every fact
from CON(A_j, B_j, C_j) in one step.
(iii) It follows from (i) and (ii).
This ends the proof of Lemma 3.3 and of Theorem 3.1.
Theorem 3.9. Uniform boundedness of ternary linear DATALOG programs is
undecidable.
Proof. This follows from Theorems 2.2 and 3.1.
3.2. The arity 5 single recursive rule program R.
Theorem 3.10. For each Conway function g_n from Theorem 2.2.i there exists,
and can be efficiently constructed, an arity 5 DATALOG program R consisting of one
quasi-linear recursive rule and of some initializations, which is uniformly bounded iff
C(g_n, 2) holds.
As in the previous subsection, the signature of the program contains one binary
EDB symbol S, which is going to serve as a kind of order for us, p monadic EDB
symbols which will play the role of the colours and a ternary IDB symbol CON. There
is also an additional IDB symbol STEER of arity 5. The program R consists of:
The recursive rule:
CON(S^p(A), T', G') :- CON(A, T, G), STEER(A, T, G, T', G'),
together with the colour goals for the p cells that Achilles runs over.
Initialization "transition" rules (for each 0 <= i <= p - 1):
STEER(A, T, G, T, S^{p(a_i/q_i)}(G)) :- K_i(T);
STEER(A, S^i(A), G, S^{d_i}(G), S^{d_i}(G)) :- K_i(S^i(A)).
The initialization "flooding" rule: STEER(X, T, Y, R, S) :- K_1(T).
The initialization: CON(A, S^2(A), S^2(A)) :- K_0(A), K_1(S(A)), K_2(S^2(A)).
Let us explain what is going on here. The triples (Achilles, Turtle, Guide) are
nodes of a graph defined on D^3 by means of the order and the colouring. Each proof,
either from program P or from R, is a path in this graph. The graph is not given by
the EDB relations, but it can be defined from them by a DATALOG program without
recursion. If we want to define the edges beginning in a node (A, T, G) "on-line",
when the computation reaches the node (as in P), then we must use more than one
rule, but the rules are linear: they read nothing more than the information about
the EDB situation around. If we define the graph in advance (by initializations),
then one recursive rule is enough: we have a "graph accessibility" program in this
case. But the rule is only "quasi-linear": it makes use of the additional IDB (but
not recursive) predicate STEER. If I were the reader I would ask a question here:
why is the STEER predicate not of arity 6? Why do we not want to hide the
rule for Achilles in the initializations and have a simpler recursive clause? In fact,
some additional problems arise here, since we do not have a flooding rule for Achilles.
We were forced to design the recursive rule in this way for uniformity
reasons. It is crucial that Achilles goes down the chains. Thanks to that we can say:
no long chains, no long proofs (Lemmas 3.8 and 3.12, case 1). We could write the
initializations of the hypothetical 6-ary STEER in such a way that Achilles would
move only down the chains, while running according to the STEER facts proved by
the initializations. But we would have no control over what is given as STEER at
the beginning.
Lemma 3.11. If C(g_n, 2) does not hold, then for each c there exists a database
D and a tuple A, B, C of elements of D such that CON(A, B, C) can be proved in D
with the program R and the proof of CON(A, B, C) requires more than c steps.
Proof. As for Lemma 3.2.
Lemma 3.12. If C(g_n, 2) holds then there exists a c such that, in every database
D, for every tuple A, B, C of elements of D, if CON(A, B, C) can be proved in D with
R then there exists a proof of CON(A, B, C) shorter than c steps.
Definition 3.13. If A, B, C is a tuple of elements of the database D then we
say that CON(A, B, C) is a fact about A.
Proof of Lemma 3.12: Suppose C(g_n, 2) holds. Then there is a computation of
the Achilles-Turtle machine starting in some configuration CON(X, S^2(X), S^2(X))
and reaching CON(Y, S(Y), S(Y)). The computation requires space kp (that is, the
distance from X to Y is kp), for some natural k. We will consider 2 cases:
Case 1: A is (k + 1)p-founded. Then every proof of a fact about A is no longer
than k + 1 steps. That is because of Achilles' part in the recursive rule. This is
analogous to Lemma 3.8 (i).
Case 2: A is not (k + 1)p-founded. Then
take V such that A = S^{(k+1)p}(V) and there is a chain of length (k + 1)p from
V to A. Because of the initialization rule, CON(V, S^2(V), S^2(V)) is provable in D and
has a proof of length 1. Now we can pretend that the chain from V to S^{kp+1}(V) is a
tape and let Achilles and the Turtle play their game there. Among other rules possibly
given by the predicate STEER they also have the "standard" Achilles-Turtle machine
rules. So, after the appropriate number of moves the configuration
CON(S^{kp}(V), S^{kp+1}(V), S^{kp+1}(V))
will be reached and we will be allowed to use the flooding. Every fact of the form
CON(A, B, C) will be proved in one step. So no new facts about A can be proved
later. Of course nothing new about the IDB predicate STEER can be proved after
the first step.
This ends the proof of Lemma 3.12 and of Theorem 3.10.
3.3. The ternary single recursive rule program Q.
Theorem 3.14. For each Conway function g_n from Theorem 2.2.i there exists,
and can be efficiently constructed, an arity 3 DATALOG program Q consisting of one
quasi-linear recursive rule and of some initializations, which is uniformly bounded iff
C(g_n, 2) holds.
As in the previous subsection, the signature of the program contains one
binary EDB symbol S, which is going to serve as a kind of order for us, p monadic
EDB symbols which will play the role of the colours and a ternary IDB symbol CON.
The graph which was defined by an arity 5 relation in the previous section will be
defined here as an intersection of four graphs defined by ternary constraints. So, we
will have four additional ternary IDB symbols, E_{G,T,T'}, E_{A,T,T'}, E_{T,G,G'}
and E_{T,T',G'}, in the language of the program. The rules of the program Q are:
The recursive rule:
CON(S^p(A), T', G') :- CON(A, T, G), E_{G,T,T'}(G, T, T'), E_{A,T,T'}(A, T, T'),
E_{T,G,G'}(T, G, G'), E_{T,T',G'}(T, T', G'),
together with the colour goals for the p cells that Achilles runs over.
The initialization "constraints" rules: for each i (0 <= i <= p - 1) there are rules
proving the E_{G,T,T'}, E_{A,T,T'}, E_{T,G,G'} and E_{T,T',G'} facts that encode
the run rule R_i and the jump rule J_i (with the jump targets of the form S^{d_i}(G)),
together with rules that, for a red Turtle (K_1(T)), allow everything, in the spirit of
the flooding rule of Section 3.1.
The initialization: CON(A, S^2(A), S^2(A)) :- K_0(A), K_1(S(A)), K_2(S^2(A)).
To prove the correctness of the construction we shall argue that the ternary relations
really define the same graph as the relation STEER of the last section. It is easy
to notice that if STEER(A, T, G, T', G') can be proved by one of the initializations of
R, then the corresponding facts E_{G,T,T'}(G, T, T'), E_{A,T,T'}(A, T, T'),
E_{T,G,G'}(T, G, G') and E_{T,T',G'}(T, T', G')
can also be proved. For the opposite inclusion, suppose that T is not red. We first
consider the relation E_{G,T,T'}. Since the Guide "does not see" how far from each
other Achilles and the Turtle are, the constraint allows the Turtle to stay in the same
place or to jump according to the proper jump rule. It is the relation E_{A,T,T'}
that decides whether the Turtle will be allowed to jump. If Achilles is far away, then
the Turtle can only wait. If Achilles is about to catch the Turtle, then the Turtle is
allowed to jump (see [1]) anywhere. But, because of the relation E_{G,T,T'}, this
"anywhere" can only be S^{d_i}(G), for the proper i. In this way already the first two
relations force the Turtle to behave as he should.
The relation E_{T,G,G'} forces the Guide to move ahead. It allows the Guide to
execute his jump rule, but only if the Turtle jumps together with him (this prevents
the danger that the Guide jumps while the Turtle runs). Whatever the Turtle is doing,
the Guide is allowed to use his proper run rule. There is a danger here that the Turtle
will jump while the Guide only runs, which is not allowed by the Achilles-Turtle
machine rules. That is prevented by the relation E_{T,T',G'}: if the Turtle remains
in the same place then the Guide is allowed to go anywhere. But if he moves, then
the Guide must join him.
If T is red then the constraints allow the Guide and the Turtle to go anywhere.
Theorem 3.15. Uniform boundedness of single recursive rule ternary DATALOG
programs is undecidable.
Proof. It follows from Theorems 2.2.i and 3.14.
Remark: We could use Theorem 2.2.ii instead of 2.2.i and get more "universal"
DATALOG programs. For example, Theorem 3.1 would then have the form:
There exists an arity 3 linear DATALOG program P with one IDB predicate and
a computable sequence {p(N)} of initialization rules, such that the program P with
p(N) added is uniformly bounded if and only if C(g, N) holds, where g is the universal
Conway function from Theorem 2.2.ii.
In fact this is the form from [15]. It cannot, however, be done in Section 4, so
we decided not to present the results in their most general versions but to preserve
the notational uniformity instead.
4. Single rule programs.
4.1. Constants: notational proviso. In Sections 4.2 - 4.5 we are going to
encode the computation of the Achilles-Turtle machine into a very small number of
rules (one or two). We cannot afford a separate predicate for each colour any more.
Instead, we are going to have one binary predicate COL, and understand COL(C, A)
as "the colour of A is C". So, instead of predicates we need constants to name the
colours.
There are no constants in DATALOG. But in fact, if we want to use some (say,
k) constants, we can simply increase the arity of all the IDB symbols by k and write
a sequence of k variables as the k last arguments in each occurrence of
each IDB predicate in the program. This is one of the reasons why the programs in
the following sections are of high arity.
Example: The rule P(X) :- E(X, a), P(b), with constants a, b, can be written as
P(X, A, B) :- E(X, A), P(B, A, B), where P(X, A, B) means "P(X) if the constants
are understood as A, B". (A small sketch of this transformation appears at the end
of this subsection.)
Thanks to that we can suppose that there are constants in the language. We
will use the following constants: jump, run, joker, and constants for the colours:
colour_i, 0 <= i <= p - 1 (colour_0 will also be called white, colour_1 will be red,
and colour_2 will be pink).
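The following Python sketch is our own illustration of the arity-increasing
transformation just described; the representation of rules as nested tuples and all
names in it are ours.

CONSTANTS = ["a", "b"]          # k = 2 hypothetical constants

def lift_atom(atom, idb_preds, const_vars):
    pred, args = atom
    # every occurrence of a constant becomes the corresponding variable
    new_args = tuple(const_vars[CONSTANTS.index(t)] if t in CONSTANTS else t
                     for t in args)
    if pred in idb_preds:       # only IDB predicates grow by k arguments
        new_args = new_args + tuple(const_vars)
    return (pred, new_args)

def lift_rule(head, body, idb_preds):
    const_vars = tuple("C_" + c for c in CONSTANTS)
    return (lift_atom(head, idb_preds, const_vars),
            [lift_atom(atom, idb_preds, const_vars) for atom in body])

# P(X) :- E(X, a), P(b)  becomes  P(X, C_a, C_b) :- E(X, C_a), P(C_b, C_a, C_b)
print(lift_rule(("P", ("X",)), [("E", ("X", "a")), ("P", ("b",))], {"P"}))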
4.2. The Achilles-Turtle game. In this section we will modify the description
of the Achilles-Turtle machine and define an equivalent version of it with only one
transition rule. To make our notation compatible with the database notation we are
going to forget about the tape, and use a kind of infinite graph instead. To distinguish
it from the machine, this version will be called the Achilles-Turtle game.
The transition rules of the Achilles-Turtle machine are indexed with three
parameters: the first of them is either jump or run, the remaining two are the colours
of the Turtle's cell and the Guide's cell before the transition. The idea of what is going
on below is to treat the parameters as arguments occurring in the goals of the body of
the single rule. While solving the first four goals of the body we will substitute the
proper parameters for the variables COND, TCOLOR and GCOLOR. Then the parameters
will be used to compute the positions of Achilles, the Turtle and the Guide after the
execution of the rule.
The following definition introduces the predicates that will be used in the
construction of the single rule. We do not expect that the reader will understand the
definition until he reads the proof of Lemma 4.2.
Definition 4.1. For a given Conway function g, as in Theorem 2.2.i, the
Achilles-Turtle graph is the relational structure G with exactly the nodes and the
relations listed below:
the nodes of G are: colour_0, ..., colour_{p-1}, jump, run, joker and an infinite
sequence of nodes c_0, c_1, c_2, ...
(i) S(c_i, c_{i+1}) holds for each node c_i.
(ii) COL(colour_j, c_i) holds for each i with i = j (mod p).
(iii) COL(red, joker) holds.
(iv)
holds for each i.
holds for each i j 0 (mod p) and for each 0 -
(v)
holds for each i.
holds for each i.
holds.
(vi)
holds for all
holds for all
(vii)
holds
but
holds for all k.
holds for all k.
(viii)
holds
holds
(ix)
holds for each i and for each colour.
holds for each i.
holds for each i and for each colour.
TRULE(run, red, joker, joker) holds.
holds for each i.
holds for each i.
holds.
The set of the nodes c_i of the graph can be in a natural way understood
as a tape of the Achilles-Turtle machine. Notice that all the facts from the definition
are "local" in the sense that if some elements c_i and c_j are directly connected by a
fact then |i - j| is small and there is no white node between c_i and c_j.
Now we are going to use the relations of the Achilles-Turtle graph to encode all
the rules of the machine in only one transition rule:
CONF(A', T', G') :- CONF(A, T, G), BODY,
where BODY is a conjunction of goals: among them COL(TCOLOR, T) and
COL(GCOLOR, G), which read the colours of the Turtle's and the Guide's cells; the
"GRULE" goals (used in lines (v)-(vii) of the proof below), which compute the new
position of the Guide; the two "TRULE" goals, which compute the new position of the
Turtle; and the goals moving Achilles p cells ahead.
Lemma 4.2. Suppose T is not red. Then CONF(A', T', G') can be computed from
CONF(A, T, G) in a single computation step of the Achilles-Turtle machine if and
only if CONF(A', T', G') can be reached from CONF(A, T, G) in a single step of the
Achilles-Turtle graph game.
Proof. It is clear that the move of Achilles is performed in the same way by the
machine and by the game (he simply moves p cells ahead). We should check that this
is also the case with the Turtle and the Guide.
The "only if" direction is easier: if the transition of the machine has been done
according to a run rule then substitute run for the variable COND, else substitute
jump. For the variable TCOLOR substitute the colour of the Turtle's cell. For the
variable GCOLOR substitute the colour of the Guide's cell. Now, if there is no
white node between G and G' then substitute G' for X1 and X2. If there is exactly one
such white node then substitute this node for X1 and G' for X2. If there are two such
nodes then substitute the first of them for X1 and the second for X2 (notice that by
condition (ii) from Theorem 2.2.i there are at most two white nodes between G
and G'). The "GRULE" goals of the BODY are satisfied in this
way. If COND is run then substitute T for X3 (notice that in this case T' = T); if
COND is jump then substitute joker for X3. The two "TRULE" goals of the BODY
are satisfied in this way. To satisfy the last two goals, substitute joker for X4 if the
COND is run, and if the COND is jump then substitute G' for X4.
For the "if" direction, first notice that if the conjunction of the jump goals
can be satisfied, then the distance between Achilles and the Turtle, before
the transition, really is smaller than p, and so a jump is allowed according to the
Achilles-Turtle machine rules. The rules for the Guide (defined by claims (vi), (vii)
and (viii) of Definition 4.1) assure that if COND, TCOLOR and GCOLOR are chosen in
a fair way then the Guide of the game moves in the same way as the one of the
machine. The Turtle now: if the COND is run then he should not move, and in fact
the TRULE goals can then be satisfied only if T' = T.
If the COND is jump then the Turtle should go to the same node where the Guide
does. The first of the two TRULE goals can be satisfied for every choice of T' if joker
is substituted for X3, but the second
is satisfied only if X4 = T' = G'.
The following lemma is much easier to prove than Lemma 4.2 and is left as an
exercise for the reader.
Lemma 4.3. If c_t is red then each configuration of the form CONF(A', T', G')
can be reached from a configuration CONF(A, c_t, G) in a single step of the graph game.
Hint: Put COND equal to run and TCOLOR equal to red.
4.3. Single linear rule program with initialization. In this section we will
use the Achilles-Turtle game to construct a DATALOG program with one linear
recursive rule and one initialization, which is uniformly bounded if and only if C(g, 2)
holds.
The EDB predicates of the program will be the same as the ones used in Definition
4.1.
There will be one ternary IDB predicate CONFIG. The variables A, T, G in a fact of
the form CONFIG(A, T, G) should be, as usual, understood as Achilles, the Turtle
and the Guide.
Consider a database D. We suppose that colour_0, ..., colour_{p-1}, jump,
run and joker occur in D. A Motorway will be a sequence of elements of the database
which can be used for playing the Achilles-Turtle game:
Definition 4.4. Suppose M_0, M_1, ..., M_k are elements of D. We say that the
sequence is a Motorway if the EDB facts of D, restricted to M_0, M_1, ..., M_k and
the constants, contain a copy (under the mapping c_i -> M_i) of the corresponding
fragment of G, where G is the database from Definition 4.1.
So, for example, we require that S(M_3, M_4) hold in D, if M_0, M_1, ..., M_k
is a Motorway.
Now we are ready to write
the linear recursive rule of the program T:
CONFIG(A', T', G') :- CONFIG(A, T, G), Motorway(. . .), BODY,
where Motorway(. . .) is the conjunction of facts needed for the sequence of the
variables occurring in the rule to be a Motorway. Notice that the last lines of the rule
are exactly the literals of BODY in which T' or G' occurs, with run substituted for
COND and red substituted for TCOLOR.
And
the initialization of the program T:
CONFIG(X, S^2(X), S^2(X)) :- Motorway(. . .).
Thanks to the last lines of the recursive rule we can be sure that if the fact
CONFIG(A', T', G') can be proved in one step from CONFIG(A, T, G), then it can
also be proved in one step from each fact of the form CONFIG(A, T1, G1) in which T1
is red (see the proof of Lemma 4.8, case 1).
Our next goal is to show that if a long proof using the recursive rule is possible
in some database D then there is a long Motorway in D.
Lemma 4.5. Consider a sequence
M_0, M_1, ..., M_{x+y-1}
of elements of a database. If, for each 0 <= k <= x - 1, the subsequence
M_k, M_{k+1}, ..., M_{k+y}
is a Motorway, then also the whole sequence is a Motorway.
Proof. The conditions (i)-(x) of Definition 4.1 are "local": if some elements c_i
and c_j occur in a condition then |i - j| is small and there is no white node between
c_i and c_j.
Lemma 4.6. Suppose that
CONFIG(A_0, T_0, G_0),
CONFIG(A_1, T_1, G_1),
. . .
CONFIG(A_m, T_m, G_m)
is a sequence of facts, such that for each 0 <= j <= m - 1,
CONFIG(A_{j+1}, T_{j+1}, G_{j+1})
can be derived from:
CONFIG(A_j, T_j, G_j)
by a single use of the recursive rule.
Then there exists a sequence of elements of the database, containing all the nodes
A_0, ..., A_m, which is a Motorway.
Proof. That follows from Lemma 4.5 and from the construction of the recursive
rule.
Definition 4.7. If A, T, G is a tuple of nodes of the database D then we say that
CONFIG(A, T, G) is a fact about A.
Now we are ready to prove that if C(g, 2) holds then the program T is uniformly
bounded.
Lemma 4.8. If C(g, 2) holds then there exists a constant C such that, in every
database D, if the program T proves some fact, then the fact can be proved in no more
than C derivation steps.
Proof. If C(g, 2) holds then the Achilles-Turtle game, started in a configuration
CONF(Y, S^2(Y), S^2(Y)), can reach the configuration CONF(X, S(X), S(X)) for some
white X; S(X) is red then. Suppose that K moves are needed to reach this configuration
and that the nodes of the machine graph left of Y or right of S(X) are not visited
during the computation. We are going to prove that K + 2 is a good candidate to be C.
Consider an element A of D. There are 2 possibilities:
case 1:
There is a Motorway of length (K + 1)p in the database, such that A is its last node.
Suppose
A_{-(K+1)p}, ..., A_{-1}, A_0 = A
is the Motorway. By the initialization rule
CONFIG(A_{-(K+1)p}, A_{-(K+1)p+2}, A_{-(K+1)p+2})
can be proved in one derivation step. During the next K derivation steps one can
simulate K steps of the Achilles-Turtle game, and so after K + 1 steps we derive
CONFIG(A_{-p}, A_{-p+1}, A_{-p+1}).
Since A_{-p+1} is red, one can argue as in the proof of Lemma 4.3 to see that in
the next derivation step we can prove CONFIG(A, T', G')
for each T' and each G'.
And, because of the last lines of the recursive rule, no other facts can be
proved about A.
case 2:
No such Motorway.
Then, by Lemma 4.6, every proof has less than K + 3 steps.
We still need to show that if C(g, 2) does not hold then the program is unbounded.
Lemma 4.9. If C(g, 2) does not hold then for each constant C there exists a
database D, with empty input IDB relation, and a fact CONFIG(A, T, G)
which can be proved in the database but whose proof requires more than C steps.
Proof. It is enough to show that arbitrarily long proofs are needed in the Achilles-
Turtle game graph (we suppose that there are no IDB input facts). So start with
CONFIG(X, S^2(X), S^2(X))
(that can be done by the initialization) and use 2C times the run rule for the Turtle
in a pink-coloured cell. Notice that the position of the Turtle will remain unchanged
during the computation and the final configuration will be
CONFIG(S^{2Cp}(X), S^2(X), G) for the appropriate node G.
The shortest proof of this fact really requires 2C + 1 steps (including the
initialization).
To summarize:
Theorem 4.10. Uniform boundedness and program boundedness are undecidable
for programs consisting of one linear rule and one initialization.
Proof. The problem whether, for a given Conway function g, C(g, 2) holds is
undecidable, even for functions satisfying conditions (ii) and (iii) of Theorem 2.2.i.
For each such function we can construct a DATALOG program, with one linear rule
and one initialization, which is not program bounded if C(g, 2) does not hold (Lemma
4.9) and which is uniformly bounded if C(g, 2) holds (Lemma 4.8).
4.4. Single rule program: how one cannot construct it. Now we would
like to modify the construction of the previous section and get a single rule program.
The only problem is how to initialize the predicate CONFIG. The simplest solution
would be not to initialize it at all, but just to check, in the same way as we use the
"Motorway" goal in the body of the rule, that the needed EDB facts hold.
So the rule would look like this:
CONFIG(A', T', G') :-
CONFIG(A, T, G), Motorway(. . .), BODY.
In this way, one could think, we secure that it is possible to start the computation
of the Achilles-Turtle machine in each place where any derivation step is made. But it
is not enough to follow in the footsteps of the proof of Lemma 4.8. We require there
that the initial configuration is not only provable, which is really secured by the
would-be rule above, but that it is provable in a bounded number of steps (in fact, just
one step, in the previous section). We have to think of a new trick to assure that.
4.5. Single rule program: how to construct it.
The single recursive rule S is:
CONFIG(run, Z, A', T', G') :-
CONFIG(W, run, A, T, G),            (main premise)
CONFIG(jump, run, A, A_2, A_2),     (initialization premise)
CONFIG(V, jump, joker, joker, joker),   (jump=run premise)
Motorway(. . .),
where the constant joker occurs p times in the "predicate" Motorway. We have
added two additional arguments to the recursive predicate here. The rule asserts that
if something can be derived then its first argument is run. So, as long as the constants
run and jump are not interpreted in the same way in the database, a fact having jump
as its first argument cannot be proved by the program, and if it is provable at all then
it is provable in 0 steps (it is given as a part of the input).
Also the initialization premise CONFIG(jump, run, A, A_2, A_2)
does not require a deep proof: if any proof at all is possible then the fact is given
in the input.
The "jump=run premise" is normally useless as the main or as the initialization
premise of a derivation step: it has jump as the second argument. But if run
and jump are equal in the database, then we use it to show that if anything can be
proved about A then everything can be proved about it in one step. That is why
S(joker, joker) must hold and why joker is red.
We use the methods of Section 4.3 to prove that the constructed single rule
program is uniformly bounded if and only if C(g, 2) holds:
Lemma 4.11. If C(g, 2) holds then there exists a constant C such that, in every
database D, if some fact can be proved with the rule S, then it has a proof no deeper
than C.
Proof. Let K be as in the proof of Lemma 4.8. We need to consider two cases:
case 1:
jump and run are different elements of the database.
Suppose that for some A there is a fact about it which has a proof of length at
least K + 2. Then we follow the proof of Lemma 4.8: we use the fact that the needed
initialization has been given in the input, so it has a short (0-step) proof, and show
that everything can be proved about A in no more than K + 2 derivation steps.
case 2:
jump and run are interpreted as the same element of the database.
Suppose that anything can be proved about some A. Then
CONFIG(jump, jump, joker, joker, joker)
holds in the database. Since S(joker, joker) holds and since
joker is red, every fact of the form
CONFIG(run, Z, A', T', G')
can then be proved in one derivation step, since with all the positions interpreted as
joker the Motorway goals are satisfied.
Lemma 4.12. If C(g, 2) does not hold then for each constant C there exists a
database D, and a fact
CONFIG(run, run, A, T, G),
which can be proved, with the rule S, in the database D, but the proof requires
more than C steps.
Proof. We proceed in a similar way as in the proof of Lemma 4.9, with the
following differences:
(i) we no longer assume that the IDB input is empty. Instead, we require that
the CONFIG facts serving as the initialization premise and as the jump=run premise
are present in the input, including, for each x <= C, the instances needed along the
chain;
(ii) we require that for each x <= C the corresponding S and colour facts hold
along the chain.
This ends the proof of Lemma 4.12 and of
Theorem 4.13. Uniform boundedness of single rule DATALOG programs is
undecidable.
--R
Boundedness is Undecidable for Datalog Programs with a Single Recursive Rule
DATALOG versus First Order Logic
Parallel evaluation of recursive rule queries
Decidable Optimization Problems for Database Logic Programs
Halting Problem of One Binary Recursive Horn Clause is Undecidable
Undecidable Optimization Problems for Database Logic Programs
Undecidable Boundedness Problems for Datalog Programs
A Time Bound on the Materialization of Some Recursively Defined Views
Logic Programming and Parallel Complexity
Elements of Relational Database Theory in Handbook of Theoretical Computer Science
The 3 Frenchmen Method Proves Undecidability of the Uniform Boundedness for Single Recursive Rule Ternary DATALOG Programs
Undecidability of Uniform Boundedness for Single Rule Datalog Programs
On Recursive Axioms in Relational Databases
Data Independent Recursion in Deductive Databases
A Decidable Class of Bounded Recursions
On Computing Restricted Projections of Representative Instances
Some Positive Results for Boundedness of Multiple Recursive Rules
Decidability and undecidability results for boundedness of linear recursive queries
--TR
--CTR
Foto Afrati , Stavros Cosmadakis , Eugnie Foustoucos, Datalog programs and their persistency numbers, ACM Transactions on Computational Logic (TOCL), v.6 n.3, p.481-518, July 2005
Evgeny Dantsin , Thomas Eiter , Georg Gottlob , Andrei Voronkov, Complexity and expressive power of logic programming, ACM Computing Surveys (CSUR), v.33 n.3, p.374-425, September 2001 | query optimization;decidability;DATALOG |
330373 | Equivalence of Measures of Complexity Classes. | The resource-bounded measures of complexity classes are shown to be robust with respect to certain changes in the underlying probability measure. Specifically, for any real number $\delta > 0$, any uniformly polynomial-time computable sequence \ldots )$ of real numbers (biases) $\beta_i \in [\delta, 1-\delta]$, and for any complexity class ${\bf \cal C}$ (such as P, NP, BPP, P/Poly, PH, PSPACE, etc.) that is closed under positive, polynomial-time, truth-table reductions with queries of at most linear length, it is shown that the following two conditions are equivalent. (1) ${\bf \cal C}$ has p-measure 0 (respectively, measure 0 in E, measure 0 in E2) relative to the coin-toss probability measure given by the sequence ${\mv{\beta}}$.(2) ${\bf \cal C}$ has p-measure 0 (respectively, measure 0 in E, measure 0 in E 2) relative to the uniform probability measure. The proof introduces three techniques that may be useful in other contexts, namely, (i) the transformation of an efficient martingale for one probability measure into an efficient martingale for a "nearby" probability measure; (ii) the construction of a positive bias reduction, a truth-table reduction that encodes a positive, efficient, approximate simulation of one bias sequence by another; and (iii) the use of such a reduction to dilate an efficient martingale for the simulated probability measure into an efficient martingale for the simulating probability measure. | Introduction
In the 1990's, the measure-theoretic study of complexity classes has yielded
a growing body of new, quantitative insights into various much-studied aspects
of computational complexity. Benefits of this study to date include
improved bounds on the densities of hard languages [15]; newly discovered
relationships among circuit-size complexity, pseudorandom generators, and
natural proofs [21]; strong new hypotheses that may have sufficient explanatory
power (in terms of provable, plausible consequences) to help unify our
present plethora of unsolved fundamental problems [18, 15, 7, 16, 11]; and
a new generalization of the completeness phenomenon that dramatically
enlarges the set of computational problems that are provably strongly intractable
[14, 6, 2, 7, 8, 1]. See [13] for a survey of these and related developments
Intuitively, suppose that a language A, a subset of {0,1}*, is chosen according to
a random experiment in which an independent toss of a fair coin is used
to decide whether each string is in A. Then classical Lebesgue measure
theory (described in [5, 20], for example) identifies certain measure 0 sets
X of languages, for which the probability that A is in X in this experiment
is 0. Effective measure theory, which says what it means for a set of decidable
languages to have measure 0 as a subset of the set of all such languages,
has been investigated by Freidzon [4], Mehlhorn [19], and others.
The resource-bounded measure theory introduced by Lutz [12] is a powerful
generalization of Lebesgue measure. Special cases of resource-bounded
measure include classical Lebesgue measure; a strengthened version of effective
measure; and most importantly, measures in E = DTIME(2^linear),
E_2 = DTIME(2^polynomial), and other complexity classes. The small subsets
of such a complexity class are then the measure 0 sets; the large subsets are
the measure 1 sets (complements of measure 0 sets). We say that almost
every language in a complexity class C has a given property if the set of
languages in C that exhibit the property has measure 1 in C.
All work to date on the measure-theoretic structure of complexity classes
has employed the resource-bounded measure that is described briefly and
intuitively above. This resource-bounded measure is based on the uniform
probability measure, corresponding to the fact that the coin tosses are fair
and independent in the above-described random experiment. The uniform
probability measure has been a natural and fruitful starting point for the
investigation of resource-bounded measure (just as it was for the investigation
of classical measure), but there are good reasons to also investigate
resource bounded measures that are based on other probability measures.
For example, the study of such alternative resource-bounded measures may
be expected to have the following benefits.
(i) The study will enable us to determine which results of resource-bounded
measure are particular to the uniform probability measure and which
are not. This, in turn, will provide some criteria for identifying contexts
in which the uniform probability measure is, or is not, the natural
choice.
(ii) The study is likely to help us understand how the complexity of the
underlying probability measure interacts with other complexity pa-
rameters, especially in such areas as algorithmic information theory,
average case complexity, cryptography, and computational learning,
where the variety of probability measures already plays a major role.
(iii) The study will provide new tools for proving results concerning resource-bounded
measure based on the uniform probability measure.
The present paper initiates the study of resource-bounded measures that
are based on nonuniform probability measures.
Let C be the set of all languages A contained in {0,1}*. (The set C is often
called Cantor space.) Given a probability measure nu on C (a term defined
precisely below), section 3 of this paper describes the basic ideas of resource-bounded
nu-measure, generalizing definitions and results from [12, 14, 13] to
nu in a natural way. In particular, section 3 specifies what it means for a
set X contained in C to have nu-p-measure 0 (written nu_p(X) = 0), nu-p-measure 1,
nu-measure 0 in E (written nu(X | E) = 0), nu-measure 1 in E, nu-measure 0 in E_2,
or nu-measure 1 in E_2.
Most of the results in the present paper concern a restricted (but broad)
class of probability measures on C, namely, coin-toss probability measures
that are given by P-computable, strongly positive sequences of biases. These
probability measures are described intuitively in the following paragraphs
(and precisely in section 3).
Given a sequence ~beta = (beta_0, beta_1, beta_2, ...) of real numbers (biases),
with each beta_i in [0, 1],
the coin-toss probability measure (also called the product probability measure)
given by ~beta is the probability measure mu^{~beta} on C that corresponds to the
random experiment in which a language A in C is chosen probabilistically
as follows. For each string s_i in the standard enumeration s_0, s_1, s_2, ... of
{0,1}*, we toss a special coin, whose probability is beta_i of coming up heads,
in which case s_i is in A, and 1 - beta_i of coming up tails, in which case s_i is not in A.
The coin tosses are independent of one another.
In the special case where ~beta = (beta, beta, beta, ...), i.e., the biases in the sequence
~beta are all beta, we write mu^beta for mu^{~beta}. In particular, mu^{1/2} is the uniform probability
measure, which, in the literature of resource-bounded measure, is denoted
simply by mu.
A sequence ~beta = (beta_0, beta_1, beta_2, ...) of biases is strongly positive if there is
a real number delta > 0 such that each beta_i in [delta, 1 - delta]. The sequence ~beta is P-
computable (and we call it a P-sequence of biases) if there is a polynomial-time
algorithm that, on input (s_i, 0^r), computes a rational approximation
of beta_i to within 2^{-r}.
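As a small illustration (the concrete sequence beta_i = 1/2 + 1/(i + 4) is our own
example, not one used in this paper), here is a Python sketch of a strongly positive
P-sequence of biases together with its approximator.

from fractions import Fraction

def beta_hat(i: int, r: int) -> Fraction:
    """Rational approximation of beta_i = 1/2 + 1/(i + 4) to within 2**-r
    (here it happens to be exact, since each beta_i is rational; a genuine
    approximation step would be needed for, say, irrational biases)."""
    return Fraction(1, 2) + Fraction(1, i + 4)

# all biases lie in [1/4, 3/4], witnessing strong positivity with delta = 1/4
assert all(Fraction(1, 4) <= beta_hat(i, 10) <= Fraction(3, 4) for i in range(100))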
In section 4, we prove the Summable Equivalence Theorem, which implies
that, if ~alpha and ~beta are strongly positive P-sequences of biases that are
"close" to one another, in the sense that the sum over i of |alpha_i - beta_i| is finite,
then for every set X contained in C,
mu^{~alpha}_p(X) = 0 if and only if mu^{~beta}_p(X) = 0.
That is, the p-measure based on ~alpha and the p-measure based on ~beta are in
absolute agreement as to which sets of languages are small.
In general, if ~alpha and ~beta are not in some sense close to one another, then
the p-measures based on ~alpha and ~beta need not agree in the above manner. For
example, if alpha, beta in [0, 1], alpha != beta, and X is the set of all languages A
in which the strings have limiting frequency alpha, i.e.,
lim_{n -> infinity} |A intersect {s_0, ..., s_{n-1}}| / n = alpha,
then a routine extension of the Weak Stochasticity Theorem of [15] shows
that mu^alpha(X | E) = 1 while mu^beta(X | E) = 0.
measure do not involve arbitrary sets X ' C, but rather are concerned
with the measures of complexity classes and other closely related classes of
languages. Many such classes of interest, including P, NP, co-NP, R, BPP,
AM, P/Poly, PH, PSPACE, etc., are closed under positive, polynomial-time
truth-table reductions (- P
pos\Gammatt -reductions), and their intersections with E
are closed under - P
pos\Gammatt -reductions with linear bounds on the lengths of the
queries
pos\Gammatt -reductions).
The main theorem of this paper is the Bias Equivalence Theorem. This
result, proven in section 8, says that, for every class C of languages that is
closed under <=^{P,lin}_{pos-tt}-reductions, the p-measure of C is somewhat robust with
respect to changes in the underlying probability measure. Specifically, if ~alpha
and ~beta are strongly positive P-sequences of biases and C is a class of languages
that is closed under <=^{P,lin}_{pos-tt}-reductions, then the Bias Equivalence Theorem
says that
mu^{~alpha}_p(C) = 0 if and only if mu^{~beta}_p(C) = 0.
To put the matter differently, for every strongly positive P-sequence ~beta of
biases and every class C that is closed under <=^{P,lin}_{pos-tt}-reductions,
mu^{~beta}_p(C) = 0 if and only if mu_p(C) = 0.
This result implies that most applications of resource-bounded measure to
date can be immediately generalized from the uniform probability measure
(in which they were developed) to arbitrary coin-toss probability measures
given by strongly positive P-sequences of biases.
The Bias Equivalence Theorem also offers the following new technique
for proving resource-bounded measure results. If C is a class that is closed
under <=^{P,lin}_{pos-tt}-reductions, then in order to prove that mu_p(C) = 0, it suffices
to prove that mu^{~beta}_p(C) = 0 for some conveniently chosen strongly positive P-
sequence ~beta of biases. (The Bias Equivalence Theorem has already been put
to this use in the forthcoming paper [17].)
The plausibility and consequences of the hypothesis mu_p(NP) != 0 are
subjects of recent and ongoing research [18, 15, 7, 16, 11, 3, 17]. The Bias
Equivalence Theorem immediately implies that the following three statements
are equivalent.
(H1) mu_p(NP) != 0.
(H2) For every strongly positive P-sequence ~beta of biases, mu^{~beta}_p(NP) != 0.
(H3) There exists a strongly positive P-sequence ~beta of biases such that
mu^{~beta}_p(NP) != 0.
The statements (H2) and (H3) are thus new, equivalent formulations of the
hypothesis (H1).
The proof of the Bias Equivalence Theorem uses three main tools. The
first is the Summable Equivalence Theorem, which we have already dis-
cussed. The second is the Martingale Dilation Theorem, which is proven
in section 6. This result concerns martingales (defined in section 3), which
are the betting algorithms on which resource-bounded measure is based.
Roughly speaking, the Martingale Dilation Theorem gives a method of transforming
("dilating") a martingale for one coin-toss probability measure into
a martingale for another, perhaps very different, coin-toss probability mea-
sure, provided that the former measure is obtained from the latter via an
"orderly" truth-table reduction.
The third tool used in the proof of our main theorem is the Positive Bias
Reduction Theorem, which is presented in section 7. If ~alpha and ~beta are two
strongly positive sequences of biases that are exactly P-computable (with
no approximation), then the positive bias reduction of ~alpha to ~beta is a truth-table
reduction (in fact, an orderly <=^{P,lin}_{pos-tt}-reduction) that uses the sequence ~beta
to "approximately simulate" the sequence ~alpha. It is especially crucial for
our main result that this reduction is efficient and positive. (The circuits
constructed by the truth-table reduction contain AND gates and OR gates,
but no NOT gates.)
The Summable Equivalence Theorem, the Martingale Dilation Theorem,
and the Positive Bias Reduction Theorem are only developed and used here
as tools to prove our main result. Nevertheless, these three results are of
independent interest, and are likely to be useful in future investigations.
Preliminaries
In this paper, N denotes the set of all nonnegative integers, Zdenotes the
set of all integers, Z + denotes the set of all positive integers, Q denotes the
set of all rational numbers, and R denotes the set of all real numbers.
We write f0; 1g for the set of all (finite, binary) strings, and we write
jxj for the length of a string x. The empty string, -, is the unique string of
length 0. The standard enumeration of f0; 1g is the sequence s
first by length and then lexicographically. For
precedes y in this standard enumeration.
For denotes the set of all strings of length n, and f0; 1g -n
denotes the set of all strings of length at most n.
If x is a string or an (infinite, binary) sequence, and if
then x[i::j] is the string consisting of the i th through j th bits of x. In
is the i-bit prefix of x. We write x[i] for x[i::i], the i th
bit of x. (Note that the leftmost bit of x is x[0], the 0 th bit of x.)
If w is a string and x is a string or sequence, then we write w v x if w
is a prefix of x, i.e., if there is a string or sequence y such that
The Boolean value of a condition OE is
In this paper we use both the binary logarithm log and the
natural logarithm
Many of the functions in this paper are real-valued functions on discrete
domains. These typically have the form
f : N^k x {0,1}* -> R,   (2.1)
where k in N. (When k = 0, we interpret this to mean that f : {0,1}* -> R.)
Such a function f is defined to be p-computable if there is a function
f^ : N x N^k x {0,1}* -> Q   (2.2)
with the following two properties.
(i) For all r in N, ~k in N^k and w in {0,1}*, |f^(r, ~k, w) - f(~k, w)| <= 2^{-r}.
(ii) There is an algorithm that, on input (r, ~k, w), computes the
value f^(r, ~k, w) in time polynomial in r + |~k| + |w|.
Similarly, f is defined to be p_2-computable if there is a function f^ as in (2.2)
that satisfies condition (i) above and the following condition.
(ii') There is an algorithm that, on input (r, ~k, w), computes the
value f^(r, ~k, w) in (r + |~k| + |w|)^{(log(r + |~k| + |w|))^{O(1)}}
time.
In this paper, functions of the form (2.1) always have the form
f : {0,1}* -> R
or the form
f : N x {0,1}* -> R.
If such a function is p-computable or p_2-computable, then we assume without
loss of generality that the approximating function f^ of (2.2) actually has
the form
f^ : N x {0,1}* -> Q
or the form
f^ : N x N x {0,1}* -> Q,
respectively.
3 Resource-Bounded nu-Measure
In this section, we develop basic elements of resource-bounded measure based
on an arbitrary (Borel) probability measure nu. The ideas here generalize the
corresponding ideas of "ordinary" resource-bounded measure (based on the
uniform probability measure mu) in a straightforward and natural way, so
our presentation is relatively brief. The reader is referred to [12, 13] for
additional discussion.
We work in the Cantor space C, consisting of all languages A contained in {0,1}*.
We identify each language A with its characteristic sequence, which is the
infinite binary sequence chi_A defined by
chi_A[n] = [[s_n in A]]
for each n in N. Relying on this identification, we also consider C to be the
set of all infinite binary sequences.
For each string w in {0,1}*, the cylinder generated by w is the set
C_w = {A in C | w is a prefix of chi_A}.
Note that C_lambda = C.
We first review the well-known notion of a (Borel) probability measure
on C.
Definition. A probability measure on C is a function
nu : {0,1}* -> [0, 1]
such that nu(lambda) = 1 and, for all w in {0,1}*, nu(w) = nu(w0) + nu(w1).
Intuitively, nu(w) is the probability that A is in C_w when we "choose a
language A in C according to the probability measure nu." We sometimes
write nu(C_w) for nu(w).
Examples.
1. The uniform probability measure mu is defined by
mu(w) = 2^{-|w|}
for all w in {0,1}*.
2. A sequence of biases is a sequence ~beta = (beta_0, beta_1, beta_2, ...), where
each beta_i is in [0, 1]. Given a sequence of biases ~beta, the ~beta-coin-toss probability
measure (also called the ~beta-product probability measure) is the probability
measure mu^{~beta} defined by
mu^{~beta}(w) = product over 0 <= i < |w| of ((1 - beta_i)(1 - w[i]) + beta_i w[i])
for all w in {0,1}*.
3. If beta is in [0, 1], then we write mu^beta for mu^{~beta}, where
~beta = (beta, beta, beta, ...). In this case, we
have the simpler formula
mu^beta(w) = (1 - beta)^{#(0,w)} beta^{#(1,w)},
where #(b, w) denotes the number of b's in w. Note that mu^{1/2} = mu.
Intuitively, mu^{~beta}(w) is the probability that w is a prefix of chi_A when the
language A contained in {0,1}* is chosen probabilistically according to the following
random experiment. For each string s_i in the standard enumeration s_0, s_1, s_2, ...
of {0,1}*, we (independently of all other strings) toss a special coin, whose
probability is beta_i of coming up heads, in which case s_i is in A, and 1 - beta_i of
coming up tails, in which case s_i is not in A.
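The following Python sketch (our own illustration) computes mu^{~beta}(w) directly
from this description and checks the identity mu(w) = mu(w0) + mu(w1) from the
definition of a probability measure; Fractions keep the arithmetic exact.

from fractions import Fraction

def coin_toss_measure(w: str, beta):
    """mu^{~beta}(w): product over i < |w| of beta_i (if w[i] = 1)
    or 1 - beta_i (if w[i] = 0)."""
    prob = Fraction(1)
    for i, bit in enumerate(w):
        b = Fraction(beta(i))
        prob *= b if bit == "1" else 1 - b
    return prob

uniform = lambda i: Fraction(1, 2)
assert coin_toss_measure("0110", uniform) == Fraction(1, 16)   # mu(w) = 2^-|w|

w, beta = "01", (lambda i: Fraction(1, 3))
assert coin_toss_measure(w, beta) == (coin_toss_measure(w + "0", beta)
                                      + coin_toss_measure(w + "1", beta))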
Definition. A probability measure nu on C is positive if, for all w in {0,1}*,
nu(w) > 0.
Definition. If nu is a positive probability measure and u, v in {0,1}* with v a
prefix of u, then the conditional nu-measure of u given v is
nu(u | v) = nu(u)/nu(v).
Note that nu(u | v) is the conditional probability that A is in C_u, given that
A is in C_v, when A in C is chosen according to the probability measure nu.
Most of this paper concerns the following special type of probability
measure.
Definition. A probability measure nu on C is strongly positive if (nu is positive
and) there is a constant delta > 0 such that, for all w in {0,1}* and b in {0,1},
nu(wb | w) >= delta.
Definition. A sequence of biases ~beta = (beta_0, beta_1, beta_2, ...) is
strongly positive if
there is a constant delta > 0 such that each beta_i is in [delta, 1 - delta].
If ~beta is a sequence of biases, then the following two observations are clear.
1. mu^{~beta} is positive if and only if beta_i is in (0, 1) for every i in N.
2. If mu^{~beta} is positive, then for each w in {0,1}*,
mu^{~beta}(w1 | w) = beta_{|w|}
and
mu^{~beta}(w0 | w) = 1 - beta_{|w|}.
It follows immediately from these two things that the probability measure
mu^{~beta} is strongly positive if and only if the sequence of biases ~beta is strongly
positive.
In this paper, we are primarily interested in strongly positive probability
measures nu that are p-computable in the sense defined in section 2.
We next review the well-known notion of a martingale over a probability
measure nu. Computable martingales were used by Schnorr [23, 24, 25, 26]
in his investigations of randomness, and have more recently been used by
Lutz [12] in the development of resource-bounded measure.
Definition. Let nu be a probability measure on C. Then a nu-martingale is a
function d : {0,1}* -> [0, infinity) such that, for all w in {0,1}*,
d(w) nu(w) = d(w0) nu(w0) + d(w1) nu(w1).   (3.1)
If ~beta is a sequence of biases, then a mu^{~beta}-martingale is simply called a ~beta-
martingale. A mu-martingale is even more simply called a martingale. (That
is, when the probability measure is not specified, it is assumed to be the
uniform probability measure mu.)
Intuitively, a nu-martingale d is a "strategy for betting" on the successive
bits of (the characteristic sequence of) a language A in C. The real number
d(lambda) is regarded as the amount of money that the strategy starts with. The
real number d(w) is the amount of money that the strategy has after betting
on a prefix w of chi_A. The identity (3.1) ensures that the betting is "fair"
in the sense that, if A is chosen according to the probability measure nu,
then the expected amount of money is constant as the betting proceeds.
(See [23, 24, 25, 26, 27, 12, 14, 13] for further discussion.) Of course, the
"objective" of a strategy is to win a lot of money.
Definition. A nu-martingale d succeeds on a language A in C if
lim sup_{n -> infinity} d(chi_A[0..n-1]) = infinity.
The success set of a nu-martingale d is the set
S^infinity[d] = {A in C | d succeeds on A}.
We are especially interested in martingales that are computable within
some resource bound. (Recall that the p-computability and p_2-computability
of real-valued functions were defined in section 2.)
Definition. Let nu be a probability measure on C.
1. A p-nu-martingale is a nu-martingale that is p-computable.
2. A p_2-nu-martingale is a nu-martingale that is p_2-computable.
A p-mu^{~beta}-martingale is called a p-~beta-martingale, a p-mu-martingale is called
a p-martingale, and similarly for p_2.
We now come to the fundamental ideas of resource-bounded nu-measure.
Definition. Let nu be a probability measure on C, and let X be a subset of C.
1. X has nu-p-measure 0, and we write nu_p(X) = 0, if there is a p-nu-
martingale d such that X is contained in S^infinity[d].
2. X has nu-p-measure 1, and we write nu_p(X) = 1, if nu_p(X complement) = 0.
The conditions nu_{p_2}(X) = 0 and nu_{p_2}(X) = 1
are defined analogously.
Definition. Let nu be a probability measure on C, and let X be a subset of C.
1. X has nu-measure 0 in E, and we write nu(X | E) = 0, if nu_p(X intersect E) = 0.
2. X has nu-measure 1 in E, and we write nu(X | E) = 1, if nu(X complement | E) = 0.
3. X has nu-measure 0 in E_2, and we write nu(X | E_2) = 0, if nu_{p_2}(X intersect E_2) = 0.
4. X has nu-measure 1 in E_2, and we write nu(X | E_2) = 1, if nu(X complement | E_2) = 0.
Just as in the uniform case [12], the resource bounds p and p_2 of the
above definitions are only two possible values of a very general parameter.
Other choices of this parameter yield classical nu-measure [5], constructive
nu-measure (as used in algorithmic information theory [29, 27]), nu-measure in
the set REC, consisting of all decidable languages, nu-measure in ESPACE,
etc.
The rest of this section is devoted to a very brief presentation of some
of the fundamental theorems of resource-bounded nu-measure. One of the
main objectives of these results is to justify the intuition that a set with
nu-measure 0 in E contains only a "negligibly small" part of E (with respect
to nu). For the purpose of this paper, it suffices to present these results for
nu-p-measure and nu-measure in E. We note, however, that all these results hold
a fortiori for p_2-measure, rec-measure, classical nu-measure, nu-measure
in E_2, nu-measure in ESPACE, etc.
We first note that nu-measure 0 sets exhibit the set-theoretic behavior of
small sets.
Definition. Let nu be a probability measure on C, and let X, X_0, X_1, X_2, ...
be subsets of C.
1. X is a p-union of the nu-p-measure 0 sets X_0, X_1, X_2, ... if X is the union
of the X_k and there is a sequence d_0, d_1, d_2, ... of p-nu-martingales with the
following two properties.
(i) For each k in N, X_k is contained in S^infinity[d_k].
(ii) The function (k, w) -> d_k(w) is p-computable.
2. X is a p-union of the sets X_0, X_1, X_2, ... that have nu-measure 0 in E if X
is the union of the X_k and there is a sequence d_0, d_1, d_2, ... of p-nu-martingales
with the following two properties.
(i) For each k in N, X_k intersect E is contained in S^infinity[d_k].
(ii) The function (k, w) -> d_k(w) is p-computable.
Lemma 3.1. Let nu be a probability measure on C, and let I be either the
collection of all nu-p-measure 0 subsets of C, or the collection of all subsets
of C that have nu-measure 0 in E. Then I has the following three closure
properties.
1. If X is contained in Y and Y is in I, then X is in I.
2. If X is a finite union of elements of I, then X is in I.
3. If X is a p-union of elements of I, then X is in I.
Proof (sketch). Assume that X is a p-union of the nu-p-measure 0 sets
X_0, X_1, X_2, ..., and let d_0, d_1, d_2, ... be as in the definition of this condition.
Without loss of generality, assume that d_k(lambda) > 0 for each k in N. It suffices
to show that nu_p(X) = 0. (The remaining parts of the lemma are obvious or
follow directly from this.) Define
d(w) = sum over k of 2^{-k} d_k(w)/d_k(lambda).
It is easily checked that d is a p-nu-martingale and that X is contained in S^infinity[d],
so nu_p(X) = 0.
We next note that, if nu is strongly positive and p-computable, then every
singleton subset of E has nu-p-measure 0.
Lemma 3.2. If nu is a strongly positive, p-computable probability measure
on C, then for every A in E, nu_p({A}) = 0.
Proof (sketch). Assume the hypothesis, and fix delta > 0 such that, for all
w in {0,1}* and b in {0,1}, nu(wb | w) >= delta. Define d : {0,1}* -> [0, infinity) by
d(w) = [[w is a prefix of chi_A]] / nu(w).
It is easily checked that d is a p-nu-martingale and that, for all n in N,
d(chi_A[0..n-1]) >= (1 - delta)^{-n}, whence A is in S^infinity[d]. (A small sketch
of this martingale appears below.)
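The following Python sketch is our own illustration of this martingale, with a
constant bias 1/3 and a made-up prefix of chi_A; it instantiates
d(w) = [[w is a prefix of chi_A]]/nu(w) and verifies the fairness identity (3.1).

from fractions import Fraction

def nu(w: str) -> Fraction:
    """Coin-toss measure with every bias equal to 1/3."""
    prob = Fraction(1)
    for bit in w:
        prob *= Fraction(1, 3) if bit == "1" else Fraction(2, 3)
    return prob

def d(w: str, chi_A: str) -> Fraction:
    # bet the whole capital on the next bit of chi_A: capital is 1/nu(w) along chi_A
    return Fraction(1) / nu(w) if chi_A.startswith(w) else Fraction(0)

chi_A = "0101010101"      # the first bits of some characteristic sequence
for w in ["", "0", "01", "010"]:
    assert d(w, chi_A) * nu(w) == sum(d(w + b, chi_A) * nu(w + b) for b in "01")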
Note that, for A in E, the "point-mass" probability measure nu_A defined by
nu_A(w) = [[w is a prefix of chi_A]]
is p-computable, and {A} does not have nu_A-p-measure 0. Thus, the strong
positivity hypothesis cannot be removed from Lemma 3.2.
We now come to the most crucial issue in the development of resource-bounded
measure. If a set X has nu-measure 0 in E, then we want to say
that X contains only a "negligibly small" part of E. In particular, then, it
is critical that E itself not have nu-measure 0 in E. The following theorem
establishes this and more.
Theorem 3.3. Let nu be a probability measure on C, and let w be in {0,1}*.
If nu(w) > 0, then C_w does not have nu-measure 0 in E.
Proof (sketch). Assume the hypothesis, and let d be a p-nu-martingale. It
suffices to show that C_w intersect E is not contained in S^infinity[d].
Since d is p-computable, there is a function d^ : N x {0,1}* -> Q
with the following two properties.
(i) For all r in N and w in {0,1}*, |d^(r, w) - d(w)| <= 2^{-r}.
(ii) There is an algorithm that computes d^(r, w) in time polynomial in
r + |w|.
Define a language A recursively as follows. First, for 0 <= i < |w|, s_i is in A
if and only if w[i] = 1. Next, assume that the string x = chi_A[0..n-1] has been
defined for some n >= |w|, and put s_n in A if and only if d^(n+2, x1) <= d^(n+2, x0).
With the language A so defined, it is easy to check that A is in C_w intersect E. It
is also routine to check that, for all i >= |w|,
d(chi_A[0..i]) <= d(chi_A[0..i-1]) + 2^{-i}.
It follows inductively that, for all n >= |w|,
d(chi_A[0..n-1]) <= d(w) + 2.
This implies that
lim sup_{n -> infinity} d(chi_A[0..n-1]) < infinity,
whence A is not in S^infinity[d].
As in the case of the uniform probability measure [12], more quantitative
results on resource-bounded nu-measure can be obtained by considering the
unitary success set
S^1[d] = union of the cylinders C_w with d(w) >= 1
and the initial value d(lambda) of a p-nu-martingale d. For example, generalizing
the arguments in [12] in a straightforward manner, this approach yields a
Measure Conservation Theorem for nu-measure (a quantitative extension of
Theorem 3.3) and a uniform, resource-bounded extension of the classical
first Borel-Cantelli lemma. As these results are not used in the present
paper, we refrain from elaborating here.
4 Summable Equivalence
If two probability measures on C are sufficiently "close" to one another, then
the Summable Equivalence Theorem says that the two probability measures
are in absolute agreement as to which sets of languages have p-measure 0
and which do not. In this section, we define this notion of "close" and prove
this result.
Definition. Let ν be a positive probability measure on C, let A ⊆ {0,1}*,
and let i ∈ N. Then the i-th conditional ν-probability along A is
ν_A(i) = ν(A[0..i] | A[0..i−1]),
the conditional probability that the i-th bit of the characteristic sequence of A
has its actual value, given the preceding bits.
Definition. Two positive probability measures ν and ν′ on C are summably
equivalent, and we write ν ≈ ν′, if for every A ⊆ {0,1}*,
Σ_{i=0}^∞ |ν_A(i) − ν′_A(i)| < ∞.
It is clear that summable equivalence is an equivalence relation on the
collection of all positive probability measures on C. The following fact is
also easily verified.
Lemma 4.1. Let ν and ν′ be positive probability measures on C. If ν ≈ ν′,
then ν is strongly positive if and only if ν′ is strongly positive.
The following definition gives the most obvious way to transform a martingale
for one probability measure into a martingale for another.
Definition. Let ν and ν′ be probability measures on C with ν′ positive,
and let d be a ν-martingale. Then the canonical adjustment of d to ν′ is the
function d′ : {0,1}* → [0, ∞) defined by
d′(w) = d(w) ν(w)/ν′(w)
for all w ∈ {0,1}*.
It is trivial to check that the above function d′ is indeed a ν′-martingale.
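For coin-toss probability measures, the canonical adjustment is a one-line
computation. The sketch below (bias-vector representation and names ours) checks
the ν′-martingale condition for the adjustment of a trivial ν-martingale:

def cyl(biases, w):
    # nu(w) for a coin-toss measure: multiply the bias (or its complement)
    # for each bit of w.
    p = 1.0
    for i, c in enumerate(w):
        p *= biases[i] if c == "1" else 1.0 - biases[i]
    return p

def adjust(d, nu, nu2):
    # The canonical adjustment d'(w) = d(w) nu(w) / nu'(w).
    return lambda w: d(w) * cyl(nu, w) / cyl(nu2, w)

nu, nu2 = [0.5, 0.5, 0.5], [0.6, 0.4, 0.7]
d = lambda w: 1.0                      # the constant martingale is fair for any nu
d2 = adjust(d, nu, nu2)
w = "01"
lhs = d2(w) * cyl(nu2, w)
rhs = d2(w + "0") * cyl(nu2, w + "0") + d2(w + "1") * cyl(nu2, w + "1")
assert abs(lhs - rhs) < 1e-12          # d' is a nu'-martingale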
The following lemma shows that, for strongly positive probability measures,
summable equivalence is a sufficient condition for d 0 to succeed whenever d
succeeds.
Lemma 4.2. Let ν and ν′ be strongly positive probability measures on C,
let d be a ν-martingale, and let d′ be the canonical adjustment of d to ν′. If
ν ≈ ν′, then S^∞[d] ⊆ S^∞[d′].
Proof. Assume the hypothesis, and let A ∈ S^∞[d]. For each i ∈ N, let
ε_i = ν_A(i) − ν′_A(i).
The hypothesis ν ≈ ν′ says that Σ_{i=0}^∞ |ε_i| < ∞. In particular, this implies
that ε_i → 0 as i → ∞, so we have the Taylor approximation
ln(ν_A(i)/ν′_A(i)) = ln(1 + ε_i/ν′_A(i)) ∼ ε_i/ν′_A(i)
as i → ∞. Thus |ln(ν_A(i)/ν′_A(i))| is asymptotically equivalent to |ε_i|/ν′_A(i)
as i → ∞. Since ν′ is strongly positive, the denominators ν′_A(i) are bounded
away from 0, so it follows that Σ_{i=0}^∞ |ln(ν_A(i)/ν′_A(i))| < ∞. Thus, if we
write w_n = A[0..n−1], there is a positive constant c such that, for all n ∈ N,
ν(w_n)/ν′(w_n) = Π_{i=0}^{n−1} ν_A(i)/ν′_A(i) ≥ c,
whence
d′(w_n) = d(w_n) ν(w_n)/ν′(w_n) ≥ c d(w_n).
Since A ∈ S^∞[d], we thus have
lim sup_{n→∞} d′(w_n) = ∞,
so A ∈ S^∞[d′].
The following useful result is now easily established.
Theorem 4.3 (Summable Equivalence Theorem). If ν and ν′ are strongly
positive, p-computable probability measures on C such that ν ≈ ν′, then
for every set X ⊆ C,
ν_p(X) = 0 if and only if ν′_p(X) = 0.
Proof. Assume the hypothesis, and assume that ν_p(X) = 0. By symmetry,
it suffices to show that ν′_p(X) = 0. Since ν_p(X) = 0, there is a p-computable
ν-martingale d such that X ⊆ S^∞[d]. Let d′ be the canonical adjustment of
d to ν′. Since d, ν, and ν′ are all p-computable, it is easy to see that d′ is
p-computable. Lemma 4.2 tells us that X ⊆ S^∞[d] ⊆ S^∞[d′], whence ν′_p(X) = 0.
5 Exact Computation
It is sometimes useful or convenient to work with probability measures that
are rational-valued and efficiently computable in an exact sense, with no
approximation. This section presents two very easy results identifying situations
in which such probability measures are available.
Definition. A probability measure ν on C is exactly p-computable if
ν(w) ∈ Q for all w ∈ {0,1}*, and there is an algorithm that computes ν(w)
exactly in time polynomial in |w|.
Lemma 5.1. For every strongly positive, p-computable probability measure
ν on C, there is an exactly p-computable probability measure ν′ on C such
that ν ≈ ν′.
Proof. Let ν be a strongly positive, p-computable probability measure on C, and
fix a function ν̂ that testifies to the p-computability of ν. Since
ν is strongly positive, there is a constant c ∈ N such that, for all w ∈ {0,1}*
and b ∈ {0,1}, 2^{-c} ≤ ν(wb|w) ≤ 1 − 2^{-c}. Fix such a c and, for all w ∈ {0,1}*,
define ν′(w) inductively by ν′(λ) = 1 and by letting ν′(wb|w) be a rational
approximation of ν(wb|w), computed from ν̂ and rounded to precision 2^{-2|w|-c},
with the two conditional probabilities at w normalized to sum to 1.
It is clear that ν′ is an exactly p-computable probability measure on C.
Now let w ∈ {0,1}* and b ∈ {0,1}. A routine calculation using the strong
positivity bound 2^{-c} ≤ ν(wb|w) ≤ 1 − 2^{-c} shows that
|ν′(wb|w) − ν(wb|w)| = O(2^{-2|w|}).
For all A ⊆ {0,1}*, then, we have
Σ_{i=0}^∞ |ν′_A(i) − ν_A(i)| = O(Σ_{i=0}^∞ 2^{-2i}) < ∞,
so ν ≈ ν′.
For some purposes (including those of this paper), the requirement of
p-computability is too weak, because it allows ν(w) to be computed (or
approximated) in time polynomial in |w|, which is exponential in the length
of the last string decided by w when we regard w as a prefix of a language A.
In such situations, the following sort of requirement is often more useful. (We
only give the definitions for sequences of biases, i.e., coin-toss probability
measures, because this suffices for our purposes in this paper. It is clearly a
routine matter to generalize further.)
Definition. 1. A P-sequence of biases is a sequence ~β = (β_0, β_1, β_2, ...)
of biases for which there is a function β̂ : N × N → Q
with the following two properties.
(i) For all i, r ∈ N, |β̂(i, r) − β_i| ≤ 2^{-r}.
(ii) There is an algorithm that, for all i, r ∈ N, computes β̂(i, r) in
time polynomial in |s_i| + r (i.e., in time polynomial in log(i+1) +
r).
2. A P-exact sequence of biases is a sequence ~β = (β_0, β_1, β_2, ...) of
rational biases such that the function i ↦ β_i is computable
in time polynomial in |s_i|.
Definition. If ~α and ~β are sequences of biases, then ~α and ~β are summably
equivalent, and we write ~α ≈ ~β, if
Σ_{i=0}^∞ |α_i − β_i| < ∞.
It is clear that ~α ≈ ~β if and only if ν_~α ≈ ν_~β.
Lemma 5.2. For every P-sequence of biases ~β, there is a P-exact sequence
of biases ~β′ such that ~β ≈ ~β′.
Proof. Let ~β be a strongly positive P-sequence of biases, and let β̂
be a function that testifies to this fact. For each i ∈ N, let β′_i = β̂(i, 2|s_i|),
and let ~β′ = (β′_0, β′_1, β′_2, ...).
Then ~β′ is a P-exact sequence of biases, and
Σ_{i=0}^∞ |β′_i − β_i| ≤ Σ_{i=0}^∞ 2^{-2|s_i|} ≤ Σ_{i=0}^∞ 4/(i+1)² < ∞,
so ~β ≈ ~β′.
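The rounding in this proof is concrete enough to run. In the sketch below, the
interface of β̂ and the clamping constant c are our assumptions; each bias is
replaced by a dyadic rational, and the resulting error series converges:

from fractions import Fraction
from math import floor, log2

def s_len(i):
    return floor(log2(i + 1))          # |s_i| in the standard enumeration of {0,1}*

def exactify(beta_hat, n, c):
    # beta_hat(i, r) is assumed to approximate beta_i to within 2^{-r}.
    # Round to the grid 2^{-r} with r = 2|s_i| + c, clamping into
    # [2^{-c}, 1 - 2^{-c}] to preserve strong positivity; the errors are
    # dominated by sum_i 2^{-2|s_i|} <= sum_i 4/(i+1)^2 < infinity.
    lo, hi = Fraction(1, 2 ** c), 1 - Fraction(1, 2 ** c)
    out = []
    for i in range(n):
        r = 2 * s_len(i) + c
        q = Fraction(round(beta_hat(i, r) * 2 ** r), 2 ** r)
        out.append(min(max(q, lo), hi))
    return out

beta_true = lambda i: 0.25 + 0.5 / (i + 2)
beta_hat = lambda i, r: round(beta_true(i) * 2 ** (r + 2)) / 2 ** (r + 2)
assert all(Fraction(1, 8) <= b <= Fraction(7, 8) for b in exactify(beta_hat, 100, 3))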
6 Martingale Dilation
In this section we show that certain truth-table reductions can be used to
dilate martingales for one probability measure into martingales for another,
perhaps dissimilar, probability measure on C. We first present some terminology
and notation on truth-table reductions. (Most of this notation is
standard [22], but some is specialized to our purposes.)
A truth-table reduction (briefly, a ≤_tt-reduction) is an ordered pair (f, g)
of total recursive functions such that for each x ∈ {0,1}*, there exists n(x) ∈
N such that the following two conditions hold.
(i) f(x) is (the standard encoding of) an n(x)-tuple (f_1(x), ..., f_{n(x)}(x))
of strings f_i(x) ∈ {0,1}*, which are called the queries of the reduction
(f, g) on input x. We use the notation Q_{(f,g)}(x) = {f_1(x), ..., f_{n(x)}(x)}
for the set of such queries.
(ii) g(x) is (the standard encoding of) an n(x)-input, 1-output Boolean
circuit, called the truth table of the reduction (f, g) on input x. We
identify g(x) with the Boolean function computed by this circuit, i.e.,
g(x) : {0,1}^{n(x)} → {0,1}.
A truth-table reduction (f, g) induces the function F_{(f,g)} : C → C defined by
F_{(f,g)}(A) = {x ∈ {0,1}* | g(x)([[f_1(x) ∈ A]], ..., [[f_{n(x)}(x) ∈ A]]) = 1},
where [[φ]] is 1 if φ holds and 0 otherwise.
If A and B are languages and (f, g) is a ≤_tt-reduction, then (f, g) reduces
B to A, and we write B ≤_tt A via (f, g), if B = F_{(f,g)}(A). More generally,
if A and B are languages, then B is truth-table
reducible (briefly, ≤_tt-reducible) to A, and we write B ≤_tt A, if there
exists a ≤_tt-reduction (f, g) such that B ≤_tt A via (f, g).
If (f, g) is a ≤_tt-reduction, then the function F_{(f,g)} : C → C defined
above induces a corresponding function F_{(f,g)} : {0,1}* → {0,1}*,
defined as follows. (It is standard practice to use the same notation for
these two functions, and no confusion will result from this practice here.)
Intuitively, if A ∈ C and w is a prefix of A, then F_{(f,g)}(w) is the largest prefix of
F_{(f,g)}(A) such that w answers all queries in this prefix. Formally, let w ∈
{0,1}*, and let m be the greatest nonnegative integer such that
Q_{(f,g)}(s_i) ⊆ {s_0, ..., s_{|w|-1}} for all 0 ≤ i < m. Then F_{(f,g)}(w) is the
string of length m whose i-th bit is the output of g(s_i) on the answers that w
gives to the queries of (f, g) on input s_i.
Now let (f, g) be a ≤_tt-reduction, and let z ∈ {0,1}*. Then the inverse
image of the cylinder C_z under the reduction (f, g) is
F_{(f,g)}^{-1}(C_z) = {A ∈ C | z is a prefix of F_{(f,g)}(A)}.
We can write this set in the form
F_{(f,g)}^{-1}(C_z) = ∪_{w∈I} C_w,
where I is the set of all strings w ∈ {0,1}* with the following properties.
(i) z is a prefix of F_{(f,g)}(w).
(ii) If w′ is a proper prefix of w, then z is not a prefix of F_{(f,g)}(w′).
Moreover, the cylinders C_w in this union are disjoint, so if ν is a probability
measure on C, then
ν(F_{(f,g)}^{-1}(C_z)) = Σ_{w∈I} ν(w).
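The displayed sum can be evaluated directly for small examples. The following
sketch uses a toy orderly reduction whose i-th output bit negates the answer to
the single query s_i (the reduction and all names are ours); for it, the minimal
prefixes w all have length |z|:

from itertools import product

def F(w):                              # F_{(f,g)} on prefixes, for the toy reduction
    return "".join("1" if c == "0" else "0" for c in w)

def induced(nu_cyl, z):
    # nu_{(f,g)}(z): sum nu(w) over the (here: length-|z|) minimal prefixes w
    # with z a prefix of F(w).
    return sum(nu_cyl(w)
               for w in ("".join(t) for t in product("01", repeat=len(z)))
               if F(w).startswith(z))

nu_cyl = lambda w: 0.5 ** len(w)       # the uniform measure
assert induced(nu_cyl, "0") + induced(nu_cyl, "1") == 1.0
assert induced(nu_cyl, "01") == 0.25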
The following well-known fact is easily verified.
Lemma 6.1. If ν is a probability measure on C and (f, g) is a ≤_tt-reduction,
then the function ν_{(f,g)} defined by
ν_{(f,g)}(z) = ν(F_{(f,g)}^{-1}(C_z))
is also a probability measure on C.
The probability measure ν_{(f,g)} of Lemma 6.1 is called the probability
measure induced by ν and (f, g).
In this paper, we only use the following special type of ≤_tt-reduction.
Definition. A ≤_tt-reduction (f, g) is orderly if, for all x, y, u, v ∈ {0,1}*,
if x < y, u ∈ Q_{(f,g)}(x), and v ∈ Q_{(f,g)}(y), then u < v. That is, if x precedes y
(in the standard ordering of {0,1}*), then every query of (f, g) on input x
precedes every query of (f, g) on input y.
The following is an obvious property of orderly ≤_tt-reductions.
Lemma 6.2. If ν is a coin-toss probability measure on C and (f, g) is an
orderly ≤_tt-reduction, then ν_{(f,g)} is also a coin-toss probability measure on
C.
Note that, if (f, g) is an orderly ≤_tt-reduction, then F_{(f,g)}(w) ∈ {0,1}*
for all w ∈ {0,1}*. Note also that the length of F_{(f,g)}(w) depends only
upon the length of w (i.e., |w| = |w′| implies that |F_{(f,g)}(w)| = |F_{(f,g)}(w′)|).
Finally, note that for each m ∈ N there exists l ∈ N such that |F_{(f,g)}(0^l)| ≥
m.
Definition. Let (f, g) be an orderly ≤_tt-reduction.
1. An (f, g)-step is a positive integer l such that |F_{(f,g)}(0^l)| > |F_{(f,g)}(0^{l-1})|.
2. For k ∈ N, we let step(k) be the least (f, g)-step l such that l ≥ k.
The following construction is crucial to the proof of our main theorem.
Definition. Let ν be a positive probability measure on C, let (f, g) be an
orderly ≤_tt-reduction, and let d be a ν_{(f,g)}-martingale. Then the (f, g)-
dilation of d is the function (f, g)^d : {0,1}* → [0, ∞) defined by
(f, g)^d(w) = Σ_{u∈{0,1}^{l-k}} ν(wu|w) d(F_{(f,g)}(wu)),
where k = |w| and l = step(|w|).
In other words, (f, g)^d(w) is the conditional ν-expected value of d(F_{(f,g)}(w′)),
given that w is a prefix of w′ and |w′| = step(|w|). We do not include the probability
measure ν in the notation (f, g)^d because ν (being positive) is implicit in
d.
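The conditional-expectation formula can be computed directly for a toy orderly
reduction. In the sketch below (reduction, uniform measure, and names ours), the
i-th output bit is the AND of input bits 2i and 2i+1, so the (f, g)-steps are
the even lengths; the dilation of a ν_{(f,g)}-martingale is then checked to
satisfy the uniform martingale condition, as Theorem 6.3 below asserts:

from itertools import product

def F(w):                              # i-th output bit = AND of bits 2i, 2i+1
    return "".join("1" if w[2 * i] == w[2 * i + 1] == "1" else "0"
                   for i in range(len(w) // 2))

def step(k):                           # least (f,g)-step l >= k: the even l >= 2
    l = max(k, 2)
    return l if l % 2 == 0 else l + 1

def dilate(d, w):
    # (f,g)^d(w): average of d(F(wu)) over extensions u with |wu| = step(|w|).
    l = step(len(w))
    us = ["".join(t) for t in product("01", repeat=l - len(w))]
    return sum(d(F(w + u)) for u in us) / len(us)

# d bets everything on the all-ones output; output bits have bias 1/4 here.
d = lambda z: 4.0 ** len(z) if z == "1" * len(z) else 0.0
for w in ["", "0", "1", "10", "11"]:
    assert abs(dilate(d, w) - 0.5 * (dilate(d, w + "0") + dilate(d, w + "1"))) < 1e-9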
Intuitively, the function (f, g)^d is a strategy for betting on a language
A, assuming that d itself is a strategy for betting on the language F_{(f,g)}(A).
The following theorem makes this intuition precise.
Theorem 6.3 (Martingale Dilation Theorem). Assume that ν is a positive
coin-toss probability measure on C, (f, g) is an orderly ≤_tt-reduction, and d
is a ν_{(f,g)}-martingale. Then (f, g)^d is a ν-martingale. Moreover, for every
language A ⊆ {0,1}*, if d succeeds on F_{(f,g)}(A), then (f, g)^d succeeds on
A.
A very special case of the above result (for strictly increasing ≤^P_m-reductions
under the uniform probability measure) was developed by Ambos-Spies, Terwijn,
and Zheng [2], and made explicit by Juedes and Lutz [8]. Our use of
martingale dilation in the present paper is very different from the simple
padding arguments of [2, 8].
The following two technical lemmas are used in the proof of Theorem
6.3.
Lemma 6.4. Assume that ν is a positive coin-toss probability measure on
C and (f, g) is an orderly ≤_tt-reduction. Let w ∈ {0,1}*, let k = |w|,
assume that k is an (f, g)-step, and let l = step(k + 1). Then, for every z′
extending F_{(f,g)}(w) with |z′| = |F_{(f,g)}(0^l)|,
ν_{(f,g)}(z′ | F_{(f,g)}(w)) = Σ {ν(wu|w) | u ∈ {0,1}^{l-k} and F_{(f,g)}(wu) = z′}.
Proof (sketch). Assume the hypothesis, and let z = F_{(f,g)}(w). By the prefix
decomposition preceding Lemma 6.1, ν_{(f,g)}(z′) is a sum of terms ν(w′u) with
w′ ∈ {0,1}^k and u ∈ {0,1}^{l-k}. Since ν is a coin-toss probability measure, we have
ν(w′u|w′) = ν(wu|w) for each w′ ∈ {0,1}^k such that F_{(f,g)}(w′) = z. Also, since (f, g) is orderly,
the conditions F_{(f,g)}(wu) = z′ and F_{(f,g)}(w′u) = z′ are equivalent for each such w′.
Summing over such w′ and dividing by ν_{(f,g)}(z) gives the identity.
Lemma 6.5. Assume that ν is a positive coin-toss probability measure on
C and (f, g) is an orderly ≤_tt-reduction, and assume that d
is a ν_{(f,g)}-martingale. Let w ∈ {0,1}*, assume that k = |w| is an (f, g)-step,
and let l = step(k + 1). Then
d(F_{(f,g)}(w)) = Σ_{u∈{0,1}^{l-k}} ν(wu|w) d(F_{(f,g)}(wu)).
Proof. Assume the hypothesis, and let z = F_{(f,g)}(w). Since d is a ν_{(f,g)}-martingale and ν_{(f,g)}
is positive, averaging the martingale condition over the extensions of z that are
decided at length l gives
d(z) = Σ_{z′} ν_{(f,g)}(z′|z) d(z′),
where z′ ranges over the strings of length |F_{(f,g)}(0^l)| extending z.
It follows by Lemma 6.4 that
d(z) = Σ_{u∈{0,1}^{l-k}} ν(wu|w) d(F_{(f,g)}(wu)).
Proof of Theorem 6.3. Assume the hypothesis, and write F = F_{(f,g)}.
To see that (f, g)^d is a ν-martingale, let w ∈ {0,1}*, let k = |w|, and let
l = step(k + 1). We have two cases.
Case I. k is not an (f, g)-step. Then step(k) = step(k + 1) = l, so
Σ_{b∈{0,1}} ν(wb|w) (f, g)^d(wb)
= Σ_{b∈{0,1}} ν(wb|w) Σ_{u∈{0,1}^{l-k-1}} ν(wbu|wb) d(F(wbu))
= Σ_{u∈{0,1}^{l-k}} ν(wu|w) d(F(wu)) = (f, g)^d(w).
Case II. k is an (f, g)-step, so (f, g)^d(w) = d(F(w)),
whence by Lemma 6.5
(f, g)^d(w) = Σ_{u∈{0,1}^{l-k}} ν(wu|w) d(F(wu)).
Calculating as in Case I, it follows that
Σ_{b∈{0,1}} ν(wb|w) (f, g)^d(wb) = (f, g)^d(w).
This completes the proof that (f, g)^d is a ν-martingale.
To complete the proof, let A ⊆ {0,1}*, and assume that d succeeds
on F(A). For each n ∈ N, let w_n = A[0..l_n − 1], where l_n is the unique
(f, g)-step such that |F(0^{l_n})| = n. Since l_n is an (f, g)-step,
(f, g)^d(w_n) = d(F(w_n)) = d(F(A)[0..n−1]),
so
lim sup_{n→∞} (f, g)^d(w_n) = lim sup_{n→∞} d(F(A)[0..n−1]) = ∞.
Thus (f, g)^d succeeds on A.
7 Positive Bias Reduction
In this section, we define and analyze a positive truth-table reduction that
encodes an efficient, approximate simulation of one sequence of biases by
another.
Intuitively, if ~α and ~β are strongly positive sequences of biases, then
the positive bias reduction of ~α to ~β is a ≤_tt-reduction (f, g) that "tries
to simulate" the sequence ~α with the sequence ~β by causing ν_~α to be the
probability measure induced by ν_~β and (f, g). In general, this objective
will only be approximately achieved, in the sense that the probability measure
induced by ν_~β and (f, g) will actually be a probability measure
ν_~α′, where ~α′ is a sequence of biases such that ~α′ ≈ ~α. This situation is
depicted schematically in Figure 1, where the broken arrow indicates that
(f, g) "tries" to reduce ~α to ~β, while the solid arrow indicates that (f, g)
actually reduces ~α′ to ~β.
Figure 1: Schematic depiction of positive bias reduction
The reduction (f, g) is constructed precisely as follows.
Construction 7.1 (Positive Bias Reduction). Let ~α and ~β be strongly
positive sequences of biases, and let c ∈ N be a constant, depending only on
the strong positivity bounds for ~α and ~β, chosen large enough for the analysis
in Lemmas 7.2 through 7.5 below. Write i(x) for the index with x = s_{i(x)}.
For each x ∈ {0,1}* and 0 ≤ n < 2^{c|x|}, let q(x, n) = xy, where y is the
n-th element of {0,1}^{c|x|}, and let j(x, n) be the index of the string q(x, n),
i.e., q(x, n) = s_{j(x,n)}. Then the positive bias reduction of ~α to ~β is the
ordered pair (f, g) of functions defined by the procedure in Figure 2. (For
convenience, the procedure defines additional parameters that are useful in
the subsequent analysis.)
Figure 2: Construction of positive bias reduction. (The procedure consists of
two nested while-loops: on input x, the outer loop appends a fresh AND h(x, k)
to the OR g(x) while the induced probability α′ remains too far below α_{i(x)},
incrementing k; the inner loop extends the current AND one fresh variable at a
time, incrementing l, while including the AND would push α′ above α_{i(x)}.)
The following general remarks will be helpful in understanding Construction
7.1. (A runnable sketch of the probability bookkeeping follows these remarks.)
(a) The boldface variables v_0, v_1, v_2, ... are the Boolean inputs to the Boolean
function g(x) being constructed. The Boolean function g(x) is an OR
of k(x) Boolean functions h(x, k), i.e.,
g(x) = h(x, 1) ∨ ... ∨ h(x, k(x)).
The Boolean functions g(x, 0), g(x, 1), ..., g(x, k(x)) are preliminary approximations
of the Boolean function g(x). In particular, g(x, k) = h(x, 1) ∨ ... ∨ h(x, k)
for all 0 ≤ k ≤ k(x). Thus g(x, 0) is the constant-0 Boolean function.
(b) The Boolean function h(x, k) is an AND of l(x, k) consecutive input
variables. The subscript n is incremented globally so that no input
variable appears more than once in g(x). Just as g(x, k) is the k-th
"partial OR" of g(x), h(x, k, l) is the l-th "partial AND" of h(x, k).
Thus h(x, k, 0) is the constant-1 Boolean function.
(c) The input variables v_0, v_1, v_2, ... correspond to the respective queries
q(x, 0), q(x, 1), q(x, 2), ...; if the membership bits of these queries are chosen
according to the sequence of biases ~β, then β_{j(x,n)} is the probability
that v_n = 1, the product of the β_{j(x,n)} over the variables of h(x, k) is the
probability that h(x, k) = 1, and α′_{i(x)} is the
probability that g(x) = 1. The while-loops ensure that α′_{i(x)} ≤ α_{i(x)},
with the gap small enough that Σ_{i=0}^∞ |α_i − α′_i| < ∞.
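As promised above, here is a runnable sketch of the probability bookkeeping
behind the two while-loops (the tolerance parameter and all names are ours, and
the real construction also lays out queries and circuit encodings, which we omit):

def or_of_ands(alpha, betas, tol):
    # Greedily build an OR of ANDs of fresh biased variables whose firing
    # probability lands in [alpha - tol, alpha].  'betas' is an iterator of
    # the available biases; returns (AND lengths, achieved probability).
    lengths, miss = [], 1.0            # miss = Pr[no AND so far fired]
    while 1.0 - miss < alpha - tol:
        p, l = 1.0, 0                  # p = Pr[current AND fires]
        while (1.0 - miss) + miss * p >= alpha:
            p *= next(betas)           # extend the AND with a fresh variable
            l += 1
        miss *= 1.0 - p                # commit the AND; stays below alpha
        lengths.append(l)
    return lengths, 1.0 - miss

lengths, achieved = or_of_ands(0.30, iter([0.6] * 1000), 1e-3)
assert 0.30 - 1e-3 <= achieved <= 0.30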
The following lemmas provide some quantitative analysis of the behavior
of Construction 7.1; the implied constants depend only on c and on the strong
positivity bounds for ~α and ~β.
Lemma 7.2. In Construction 7.1, for all x ∈ {0,1}* and 0 ≤ k ≤ k(x), the
AND length l(x, k) is O(c|x| + 1).
Proof. Fix such x and k, and let l = l(x, k). If l = 0 the result is trivial,
so assume that l > 0. By the minimality of l, the partial AND h(x, k, l − 1)
still fired with probability large enough to overshoot the target; since each
factor β_{j(x,n)} is at most 1 − δ, the firing probability of h(x, k, l) decays
geometrically in l, and the stated bound follows.
Lemma 7.3. In Construction 7.1, for all x ∈ {0,1}* and 0 < k ≤ k(x),
appending the k-th AND reduces the gap α_{i(x)} − Pr[g(x, k) = 1] by at least
a constant factor.
Proof. Fix such x and k. By the minimality of l(x, k), the committed AND
fires with probability at least a fixed fraction of the remaining gap.
The lemma now follows immediately by induction.
Lemma 7.4. In Construction 7.1, for all x ∈ {0,1}*, k(x) is O(c|x| + 1).
Proof. Fix x ∈ {0,1}*. By Lemma 7.3 and the minimality of k(x), the gap
remaining after k(x) − 1 ANDs still exceeds the stopping tolerance, which is
2^{-O(c|x|)}; taking logarithms gives the bound.
Lemma 7.5. In Construction 7.1, for all x ∈ {0,1}*, the total number of
queries n(x) = Σ_{k=1}^{k(x)} l(x, k) is at most 2^{c|x|}; in particular, the
construction never exhausts the available query strings q(x, n).
Proof. Let x ∈ {0,1}*. By Lemmas 7.2 and 7.4, n(x) is O((c|x| + 1)²),
which is at most 2^{c|x|} once c is chosen sufficiently large.
Definition. Let (f, g) be a ≤_tt-reduction.
1. (f, g) is positive (briefly, a ≤_{pos-tt}-reduction) if, for all A, B ⊆ {0,1}*,
A ⊆ B implies F_{(f,g)}(A) ⊆ F_{(f,g)}(B).
2. (f, g) is polynomial-time computable (briefly, a ≤^P_tt-reduction) if the
functions f and g are computable in polynomial time.
3. (f, g) is polynomial-time computable with linear-bounded queries (briefly,
a ≤^{P,lin}_tt-reduction) if (f, g) is a ≤^P_tt-reduction and there is a constant
c ∈ N such that, for all x ∈ {0,1}*, Q_{(f,g)}(x) ⊆ {0,1}^{≤c(1+|x|)}.
Of course, a ≤^{P,lin}_{pos-tt}-reduction is a ≤_tt-reduction with all of the above
properties.
The following result presents the properties of the positive bias reduction
that are used in the proof of our main theorem.
Theorem 7.6 (Positive Bias Reduction Theorem). Let ~α and ~β be strongly
positive, P-exact sequences of biases, and let (f, g) be the positive bias reduction
of ~α to ~β. Then (f, g) is an orderly ≤^{P,lin}_{pos-tt}-reduction, and the
probability measure induced by ν_~β and (f, g) is a coin-toss probability measure
ν_~α′ with ~α′ ≈ ~α.
Proof. Assume the hypothesis. By inspection and Lemma 7.5, the pair
(f, g) is an orderly ≤^{P,lin}_tt-reduction. (Lemma 7.5 also ensures that f(x) is
well-defined.) The reduction is also positive, since only ANDs and ORs are
used in the construction of g(x). Thus (f, g) is an orderly ≤^{P,lin}_{pos-tt}-reduction.
By remark (c) following Construction 7.1, the probability measure induced
by ν_~β and (f, g) is the coin-toss probability measure ν_~α′, where
~α′ = (α′_0, α′_1, α′_2, ...) is as defined in the construction. Moreover, the
while-loops make the errors summable, i.e.,
Σ_{i=0}^∞ |α′_i − α_i| < ∞,
so ~α ≈ ~α′.
8 Equivalence for Complexity Classes
Many important complexity classes, including P, NP, co-NP, R, BPP, AM,
P/Poly, PH, PSPACE, etc., are known to be closed under ≤^P_{pos-tt}-reductions,
hence certainly under ≤^{P,lin}_{pos-tt}-reductions. The following theorem, which is
the main result of this paper, says that the p-measure of such a class is somewhat
insensitive to certain changes in the underlying probability measure.
The proof is now easy, given the machinery of the preceding sections.
Theorem 8.1 (Bias Equivalence Theorem). Assume that ~α and ~β are
strongly positive P-sequences of biases, and let C be a class of languages
that is closed under ≤^{P,lin}_{pos-tt}-reductions. Then C has p-ν_~α-measure 0
if and only if C has p-ν_~β-measure 0.
Proof. Assume the hypothesis, and assume that C has p-ν_~α-measure 0.
By symmetry, it suffices to show that C has p-ν_~β-measure 0.
The proof follows the scheme depicted in Figure 3. By Lemma 5.2,
there exist P-exact sequences of biases ~α′ and ~β′ such that ~α ≈ ~α′ and ~β ≈ ~β′. Let
(f, g) be the positive bias reduction of ~α′ to ~β′. Then, by the Positive Bias
Reduction Theorem (Theorem 7.6), (f, g) is an orderly ≤^{P,lin}_{pos-tt}-reduction,
and the probability measure induced by ν_~β′ and (f, g) is ν_~α″, where ~α″ ≈ ~α′.
Figure 3: Scheme of proof of Bias Equivalence Theorem
Since ~α ≈ ~α′ ≈ ~α″ and C has p-ν_~α-measure 0, the Summable Equivalence Theorem
(Theorem 4.3) tells us that there is a p-ν_~α″-martingale d such that C ⊆ S^∞[d].
By the Martingale Dilation Theorem (Theorem 6.3), the function (f, g)^d
is then a ν_~β′-martingale. In fact, it is easily checked that (f, g)^d is a p-ν_~β′-
martingale.
Now let A ∈ C. Then, since C is closed under ≤^{P,lin}_{pos-tt}-reductions,
F_{(f,g)}(A) ∈ C ⊆ S^∞[d]. It follows by the Martingale Dilation Theorem
that A ∈ S^∞[(f, g)^d]. Thus C ⊆ S^∞[(f, g)^d]. Since (f, g)^d is a p-ν_~β′-
martingale, this shows that C has p-ν_~β′-measure 0. Finally, since ~β ≈ ~β′, it follows by
the Summable Equivalence Theorem that C has p-ν_~β-measure 0.
It is clear that the Bias Equivalence Theorem remains true if the resource
bound on the measure is relaxed. That is, the analogs of Theorem 8.1 for
p_2-measure, pspace-measure, rec-measure, constructive measure, and classical
measure all immediately follow. We conclude by noting that the analogs of
Theorem 8.1 for measure in E and measure in E_2 also immediately follow.
Corollary 8.2. Under the hypothesis of Theorem 8.1, C has ν_~α-measure 0
in E if and only if C has ν_~β-measure 0 in E, and C has ν_~α-measure 0 in E_2
if and only if C has ν_~β-measure 0 in E_2.
Proof. If C is closed under ≤^{P,lin}_{pos-tt}-reductions, then so are the classes
C ∩ E and C ∩ E_2.
9 Conclusion
Our main result, the Bias Equivalence Theorem, says that every strongly
positive, P-computable, coin-toss probability measure ν is equivalent to the
uniform probability measure μ, in the sense that
ν_p(C) = 0 if and only if μ_p(C) = 0
for all classes C ∈ Γ, where Γ is a family that contains P, NP, co-NP, R, BPP,
P/Poly, PH and many other classes of interest. It would be illuminating to
learn more about which probability measures are, and which probability
measures are not, equivalent to μ in this sense.
It would also be of interest to know whether the Summable Equivalence
Theorem can be strengthened. Specifically, say that two sequences of biases
~α and ~β are square-summably equivalent, and write ~α ≈_2 ~β, if
Σ_{i=0}^∞ (α_i − β_i)² < ∞.
A classical theorem of Kakutani [9] says that, if ~α and ~β are
strongly positive sequences of biases such that ~α ≈_2 ~β, then for every set
X ⊆ C, X has (classical) ~α-measure 0 if and only if X has ~β-measure 0. A
constructive improvement of this theorem by Vovk [28] says that, if ~α and
~β are strongly positive, computable sequences of biases such that ~α ≈_2 ~β,
then for every set X ⊆ C, X has constructive ~α-measure 0 if and only if
X has constructive ~β-measure 0. (The Kakutani and Vovk theorems are
more general than this, but for the sake of brevity, we restrict the present
discussion to coin-toss probability measures.) The Summable Equivalence
Theorem is stronger than these results in one sense, but weaker in another.
It is stronger in that it holds for p-measure, but it is weaker in that it
requires the stronger hypothesis that ~α ≈ ~β. We thus ask whether there is
a "square-summable equivalence theorem" for p-measure. That is, if ~α and
~β are strongly positive, p-computable sequences of biases such that ~α ≈_2 ~β,
is it necessarily the case that, for every set X ⊆ C, X has p-~α-measure 0
if and only if X has p-~β-measure 0? (Note: Kautz [10] has very recently
answered this question affirmatively.)
Acknowledgments. We thank Giora Slutzki, Martin Strauss, and
other participants in the ISU Information and Complexity Seminar for useful
remarks and suggestions. We especially thank Giora Slutzki for suggesting
a simplified presentation of Lemma 4.2.
--R
A comparison of weak completeness notions.
Resource bounded randomness and weakly complete problems.
Fine separation of average time complexity classes.
Families of recursive predicates of measure zero.
Measure Theory.
Weakly complete problems are not rare.
The complexity and distribution of hard problems.
Weak completeness in E and E_2.
On the equivalence of infinite product measures.
Personal communication
Observations on measure and lowness for
Almost everywhere high nonuniform complexity.
The quantitative structure of exponential time.
Weakly hard problems.
Cook versus Karp-Levin: Separating completeness notions if NP is not small
Almost every set in exponential time is P-bi-immune
"almost all"
Pseudorandom generators
Klassifikation der Zufallsgesetze nach Komplexität und Ordnung
A unified approach to the definition of random sequences.
Process complexity and effective random tests.
Random Sequences.
On a randomness criterion.
The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms.
--TR | complexity classes;martingales;resource-bounded measure |
330384 | Unlinkable serial transactions. | We present a protocol for unlinkable serial transactions suitable for a variety of network-based subscription services. It is the first protocol to use cryptographic blinding to enable subscription services. The protocol prevents the service from tracking the behavior of its customers, while protecting the service vendor from abuse due to simultaneous or cloned use by a single subscriber. Our basic protocol structure and recovery protocol are robust against failure in protocol termination. We evaluate the security of the basic protocol and extend the basic protocol to include auditing, which further deters subscription sharing. We describe other applications of unlinkable serial transactions for pay-per-use trans subscription, third-party subscription management, multivendor coupons, proof of group membership, and voting. | Introduction
This paper is motivated by an apparent conflict of interest concerning the privacy
of information in an electronic exchange. Commercial service providers would
like to be sure that they are paid for their services and protected from abuse
due to simultaneous or "cloned" usage from a single subscription. To this end
they have an interest in keeping a close eye on customer behavior. On the other
hand customers have an interest in the privacy of their personal information, in
particular the privacy of profiles of their commercial activity. One well known
approach to this problem is to allow a customer to register with vendors under
pseudonyms, one for each vendor [4]. By conducting transactions using anonymous
electronic cash (e-cash) the customer's anonymity is maintained. But, the
vendor is able to protect his interests by maintaining a profile on each of his
anonymous customers.
In this paper we present effectively the opposite solution to this problem.
The customer may be known to the vendor, but his behavior is untraceable.
This would appear infeasible. If transactions cannot be linked to the customer,
what is to keep him from abusing the service? For example, if someone fails to
return a rented video, the video rental company would like at minimum to be
sure that this person cannot rent any more videos. But, the company cannot do
this if they cannot determine who the renter is. 1 We will present a protocol that
makes transactions unlinkable but protects vendors from such abuses.
For the near future at least, a large part of the market on the Internet and
in other electronic venues will rely on credit card based models (such as SET
* Center for High Assurance Computer Systems, Code 5543, Naval Research Laboratory,
Washington DC 20375, USA. {lastname}@itd.nrl.navy.mil
** AT&T Labs-Research, Rm 2A-345A, 600 Mountain Ave., Murray Hill NJ 07974,
USA. stubblebine@research.att.com
1 In a pseudonym based scheme, such a customer could try to open an account under
a new pseudonym, but there are mechanisms to make this difficult [5]. Thus, the
interests of the vendor can be protected.
[16] or Cybercash [7] or simply sending credit card numbers over SSL). Applications
of our protocol that require payment are not dependent on the payment
mechanisms used. Thus, our protocol can be easily applied now but is equally
amenable to use with e-cash. Even in an environment in which pseudonyms and
anonymous e-cash are generally available, vendor profiles of customers (or their
pseudonyms) might be undesirable because the customer's anonymity protection
has a single point of failure. If the vendor is ever able to link a pseudonym to
a customer, the entire profile immediately becomes linked to that customer. In
our solution, if a customer is ever linked to a transaction, only his link to the one
transaction is revealed. (This is somewhat analogous to the property of perfect
forward secrecy in key establishment protocols.)
On what applications could our approach be used? Consider a subscription
service for an on-line newspaper or encyclopedia. Customers might have an interest
in keeping the searches they conduct private. At the same time, vendors
would like to make it difficult for customers to transfer their ability to access
the service. This will serve as our primary example.
We will also consider other applications. One example is pay-per-use service
within a subscription (e.g., Lexis-Nexis or pay-per-view movies available to cable
subscribers). Unlinkable serial transactions can also be used to provide
multivendor packages as well as ongoing discounts. And, they can be used for
anonymous proof of membership for applications having nothing directly to do
with electronic commerce. Applications include proof of age and proof of resi-
dency. They can also be used to construct a simple voter registration protocol.
The paper is organized as follows. In section 2 we describe related work.
Most of the basic mechanisms on which we rely come from work on e-cash;
although, we are able to simplify some of those mechanisms for our purposes. We
describe these and their relation to our work. We also rely on the assumption that
communicating parties will not be identified by the communications medium,
independent of the messages they send. Services that prevent this are discussed as
well. In section 3 we will describe the basic protocol including set up, usage, and
termination of a subscription. We also discuss recovery from broken connections.
In section 4 we describe various applications of unlinkable serial transactions and
associated protocol variants. In section 5 we present concluding remarks.
Related Work
2.1 Digital Cash
Digital cash, especially anonymous e-cash as presented by Chaum et al. [6], is
characterized by several requirements [13]: independent of physical requirements,
unforgeable and uncopyable, untraceable purchases, off-line, transferable, and
subdividable. No known e-cash system has all of these properties, and certain
properties, especially e-cash that can be divided into unlinkable change, tend to
be computationally expensive.
E-cash can either be on-line or off-line. In an on-line scheme, before completing
the transaction, the vendor can verify with a bank that the cash has not
previously been spent. In an off-line scheme, double spending must be detectable
later, and the identity of the double spender must then be revealed. Previously
agreed upon penalties can then be applied that make double spending not cost
effective.
Chaum's notion of blinding [5] is a fundamental technique used in anonymous
e-cash and assigning pseudonyms. A bank customer may want a certain amount
of e-cash from the bank, but may not trust the bank not to mark (and record)
the e-cash in some way. One solution is for the bank to sign something for the
customer that the bank cannot read, while the customer presents the bank with
evidence that the bank is signing something legitimate.
Chaum's blinding depends on the commutativity of modular multiplication
operations. Therefore, the customer can create an e-cash certificate, multiply
it by a random number called a blinding factor. If the bank signs the blinded
certificate, the customer can then divide out the blinding factor. The result is
the unblinded certificate signed by the bank. But the bank does not know what
it signed.
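The round trip just described can be made concrete with textbook RSA, whose
signing operation commutes with multiplication. The toy sketch below uses tiny
primes, no padding, and parameter names of our own choosing; it is illustrative
only and not a deployable blind-signature scheme:

import secrets
from math import gcd

p, q, e = 1000003, 1000033, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))       # bank's private signing exponent

m = 123456789                            # the certificate to be signed
while True:
    r = secrets.randbelow(n - 2) + 2     # blinding factor coprime to n
    if gcd(r, n) == 1:
        break

blinded = (m * pow(r, e, n)) % n         # customer multiplies in r^e
sig_blinded = pow(blinded, d, n)         # bank signs without seeing m
sig = (sig_blinded * pow(r, -1, n)) % n  # customer divides out r
assert sig == pow(m, d, n)               # a valid bank signature on m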
How can the customer assure the bank that the blinded certificate is legiti-
mate? In Chaum's scheme, the customer presents the bank with many blinded
certificates that differ in serial number, perhaps, but not in denomination. The
bank chooses the one it will sign and asks the customer for the blinding factors
of the others. If the randomly chosen certificates turn out to be legitimate when
unblinded, the bank can have confidence that the remaining blinded certificate
is legitimate too.
One on-line e-cash scheme is presented in [15]. To obtain an e-cash certificate
that only he can use, a customer presents the bank with a hash of a random
number. The bank signs an e-cash certificate linking that hash with a denomi-
nation. To use the e-cash, the customer reveals the random number to a vendor,
who in turn takes the e-cash to a bank. Since hashes are one-way functions, it
would be very hard for someone other than the customer to guess the secret that
allows the e-cash to be spent. After the money is spent, the bank must record the
hash to prevent it from being spent again. This scheme can be combined with
blinding, to hide the actual e-cash certificate from the bank during withdrawal.
One off-line e-cash scheme is presented in [11]. There, the bank signs blinded
certificates. To spend the e-cash, the customer must respond to a vendor's chal-
lenge. The response can be checked by inspecting the e-cash. Double spending is
prevented because the challenge/response scheme is constructed so the combination
of responses to two different challenges reveals the identity of the customer.
As long as the customer does not double spend, his identity is protected. Nobody
but the customer can generate responses, so the customer cannot be framed for
double spending.
It may be the case that truly anonymous unlinkable e-cash enables criminal
activity. Several key escrow or trustee-based systems [2] have been developed
that can reveal identities to authorities who obtain proper authorizations.
Our notion of unlinkable certificates came from asking the following ques-
tion: what else shares some of the features of digital cash? Unlinkable certificates
share many of these features: they must preserve the user's anonymity
and not be traceable, and they must protect the issuer and not be forgeable or
copyable. Unlike e-cash, however, transferability is not desirable. We use hashing
of random numbers and blinding in our development of unlinkable certificates.
Our unlinkable certificates differ from Chaum's pseudonyms [5] which are an
alternative to a universal identification system. Each pseudonym is supposed to
identify its owner to some institution and not be linkable across different insti-
tutions. Unlinkable serial certificates are designed to be unlinkable both across
institutions and across transactions within a single institution. In particular, we
want the vendor to be unable to link transactions to a single customer, even if
that customer had to identify himself initially (i.e., during the subscription pro-
cess). At the same time, the vendor needs to be able to protect himself against
customers that abuse his service.
Our blinding also differs from the usual approach. Typically some mechanism
is necessary to assure either the issuing bank or receiving vendor that the
certificate blindly signed by the issuer has the right form, i.e., that the customer
has not tricked the signer into signing something inappropriate. We described
Chaum's basic approach to doing this above. By moving relevant assurances to
other parts of the protocols, we are able to eliminate the need for such verifica-
tion. The result is a simplification of the blinding scheme.
2.2 Anonymity Services
How can a customer keep his private information private if communication channels
reveal identities? For example, vendors having toll-free numbers can subscribe
to services that reveal callers' phone numbers to the vendor thereby obviating
any pseudonym the customer may be using. A similar service in the
form of caller-id is now available to many private customers. If a communication
channel implicitly reveals identities, how can customer's private information be
protected?
The solution lies in separating identification from connections. The connection
should not reveal information. Identifying information should be carried
over the connection. (Of course, vendors and private parties are welcome to
close connections that do not immediately provide sufficient identifying infor-
mation.) On the Internet, depending upon one's environment and threat model,
several solutions exist.
For e-mail, anonymous remailers can be used to forward mail through a
service that promises not to reveal the sender's identity to the recipient. Users
worried about traffic analysis can use Babel [12] or other Mixmaster [8] based
remailers which forward messages through a series of Chaum mixes [4]. Each
mix can identify only the previous and next mix, and never (both) the sender
and recipient.
For Web browsing, the Anonymizer [1] provides a degree of protection. Web
connections made through the Anonymizer are anonymized. By looking at connection
information, packet headers, etc. the destination Web server can only
identify that the connection came from (through) the Anonymizer.
Onion routing [17] provides anonymizing services for a variety of Internet
services over connections that are resistant to traffic analysis. Like Babel, onion
routing can be used for e-mail. Onion routing can also be used to hide Web
browsing, remote logins, and file transfers. If the communicating parties have secure
connections to endpoint onion routers, communication can be anonymous
to both the network and observers, but the parties may reveal identifying information
to each other. The goal of onion routing is anonymous connections, not
anonymous communication. Other application independent systems that complicate
traffic analysis in networks have been designed or proposed. In [9] a
cryptographically layered structure similar to onions in onion routing is used to
forward individual IP packets through a network, essentially building a connection
for each packet in a connectionless service. In [14], mixes are used to make
an ISDN system that hides the individual within a local switch originating or
receiving a call.
3 Transaction Unlinkability
In this section we describe protocols that prevent linking of a client's transactions
to each other. Consequently, they also cannot be linked to the client himself. We
assume that the client has subscribed to a service with whom he will conduct
these transactions and has provided adequate identifying and billing information
(e.g., credit card numbers). The protocols make use of many basic e-cash primitives
but are generally simpler than protocols using these primitives in their
more common applications.
The basic protocol allows a customer to sign up for unlimited use of some
subscription service for a period of time but prevents the service from determining
when he has used the service or what he has accessed. At the same time,
mechanisms are provided that make it difficult for the customer to share his
subscription with others and leaves him vulnerable to detection and financial
loss if he should do so. First we set out the requirements that such protocols
should meet.
3.1 Requirements
Client Privacy Privacy of clients should be protected. Specifically, it should be
difficult for a vendor or others to link the client to any particular requested
transaction. It should also be difficult for the vendor to link any one transaction
request with any other. (Thus, building a profile that might ultimately
be tied to a client is difficult.)
Service Guarantee Clients should be assured that no one can steal from them
the service for which they contracted, i.e., that vendors cannot be tricked
into servicing invalid clients at their expense.
Fraud Prevention Vendors should be assured that they are not providing un-
contracted services. Specifically, there should be no more active transaction
requests for a service possible at any one time than the number of paid
subscriptions at that time.
3.2 Basic Unlinkable Serial Protocol
The basic protocol has two phases, registration and certificate redemption, optionally
followed by a termination phase. The goal of registration is to issue
credentials to a new subscriber. The new subscriber, C, presents sufficient identifying
and payment information to the vendor, V . The vendor returns a single
blinded certificate, which authorizes the client to later execute a single transaction
with that service.
In the certificate redemption phase, clients spend a certificate and execute
a transaction. At the end of the certificate redemption phase, the vendor issues
the client another blinded certificate. The vendor cannot link the new certificate
to the spent one, so he cannot use it to link transactions to one another.
We assume that the customer has an associated identifier C for each account,
whether or not his identity is actually known by the vendor. (He may in fact
have different identifiers for different accounts.) We use square braces to indicate
message authentication and curly braces to indicate message confidentiality.
Thus '[X]_K' refers to data X signed with key K or to a keyed hash of X
using K, and '{X}_K' refers to X encrypted with key K. For our purposes, both of
these are used to refer to mechanisms that also provide message integrity. We
write blind(X) for the result of blinding X, for use with the appropriate
signature key.
3.3 Registration
Message 1 C -> V : {[Request for certificate of type S, CreditAuth, K_audit, blind(h(N_0))]_KCV, KCV}_V
Message 2 V -> C : [blind(h(N_0))]_KS
The signature key KS in message 2 is the vendor's signature key for service S and
is only used to sign blinded hashes. A signed hash is a certificate. The service
key is also subject to periodic renewal. Service keys have published expiration
times. All certificates should be used or exchanged by that time. We will see that
there is no need to verify the structure of the blinded hashed nonce. If the client
substitutes anything inappropriate the result can only be an invalid certificate. In
message 1, the CreditAuth is a credit authorization which is returned by V when
the subscription is terminated. 2 To receive CreditAuth the client must produce
the secret K audit . CreditAuth can be held by V in the event C fails an audit.
(Audits will be described below.) The decision as to whether V actually draws
against this credit is a policy decision and is outside the scope of this paper.
The vendor must remember this sequence of messages, in case message 2 was
not received by the client. (See section 3.8.) For this registration protocol, the
service should consider message 2 to have been received after some period of
time. For (space) efficiency, an acknowledgement message may be added:
Message 3 C -> V : [Ack]_KCV
2 In other protocol variants, CreditAuth can be a form of deposit. However, a traditional
deposit is sometimes undesirable since money is held for the entire term of the subscription.
A customer may wish to make use of his subscription from multiple machines,
e.g., a base machine at his home or office and a laptop machine used when
traveling. It may be considered too much of an inconvenience to require the
customer to transport the current unspent certificate for each of his subscriptions
to his next likely platform of use. The vendor may therefore allow the customer
to obtain a number of initial certificates, possibly at no additional fee or for
a nominal charge. Similarly, the customer might be allowed to add an initial
certificate during his subscription if he begins using a new machine. The vendor
will need to decide which policy best meets his needs.
KCV is used to link protocol messages to one another. This becomes even
more important when certificates are redeemed for transactions. We will discuss
further assumptions and requirements regarding this linking after presenting the
certificate redemption protocol.
3.4 Certificate Redemption
When the customer wants to make use of the service, he conducts a certificate
redemption protocol with V . Certificate redemption consists of certificate
spending, transaction execution, and certificate renewal.
Message 1 C -> V : {[Request for transaction of type S, [h(N_i)]_KS, N_i, blind(h(N_{i+1}))]_KCV, KCV}_V
Message 2 V -> C : [Approved]_KCV or [Not approved]_KCV or [Audit]_KCV
Message 3 C <-> V : transaction, protected by KCV
Message 4 V -> C : {[blind(h(N_{i+1}))]_KS}_KCV
The transaction, message 3, is only done if message 2 was [Approved ] KCV .
The other two possibilities are discussed in the next sections. We delay the release
of the new certificate, [blind(h(N_{i+1}))]_KS in message 4, until the transaction ends, to prevent the client
from beginning a new certificate redemption protocol before the current one
completes. If the new certificate were released before the transaction, a subscriber
could run his own subscription server which would proxy transactions for his
clients.
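A single redemption round can be sketched end to end by combining the hash-based
certificates with RSA blind signatures. Everything below (toy RSA parameters,
SHA-256 standing in for h, and all function and variable names) is our
illustrative assumption rather than the paper's specification, and messages 1
through 4 are compressed into one call:

import hashlib, secrets
from math import gcd

p, q, e = 1000003, 1000033, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))         # vendor's signing key for service S

def H(x: bytes) -> int:                    # h(N), reduced mod n for toy RSA
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % n

spent = set()                              # vendor's log of spent certificates

def redeem(cert, N_i, blinded_next):
    # Vendor: check that cert is a fresh signature on h(N_i); if so, log it
    # and blindly sign the next, unseen certificate.
    if cert in spent or pow(cert, e, n) != H(N_i):
        return None                        # "Not approved"
    spent.add(cert)
    return pow(blinded_next, d, n)

# Customer: spend certificate i, obtain an unlinkable certificate i+1.
N_i, N_next = b"nonce-i", secrets.token_bytes(16)
cert_i = pow(H(N_i), d, n)                 # issued earlier by the vendor
while True:
    r = secrets.randbelow(n - 2) + 2       # blinding factor
    if gcd(r, n) == 1:
        break
sig_b = redeem(cert_i, N_i, (H(N_next) * pow(r, e, n)) % n)
cert_next = (sig_b * pow(r, -1, n)) % n
assert pow(cert_next, e, n) == H(N_next)   # fresh, unlinkable certificate
assert redeem(cert_i, N_i, 1) is None      # double spending is refused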
KCV is a key that is used to protect the integrity of the session; C should
choose it to be unique for each session. If KCV should be compromised and then
used in a later session, an attacker could create her own second field in the first
message. By so doing, she could hijack the subscription.
Uniqueness of KCV is thus important to honest customers. But, session integrity
is important to the vendor as well. The vendor would like to be sure that
transaction queries are only processed in connection with a legitimate certificate
renewal. Unfortunately, KCV may not be enough by itself to guarantee integrity
of a protocol session. One or more customers might intentionally reuse the same
session key and share it with others. Anyone who has this key could then submit
queries integrity protected by it. As long as such a query is submitted during
an active legitimate session for which it is the session key, there is nothing in
the protocol that distinguishes this query from legitimate queries. This would
allow wide sharing of subscriptions by effectively bypassing certificate spend-
ing. Other aspects of protocol implementation might prevent this. But, to be
explicit, we will assume that uses of KCV are somehow rendered serial within
a protocol run. For example, KCV might be used in the protocol in a stream
cipher. Alternatively, KCV might be used as a secret nonce that is hashed with
plaintext. The plaintext and hash are sent in each message. Each time a message
is sent the nonce could be incremented. If something is done to make each use
of KCV in a protocol session unique and tied to previous uses within that run,
then sharing of subscriptions by this method becomes at least as inconvenient
as sharing them by passing the unspent certificate around. We make this same
assumption for all protocols mentioned in this paper that use a session key to
protect the integrity of the session.
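One simple way to realize this serialization, sketched below with HMAC-SHA256
(the construction is ours, not the paper's): authenticate every message together
with a strictly increasing per-run counter, so that a message built with a
shared or replayed KCV verifies only if it extends the current run:

import hmac, hashlib

def tag(k_cv: bytes, counter: int, msg: bytes) -> bytes:
    # MAC the message together with its position in the run.
    return hmac.new(k_cv, counter.to_bytes(8, "big") + msg, hashlib.sha256).digest()

def verify(k_cv: bytes, counter: int, msg: bytes, t: bytes) -> bool:
    return hmac.compare_digest(tag(k_cv, counter, msg), t)

k = b"session key KCV"
t = tag(k, 3, b"query")
assert verify(k, 3, b"query", t) and not verify(k, 4, b"query", t)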
As in the registration protocol, the vendor must remember the messages
sent in this protocol (except for the transaction messages) in case the client
never received the new (blinded) certificate. For efficiency, an acknowledgement
message may be added:
Message 5 C -> V : [Ack]_KCV
3.5 Not approved
If the response in message 2 is Not approved , then the protocol terminates.
The response to a request for service might be Not approved for a number of
reasons. These include that the certificate has been spent already, the nonce
does not match the submitted certificate, and the certificate is not valid for the
service requested. Alternatively, the certificate submitted might use an expired
key. If the client is a valid subscriber who never received an initial certificate
for the current key, this should be reflected in the vendor's records. The client
can then get an initial certificate in the usual manner. Off-line appeal will be
necessary for clients who feel they have been refused a legitimate transaction
request. We have designed these protocols under the assumption that appeals
will be automatically decided in favor of the client, as long as the client has not
appealed too many times.
3.6 Audit
If the response is Audit , then a special audit occurs in which C must present some
proof that he is a valid subscriber within a short period of time. In particular, C
must prove knowledge of K audit , which was sent to V during registration. If this
is satisfactory, a new certificate is issued. If it is not satisfactory or if C does not
comply, then the protocol terminates, and the certificate is logged along with a
note that it was used during a failed audit. In either case, no transaction takes
place so audited customers are not linked to specific transaction requests. The
main purpose of audits here is to serve as a secondary deterrent to sharing a
subscription with a nonsubscriber. (The primary deterrent is the inconvenience
of passing the certificate back and forth between those sharing as compared
with the cost of obtaining another subscription.) We will see that if anyone can
demonstrate knowledge of K audit and provides a valid certificate, then he can
terminate the corresponding subscription and the vendor will transfer CreditAuth
to him. Thus, C will not want to share K audit with anyone whom he does not
trust not to redeem the CreditAuth. If a customer is ever caught during an
audit having given away his certificate but not his K audit , he effectively forfeits
his subscription (and CreditAuth). This is because that certificate can never be
used again, and no new certificate is issued to continue the subscription. Off-line
appeal mechanisms may again be available for customers who, for example, lose
certificates or secret nonces.
The audit protocol is as follows:
Message 1 C -> V : {[Request for transaction of type S, [h(N_i)]_KS, N_i, blind(h(N_{i+1}))]_KCV, KCV}_V
Message 2 V -> C : [Audit]_KCV
Message 3 C -> V : {K_audit}_KCV
Message 4 V -> C : {[blind(h(N_{i+1}))]_KS}_KCV or {Not approved}_KCV
Similarly to the basic certificate redemption protocol, if message 4 is
{Not approved}_KCV, then the protocol terminates. Unlike the basic certificate
redemption protocol there is no transaction phase. So, there is no direct link
between any identifying information revealed in the audit and any particular
transaction. However, by exercising the audit check frequently or at strategic
times, the vendor can learn both the client's usage frequency and patterns. This
might allow the vendor to correlate later transactions (and possibly earlier trans-
actions) with the particular client. The client might counter this limitation by
employing a masking scheme on top of the basic protocol. However, this can
considerably increase the load on the subscription service. Clients might also
counter such vendor analysis by delaying ordinary transaction requests for a
random amount of time following an audit. This places no extra burden on the
subscription service but may cause customers inconvenience substantially beyond
that of audits themselves. Since audits are a secondary deterrent to abuse, they
might be conducted infrequently. The tradeoffs between threats to anonymity
and the deterrence effect on subscription sharing are difficult to assess a priori.
Thus, exactly how frequent to make audits is currently difficult to say.
The service must remember the sequence of messages in any run of this
protocol in case of a broken connection. The messages may be remembered
until the associated key expires, or until some amount of time has elapsed, after
which the new certificate is assumed to have been received. For efficiency, an
acknowledgement message may be added:
Message 5 C -> V : [Ack]_KCV
We will see in the next section why customers will want to protect K audit .
In message 3 of the audit protocol we explicitly use KCV as an encryption key.
In other cases, we encrypted for the vendor using V. (In practice, the symmetric
key KCV would typically be used in favor of the computationally expensive
public key V .) However, it is essential that V not be used in message 3, since
that would allow a subscriber to share his subscription, and produce responses
to audit challenges without revealing his secret K audit to those he shared with.
3.7 Terminating a Subscription
Client initiated termination of a subscription is a variant of certificate redemp-
tion; however, it does not trigger an audit. Termination requires the client to
prove it knows K audit and has an unspent certificate. Termination has the effect
of passing the CreditAuth to the subscriber. V passes CreditAuth but one time.
Message 1 C -> V : {[Request for transaction of type (S Termination), C, [h(N_i)]_KS, N_i, K_audit]_KCV, KCV}_V
Message 2 V -> C : {CreditAuth}_KCV
Refunds may be prorated based on the vendor's policy for early termina-
tion. Should the subscription include multiple chains of certificates (e.g., for a
workstation and a laptop) there should be one CreditAuth per chain.
In message 2, we encrypt using KCV since we do not require that the client
possess a private key.
As before, an acknowledgement message may be added for efficiency:
Message 3 C -> V : [Ack]_KCV
3.8 Recovering from Broken Connections
Protocols that break before the vendor receives the acknowledgement must be
replayed in their entirety (except for the actual transaction which is always
skipped), with the same session key, nonce, and blinding factor. The protocols
are designed not to release any new information when replayed.
Broken protocols are considered automatically acknowledged after some period
of time (i.e., the customer has that much time to recover from a broken
connection). After that period of time, they can no longer be replayed. This
is not crucial for the redemption protocol, but is crucial for the registration
protocol. After that period of time, the subscription may be charged for.
We will consider connection breaks occurring from the end of the protocol to
the beginning. If a connection breaks after a new certificate has been acknowledged
(message 5 in the Certificate Redemption protocol), the client can simply
initiate a new transaction with the new certificate. If a connection breaks after C
receives message 4 but before V receives message 5, the client can again simply
initiate a new transaction.
Before this point in the protocol the client will not yet have received a new
certificate. So, recovering from any connection breaks that occur prior to this
point in the protocol involve replaying the protocol. The vendor should keep a
record of each protocol run until he receives the acknowledgement in message
5. Upon replay, the client presents the same sequence of messages. The vendor
will identify the presented certificate as spent, and consult its recovery database.
If the protocol is recoverable (i.e., has not yet been acknowledged), the vendor
returns the stored response.
If the response in message 2 is Audit , V should keep a record of the protocol
run even if C properly identifies himself upon reestablishing the connection.
It may be that a cheater broke the connection and then quickly notified the
legitimate client of the audit. If some client breaks an audit protocol repeatedly
a vendor may become suspicious and may decide not to renew his certificate.
Notice that the customer need never identify himself when a broken connection
occurs (unless an audit had already been stipulated by the vendor). Thus,
he need not worry about being associated with a given transaction.
Another kind of failure that affects our system is disk crash or other media
failure. It is unrealistic and unreasonable to expect customers to backup copies
of subscription information every time they redeem a certificate. (It is often unrealistic
to expect customers to make backups at all.) Therefore, customers must
be allowed to reinitialize a subscription after a disk crash. How often individuals
will be allowed to reinitialize over the course of a subscription is a policy decision
for individual vendors. Another option is to provide customers with (distinct)
backup initial certificates at registration, just as they may obtain initial certificates
for multiple machines. This allows them to recover from a disk crash
without re-registering (assuming they have kept backups separately); however,
it does provide additional subscription chains for the cost of one subscription.
3.9 Service Key Management
For unlinkable protocols to work, it is important that service keys not be "closely"
associated with clients. For example, we do not want the vendor to be able to
uniquely associate a service key with each client, which would enable the vendor
to associate transactions with clients.
Committing to Service Keys A straightforward technique to overcome this
potential vulnerability requires the vendor to publicly commit to all public authorization
keys. This can be achieved by publishing information, at regular
intervals, at a unique location "well known" to all potential clients of the ser-
vice. An example publication format for each service consists of the service type,
expiration time, and signature confirmation key for signatures associated with
this service.
Subscription Termination Other than as a general security precaution, the
primary reason to change service keys is to facilitate expiration of subscriptions.
When keys expire, our only current mechanism is to have clients obtain new certificates
just as they did when signing up for a service initially. Service expiration
can be structured in several different ways, each with advantages and disadvan-
tages. We will present some of these and briefly mention some of the tradeoffs.
Which is most acceptable will depend on particular aspects of application and
context. For the purposes of discussion let us assume that the standard period
of subscription is one year divided into months.
Subscription Expiry One option is to have annualized keys that start each
month. In other words, there are twelve valid service keys for the same service
at all times. This is convenient for the customer and similar to existing subscription
mechanisms; however, it partitions those using a service into twelve
groups, reducing the anonymity of customers accordingly. This may or may not
be a problem. If subscriptions are annualized to quarters this reduces the threat
to anonymity, but this might still be unacceptable. And, it reduces customer
flexibility about when subscriptions can begin.
An alternative is to have monthly keys good for all subscribers. Subscribers
obtain twelve seed certificates when they subscribe, one for use in each month
of the succeeding year. This does not reduce anonymity as the last option did.
On the other hand, it requires that customers keep track of the multiple certificates
and requires issuing certificates well in advance of their period of eligibility.
From the vendor's perspective, the threat of audit becomes much reduced since
a cheater will lose at most the current month's certificate. Relatedly, it is that
much easier to share a subscription, at least by monthly pieces. Thus, the inconvenience
deterrent is reduced slightly as well.
Another option is to have all subscriptions end in the same month. Someone
subscribing at other than the beginning of the fiscal year would pay a prorated
amount for his subscription. This avoids reductions in anonymity associated
with monthly annualized keys. It also avoids the reduced deterrence to cheating
associated with monthly keys. But, it reduces customer flexibility in choosing
the ending of the subscription. Another disadvantage to this approach is that
subscription renewal is now all concentrated at one point in the year, creating
extremely unbalanced load on the part of the system handling sign up and re-
newal. This would probably remain true even if renewing customers were allowed
to renew in advance. It could be diminished by splitting the year in half or even
further. This creates the partitioning reduction in anonymity already mentioned.
Early Termination of a Subscription Terminating a subscription early requires
proving that the user is a particular subscriber and spending a valid
certificate. He will not get a new one; so, there is no way for him to continue
using the service. Notice that early termination can even be customized, for ex-
ample, so that it is available only to customers who have already subscribed
for at least a year. (Recall that a customer reveals his identity, or pseudonym,
when he terminates early.) Prorating refunds for terminated subscriptions removes
one of the disadvantages of the third option for subscription expiration
described above.
We have been describing subscriber termination of a subscription. Vendor
termination of a particular subscriber or group is far more difficult. (It may
also be less important.) In our current approach the only way to terminate a
subscriber is to change the service key(s) for the remainder of his subscription
and require everyone else to reinitialize their certificates with the new key. This
creates tremendous expense and inconvenience equivalent to what would be necessary
if a service key were compromised.
3.10 Discussion
The protocols presented thus far have limitations in protecting against defraud-
ing the vendor by organizing a service to share subscriptions. It seems doubtful
that a practical solution exists to fully protect against this attack, given our goal
of unlinkable transactions. For example, subscriptions may be shared if the subscriber
runs a subscription proxy server. But, this makes sharing a centralized
activity, with the attendant complexity of running a new business. Such a business
has the overhead and complexity of marketing, advertising, and maintaining
service reliability. Perhaps more importantly, it has the potential disadvantage
of being a focus for legal attention. Finally, the vendor can take action against
the particular shared account if it shows up frequently in an audit.
If the registered subscriber is not running a subscription proxy service but
is lending his unspent certificate, what can be done to make sharing more cen-
tralized? In addition to the mechanisms already in place, the key is to require
intimate contact between the lender and the borrower. Sharing is inherently risky
to the lender because the borrower may never return the subscription. Thus the
lender should require a deposit. However, requiring a deposit or charging for
fraudulent activity has historically been a key element in detecting and limiting
fraud.
Another approach that forces lending to be centralized, which complements
the approach just presented, is to design the protocol so the borrower must
contact the lender on every transaction (if the lender does not want to share
all of his secrets). Currently, a borrower only needs to contact the lender when
audited (to get K audit ). Alternatively, one could modify the protocol to require
K audit to be indirectly present in the first message of every run of the certificate
redemption protocol. For example, the client could send a hash of the spent
certificate, K audit, and a random number in message 1, the latter two of which
must be revealed in the event of audit.
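A minimal sketch of this modification, assuming SHA-256 as the hash; the function names are ours, and the surrounding protocol messages are unchanged.

    import hashlib, os

    def message1_commitment(spent_cert: bytes, k_audit: bytes) -> tuple[bytes, bytes]:
        # Client side: bind K_audit (plus a blinding nonce) to the spent
        # certificate inside message 1 of the redemption protocol.
        nonce = os.urandom(16)
        digest = hashlib.sha256(spent_cert + k_audit + nonce).digest()
        return digest, nonce  # digest is sent; nonce is kept for a possible audit

    def check_on_audit(digest: bytes, spent_cert: bytes,
                       k_audit: bytes, nonce: bytes) -> bool:
        # Vendor side: when the audited client reveals K_audit and the
        # nonce, confirm they match the commitment from message 1.
        return digest == hashlib.sha256(spent_cert + k_audit + nonce).digest()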
4 Applications of Unlinkable Serial Transactions
Until now we have been focused on basic subscription services as the application
of unlinkable serial transactions. We now explore both expansions of the basic
subscription application and other applications as well. We will simply describe
these applications without giving full details on how to adapt the unlinkable
serial transactions for them. Generally, it will be straightforward to see how to
do so.
4.1 Pay-per-use Within a Subscription
Certain transactions may require extra payment by a subscriber. Next, we describe
a means to allow pay-per-use within a subscription. The vendor becomes
a mint for simple, single denomination, digital tokens. The digital tokens are to
digital cash roughly as tokens in a game arcade are to coins. The vendor may
bill for these tokens by credit card, or some other mechanism.
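As a sketch of the vendor-as-mint idea, the following Python models single-denomination tokens as vendor-signed random serials tracked in a spent-set; note that, unlike the blinded certificates of the main protocol, these tokens are not blinded, so this simplification trades away some unlinkability for brevity.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature
    import os

    class TokenMint:
        # Vendor-run mint for simple, single-denomination digital tokens.
        def __init__(self) -> None:
            self._key = Ed25519PrivateKey.generate()
            self._spent: set[bytes] = set()

        def sell_token(self) -> tuple[bytes, bytes]:
            # Billed out of band (credit card, etc.); the token itself is
            # a random serial plus the mint's signature over it.
            serial = os.urandom(16)
            return serial, self._key.sign(serial)

        def redeem(self, serial: bytes, sig: bytes) -> bool:
            if serial in self._spent:
                return False  # double-spending attempt
            try:
                self._key.public_key().verify(sig, serial)
            except InvalidSignature:
                return False
            self._spent.add(serial)
            return True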
During the transaction phase (message 3 in the certificate redemption proto-
col), the client spends previously purchased tokens. How do we guarantee that
the client pays the vendor for the pay-per-use transaction? Either the vendor
never releases the new blinded certificate until he is paid, or we
assume some protocol for fair exchange [10, 3]. The latter choice properly partitions
responsibility without complicating recovery.
There are alternatives to this protocol. For example, certificates could include
a credit balance, which must be periodically paid. Payment would be made as
a transaction. There is no harm in this transaction identifying the customer
because it is only for payment purposes. The main limitation on this approach is
that the credit balance is monotonically increasing. This may allow the vendor
to link transactions and even to tie them to particular customers.
4.2 Third-Party Subscription Management
Vendors may be interested in making available the anonymity afforded by our
approach but may be less enthusiastic about the necessary overhead of maintaining
a subscription, e.g., keeping track of spent certificates. Along with the
ordinary overhead of maintaining subscriptions, handling billing, etc., vendors
may choose to hire out the management of subscriptions. It is straightforward
to have the vendor simply forward transaction requests to a subscription management
service, which then negotiates the business (certificate management)
phase of the protocol with the customer. Once this is completed, the transaction
phase can proceed between the vendor and the customer as usual.
4.3 Multivendor Packages and Discount Services
For multivendor packages one can purchase what is effectively a book of coupons
good at a variety of individual vendors. The way a coupon book would work is
that vendors will authorize the package vendor to issue certificates for their
services. Customers then engage in a protocol to obtain the basic certificates.
If the coupons in the book are meant to be transferable, there is nothing
more to the protocol. If, however, they are not, we must add a serial unlinkable
feature to make sharing more cumbersome. In this case, when a customer submits
a certificate for a service he must also submit a package certificate. The
package certificate must be updated as in the basic protocol. Service certificates
are not to be updated: they can only be redeemed once. Vendors could all be authorized
with the necessary key to update the package certificate. Alternatively,
the processing of the certificates could be handled by the package issuer as in the
third-party application of unlinkable serial transactions just given. Notice that
individual vendors need not be capable themselves of producing coupons for
their own services. It is enough that they can confirm the signatures associated
with their services.
Package books such as just described often offer discounts over vendors' basic
rates as a sales incentive. Another form of discount is one that is made available
to members of some group. Unlinkable serial transactions are useful for allowing
someone to demonstrate such membership without revealing his or her identity.
Depending on the application, the various vendors offering discounts can sign
new certificates or signing can be reserved for some central membership service
in association with any request for discount at a vendor. The latter case is again
similar to the third-party application above.
4.4 Membership and Voting
The example just mentioned shows that the basic idea of unlinkable serial
transactions can have application outside of commercial concerns. Specifically
it should be useful for any application for which membership in some group
must be shown, and where the inconvenience of sharing a serial certificate and
the risk of audit outweighs the advantages of spoofing group membership. These
might include some applications requiring proof of age or residency.
As another example, consider a voter registration certificate. At voting time,
the voter spends his certificate, is issued a new certificate, and votes. The new
certificate is signed by a key that becomes valid after the current voting period
expires, so voters cannot vote twice. In this case, there is no possibility of sharing
the certificate for a single election. If there is concern that formerly eligible
voters continue to vote once their eligibility has expired, certificate keys could
be subject to occasional expiry between elections. Ineligible voters would then
be eliminated since they would be unable to register for new seed certificates.
5 Conclusion
In this paper we have presented a protocol that can be used for unlinkable serial
transactions. The protocol can be used in several types of commercial services,
including unlimited use subscriptions and those incorporating some kind of pay-
per-use transaction. Unlinkable serial transactions can also be used for multivendor
packages and discount services. And, they can be used for non-commercial
applications such as voter registration and proof of group membership. Although
individuals are anonymous during each unlinkable serial transaction, they can
be challenged to produce identification to prevent various kinds of fraud.
Our approach relies on anonymous communication: there is no sense in using
anonymous tokens, pseudonyms, etc., if identities are revealed by the communications
channel. For Web based commerce, the Anonymizer hides the identity of
clients. Onion routing also provides anonymity, but in addition protects against
traffic analysis and hides anonymity even if some of the nodes in the anonymity
service are compromised.
In this paper we have described means to prevent profiling by vendors. But,
profiles may be beneficial to both the customer and vendor, e.g., for marketing
purposes. Indeed, services such as Netangels and Firefly are available that build
customer profiles for this purpose but promise to protect customer privacy. It
might be complicated to incorporate such trusted intermediaries with the protocols
we have presented. But, decentralizing may ultimately provide better assurance
to customers. Profiles can be collected locally at a user's workstation.
This lets individuals control their own profiles. An individual could contact a
marketer through an anonymous connection (cf. Section 2.2) and request advertisements
suited to his profile. Once he closes the connection the marketer can
no longer contact him.
Our approach is based on primitives supporting e-cash but is designed to
function in a credit card type commercial infrastructure as well. By manipulating
what must be trusted and by whom, as compared with their more common
applications, we are also able to simplify the use of such primitives in our protocols
--R
"Trustee-based Tracing Extensions to Anonymous Cash and the Making of Anonymous Change"
"Anonymous Atomic Trans- actions"
"Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms"
"Security without Transaction Systems to Make Big Brother Obsolete"
"Untraceable Electronic Cash"
http://www.
"Mixmaster and Remailer Attacks"
Protection of Location Information in Mobile IP
"Fair Exchange with a Semi-Trusted Third Party"
"Towards Provably Secure Efficient Electronic Cash"
"Mixing Email with Babel"
"Universal Electronic Cash"
Untraceable Communication with Very Small Bandwidth Overhead
"Anonymous Communication and Anonymous Cash"
Secure Electronic Transaction (SET) Specification.
Anonymous Connections and Onion Routing
--TR
transaction systems to make big brother obsolete
Parallel program design: a foundation
Untraceable electronic cash
An identity-based key-exchange protocol
Fair exchange with a semi-trusted third party (extended abstract)
Crowds
Onion routing
Trustee-based tracing extensions to anonymous cash and the making of anonymous change
On secure and pseudonymous client-relationships with multiple servers
Electronic voting
Untraceable electronic mail, return addresses, and digital pseudonyms
Handbook of Applied Cryptography
ISDN-MIXes
CSP and Anonymity
Universal Electronic Cash
Anonymous Communication and Anonymous Cash
A Practical Secret Voting Scheme for Large Scale Elections
Provably Secure Blind Signature Schemes
Unlinkable Serial Transactions
Anonymous Authentication of Membership in Dynamic Groups
Group Principals and the Formalization of Anonymity
Mixing Email with Babel
--CTR
D. Critchlow , N. Zhang, Security enhanced accountable anonymous PKI certificates for mobile e-commerce, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.45 n.4, p.483-503, 15 July 2004
Joachim Biskup , Ulrich Flegel, Threshold-based identity recovery for privacy enhanced applications, Proceedings of the 7th ACM conference on Computer and communications security, p.71-79, November 01-04, 2000, Athens, Greece
Kalvenes , Amit Basu, Design of Robust Business-to-Business Electronic Marketplaces with Guaranteed Privacy, Management Science, v.52 n.11, p.1721-1736, November 2006
Jan Camenisch , Els Van Herreweghen, Design and implementation of the idemix anonymous credential system, Proceedings of the 9th ACM conference on Computer and communications security, November 18-22, 2002, Washington, DC, USA
Marina Blanton , Mikhail J. Atallah, Provable bounds for portable and flexible privacy-preserving access, Proceedings of the tenth ACM symposium on Access control models and technologies, June 01-03, 2005, Stockholm, Sweden
Marina Blanton , Mikhail Atallah, Succinct representation of flexible and privacy-preserving access rights, The VLDB Journal The International Journal on Very Large Data Bases, v.15 n.4, p.334-354, November 2006
Pino Persiano , Ivan Visconti, A secure and private system for subscription-based remote services, ACM Transactions on Information and System Security (TISSEC), v.6 n.4, p.472-500, November
Susan J. Chinburg , Ramesh Sharda , Mark Weiser, Establishing the business value of network security using analytical hierarchy process, Creating business value with information technology: challenges and solutions, Idea Group Publishing, Hershey, PA,
Premkumar T. Devanbu , Stuart Stubblebine, Software engineering for security: a roadmap, Proceedings of the Conference on The Future of Software Engineering, p.227-239, June 04-11, 2000, Limerick, Ireland | cryptographic protocols;anonymity;blinding;unlinkable serial transactions |
330386 | On secure and pseudonymous client-relationships with multiple servers. | This paper introduces a cryptographic engine, Janus, which assists clients in establishing and maintaining secure and pseudonymous relationships with multiple servers. The setting is such that clients reside on a particular subnet (e.g., corporate intranet, ISP) and the servers reside anywhere on the Internet. The Janus engine allows each client-server relationship to use either weak or strong authentication on each interaction. At the same time, each interaction preserves privacy by neither revealing a client's true identity (except for the subnet) nor the set of servers with which a particular client interacts. Furthermore, clients do not need any secure long-term memory, enabling scalability and mobility. The interaction model extends to allow servers to send data back to clients via e-mail at a later date. Hence, our results complement the functionality of current network anonymity tools and remailers. The paper also describes the design and implementation of the Lucent Personalized Web Assistant (LPWA), which is a practical system that provides secure and pseudonymous relations with multiple servers on the Internet. LPWA employs the Janus function to generate site-specific personae, which consist of alias usernames, passwords, and e-mail addresses. | Introduction
We consider the following problem: there is a set
of clients located on a particular subnet and a set
of servers on the Internet. For example, the set
of clients could be employees on a company's intranet
or subscribers of an ISP and the servers could
be Web-sites. See Figure 1, where c i are clients
and s j are servers. A client wishes to establish a
persistent relationship with some (or all) of these
servers, such that in all subsequent interactions (1)
the client can be recognized and (2) either weak or
strong authentication can be used. At the same
time, clients may not want to reveal their true identity
nor enable these servers to determine the set
of servers each client has interacted with so far (es-
tablishing a dossier). This last property is often
called pseudonymity to denote persistent anonymity.
Equivalently, a client does not want a server to infer
through a relationship more than the subnet,
on which the client is located nor connect different
relationships to the same client. This paper introduces
a client-based cryptographic engine, which
allows a client to efficiently and transparently establish
and maintain such relationships using a single
secret passphrase. Finally, we extend our setting to
include the possibility of a server sending data via
e-mail to a client.
We consider the specification and construction of
a cryptographic function that is designed to assist
in obtaining the above goal. Such a function needs
to provide a client, given a single passphrase, with
either a password (weak authentication) or a secret key (strong authentication) for each relation-
ship. Furthermore, a username might be needed
too, by which a client is (publicly) known at a server.
Such passwords, secret keys, and usernames should
neither reveal the client's true identity nor enable
servers to establish a dossier on the client. We name
such a cryptographic function (engine) the Janus
function (engine). We will briefly review that simple
choices for Janus, such as a collision-resistant hash function, are not secure for our purposes and conse-
quently, we will show a Janus function that is more
robust. We will also show how to implement a mailbox
system on the client side, such that a server can
send e-mail to a client without requiring any more
information than for client authentication.
1.1 Related Work and Positioning of
our Work
Network anonymity is being extensively studied
(see, e.g., [PW85, GWB97]). For example, Simon
in [S96] gives a precise definition for an Anonymous
Exchange Protocol allowing parties to send individual
messages to each other anonymously and to reply
to a received message. Implementation efforts
for approximating anonymous networks are being
carried out by several research groups (e.g., anonymous
routing [SGR97] and anonymous Web traffic
[SGR97, RR97]). Besides that, there are several
anonymous remailers available for either e-mail
communication (see, e.g., [GWB97, GT96, B96,
E96]) or Web browsing (see, e.g., [Anon]). We will
discuss some of these in more detail later.
We view our goal as complementary: All of the
above work tries to find methods and systems to
make the Internet an (approximate) anonymous net-
work. This is a hard task and consequently the resulting
tools are rather difficult to use and carry
some performance penalties. We focus on a method
for assisting a client to interact with multiple servers
easily and efficiently, such that the server cannot infer
the identity of the clients among all clients in a
given subnet, but at the same time the client can
be recognized and authenticated on repeat visits.
We do not address communication between a sub-net
and a server. Consequently, a server can easily
obtain the particular subnet in which a client is lo-
cated. In many cases, this degree of anonymity is
sufficient, for example, if the client is a subscriber
of a large ISP, or an employee of a large company.
In the language of Reiter and Rubin ([RR97]), the
anonymity of such a client is somewhere between
probable innocence and beyond suspicion. Alterna-
tively, our method can be used in conjunction with
existing remailers to enable a client to interact with
a server without revealing the particular subnet.
We elaborate on this point in Section 2 for client-initiated
traffic and in Section 4.3 for server initiated
traffic. The work closest in spirit to the Janus engine
are the visionary papers of Chaum [C81, C85]
on digital pseudonyms.
In [GGMM97], we described the design and implementation of a Web-proxy which assists clients with
registering at multiple Web-servers. In this paper,
we focus on a new, simpler, and provably correct
construction of the Janus engine, a new and different
method of conveying anonymous email that
greatly reduces the required trust in the interme-
diary, and a discussion of moving features to shift
trust from a proxy to the client's machine. The latter
allows, for example, to integrate a Janus engine
with the P3P proposal, giving the client the power
to use pseudonymous P3P personae (see Section 5).
Thus, our methods and design are applicable to any
client server interactions, well beyond the proxied
Web browsing for server registration of [GGMM97].
Organization of the rest of the paper: In Section
2 we describe our interaction model and our
function requirements. Section 3 contains a detailed
description of the Janus function. Section 4 extends
the model of interaction to allow servers to send
data to clients' anonymous mailboxes. Section 5
presents various applications and configurations and
discusses some of the trade-offs involved.
Model and Specifications
We present here the framework for interaction between
clients and servers, and the way in which the
Janus engine is incorporated within such interac-
tion. There is a set of clients {c_1 , . . . , c_m} and a set of servers {s_1 , . . . , s_n}. Each
client can interact with any server. Interaction can
take place in one of the following two ways:
ffl Client initiated: A client c i decides to contact a
server s j . The server s j requires c i to present a
username and a password (secret shared key) at
the beginning of this interaction to be used for
identification and weak (strong) authentication
on repeat visits.
ffl Server initiated: A server s j decides to send
some data to a client c i which has contacted s j
at some earlier point in time (using the client's
username).
Individual clients may wish to remain anonymous
in the above interaction; i.e., a client does not want
to reveal his/her real identity c i to a server (beyond
the particular subnet, in which c i resides).
Client-initiated interaction: A client c i , on a
first visit presents to a server s j an alias a i;j , which
includes a username and either a password or a
key. On repeat visits a client simply presents the
password again for weak authentication or uses the
key with a message authentication code (MAC) for
strong authentication (see [MMS97]). We would like
the alias a i;j to depend on c i , s j , and p i , a secret
client passphrase. Since we want this translation of
names to be computable, we define a function which
takes c i , p i and s j , and returns an alias a i;j . This
function is called the Janus function, and is denoted
J . In order to be useful in this context, the Janus
function has to fulfill a number of properties:
[Figure 1: Client-Server Configuration with the Janus Engine on the Gateway.]

1. Form properties: For each server, J provides each client with a consistent alias, so that a client, by giving her unique identity and passphrase, can be recognized and authenticated
on repeat visits. J should be efficiently
computable given c i , p i , and s j . The alias needs
to be accepted by the server; e.g., each of its
components must have appropriate length and
range.
2. Secrecy of
passwords/keys: Alias passwords/keys remain
secret at all times. In particular, an alias username
does not reveal information on any alias
password/key.
3. Uniqueness of aliases among clients & Impersonation
resistance: Given a client's identity
and/or his alias username on a server s j a third
party can guess the corresponding password
only with negligible probability. Moreover, the
distribution of the alias usernames should be
such that only with negligible probability two
different users have the same alias username on
the same server.
4. Anonymity / Uncheckability of clients: The
identity of the client is kept secret; that is,
a server, or a coalition of servers, cannot determine
the true identity of the client from
his/her alias(es). Furthermore, it is not checkable
whether a particular client is registered at
a given server.
5. Modular security & Protection from creation of
dossiers: An alias of a client for one server does
not reveal any information about an alias of
the same client for another server. This also
implies that a coalition of servers is unable to
build a client's profile (dossier) based on the set of servers with which he/she interacted by simply observing and collecting aliases.

[Figure 2: Client-Server Configuration with a Local Janus Engine.]
One possible physical location to implement the
Janus function is on the gateway. See Figure 1,
where we refer to the implementation as the Janus
engine. Clients provide their identity c i and secret
passphrase p i to the gateway, where the translation
takes place. An alternative location for the Janus
engine is on each client's machine, as depicted in
Figure
2, where the locally generated aliases are sent
to the server via the gateway. See Section 5 for a
discussion of trade-offs. The following property is
of practical significance, as it provides robustness
against the possibility to recover privacy-sensitive
information "after the fact":
6. Forward Secrecy: When a client is not interacting
with a server, the Janus engine does
not maintain in memory any information that
may compromise on the above properties of the
Janus function. This excludes the simple approach
of implementing a Janus function by a
look-up table.
Consequently, an entity tapping into a (gateway)
machine on the subnet, cannot infer any useful in-
formation, unless it captures a client's passphrase
(which is never transmitted in Figure 2). Forward
Secrecy also enables a client to use different Janus
engines within her subnet, given that she remembers
her passphrase (mobility).
If a client desires to hide her subnet from a server,
she can easily combine our method with other
anonymity tools. For example, if she contacts a
server via the Web (HTTP), she can use either
Onion Routing ( [SGR97]) or Crowds ( [RR97]). In
the first case, the connection from the gateway to
the server is routed and encrypted similar to the
methods used by type I/II remailers (see also Section
4.3) and in the second case her connection is
"randomly" routed among members (on different
subnets) of a crowd.
Server-initiated interaction: A server knows a
client only by the alias presented in a previous,
client-initiated interaction. We allow a server s j
wishing to send data to client c i , known to it as
a i;j , to send an e-mail message to the corresponding
subnet, addressed to the username component
u of a i;j . The message is received by the Janus en-
gine, see Figure 1, which will make sure that the
message is delivered to the appropriate client, or is
stored by the gateway, until a local Janus engine retrieves
the messages, as in Figure 2. Our scheme of
storing mailboxes maintains forward secrecy. More
details are in Section 4, where it is also shown, how
server-initiated interaction can be combined with
pseudonymous remailers.
3 The Janus function
In this section we present the Janus function in de-
tail. We first develop our requirements, then discuss
some possible constructions.
The Setting of the Janus-function: A client inputs
her identity c i , her secret passphrase p i , the
identity of the server s j , and a tag t indicating
the purpose of the resulting value. Depending on
this tag, the Janus function returns either an alias username a^u_{i,j} for the user c i on the server s j or the corresponding password a^p_{i,j}. Note that it would be easily possible to extend this function to generate
secret values for other purposes (see also [MMS97]).
Adversarial Model: We assume that a client c i
does not reveal her passphrase p i to anyone 1 . How-
ever, we allow that an adversary E can collect pairs
(a^u_{i,j}, a^p_{i,j}) and the corresponding server names s j .
Note that registered alias usernames may be publicly
available on some servers and that we can not
assume that all servers can be trusted or store the
passwords securely. In some cases it might even be
possible to deduce the clients name c i (e.g. from the
data exchanged during a session, or simply because
the client wishes to disclose his identity) and we also
1 except to the Janus engine
have to assume that a chosen message attack is possible
(e.g. by suggesting to a client c i to register
on a specific server). Roughly speaking, we will require
that an adversary does not learn more useful
information from the Janus function than he would
learn if the client chose all his passphrases
and aliases randomly.
3.1 Janus function specifications
We say that a client c i is corrupted if the adversary E has been able to find p i . We say that c i is opened with respect to a server s j if the pair (a^u_{i,j}, a^p_{i,j}) has been computed and used. (Note that if c i has been opened with respect to a server then an adversary E may know only (a^u_{i,j}, a^p_{i,j}), but not necessarily c i .) We say that c i has been identifiably opened with respect to a server s j if an adversary knows (a^u_{i,j}, a^p_{i,j}) together with the corresponding c i .
Let C be the set of clients, S be the set of servers,
P be the set of allowable client secret passwords,
AU be the set of allowable alias usernames, and let
AP be the set of allowable alias passwords. Let
k be the security parameter of our Janus function
meaning that a successful attack requires about 2^k operations on average. Let the Janus function be J : C × P × S × T → {0, 1}^k, where T = {u, p} is the set of tags.
Since usernames and passwords normally consist
of a restricted set of printable characters we also
need two functions that simply convert general k-bit
strings into an appropriate set of ASCII strings.
Thus let U : {0, 1}^k → AU and P : {0, 1}^k → AP be two injective functions that map k-bit strings into the set of allowable usernames and passwords, respectively. The client's identity a^u_{i,j} and password a^p_{i,j} for the server s j are then computed by

  a^u_{i,j} = U(J(c_i, p_i, s_j, u)),
  a^p_{i,j} = P(J(c_i, p_i, s_j, p)).
The two functions U and P are publicly known,
easy to compute and we may assume easy to invert.
Thus knowing U (x) of some x is as good as knowing
x. In particular if an adversary can guess U (x) then
he can guess x with the same probability.
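For concreteness, one plausible choice of U and P (our assumption, not mandated above) maps the k-bit output to restricted character sets via base32 and base64; both maps are injective on fixed-length inputs and easy to invert.

    import base64

    def U(x: bytes) -> str:
        # Injective map into allowable usernames: base32 yields
        # alphanumerics, which most servers accept in usernames.
        return "u" + base64.b32encode(x).decode().rstrip("=").lower()

    def P(x: bytes) -> str:
        # Injective map into allowable passwords (printable ASCII).
        return base64.b64encode(x).decode().rstrip("=")

    print(U(b"\x01" * 16), P(b"\x01" * 16))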
Following our adversarial model, the Janus function
has to satisfy the following requirement:
1. Secrecy: Given a server s j , an uncorrupted and not identifiably opened client c i , and t ∈ {p, u}, the adversary E cannot find J(c_i, p_i, s_j, t) with nonnegligible probability, even under a chosen message attack, that is, under the assumption that the adversary can get J(c_i, p_i, s_{j'}, t') for any s_{j'} ≠ s j .

2. Anonymity: Given a server s j , two uncorrupted clients c_{i_1}, c_{i_2} that are not opened with respect to s j , and t ∈ {p, u}. Then an adversary cannot distinguish J(c_{i_1}, p_{i_1}, s_j, t) from J(c_{i_2}, p_{i_2}, s_j, t), even under a chosen message attack, that is, under the assumption that the adversary can get J applied to any list of arguments not used above.
It should be noted that the two requirements are indeed
different. For example if we were to implement
the function J using a digital signature scheme,
i.e., letting J(c_i, p_i, s_j, t) be c i 's digital signature on the pair (s_j, t), then the first
requirement would be satisfied, but the second one
clearly isn't, since the client's identity could be found by checking signatures. On the other hand a constant
function satisfies the second requirement, but
not the first one.
Moreover, it should be noted that our requirements
are stated in a rather general form. In particular,
the first requirement states that no result of the
Janus function can be derived from other results.
This implies the secrecy of passwords, impersonation
resistance and modular security.
3.2 Possible Constructions for J
Assume that ℓ_c is the maximal bit length of c i , ℓ_s the maximal length of s j , ℓ_p the maximal length of p i , and ℓ_t the number of bits required to encode the tag t. Throughout this section we will assume that all inputs c i , s j , and p i are padded to their maximal length. This will assure that the string c_i || s_j || p_i || t is not ambiguous.
Given the function specification, an ideal construction
would be via a pseudorandom function f : {0, 1}^{ℓ_c + ℓ_s + ℓ_p + ℓ_t} → {0, 1}^k. Unfortunately, there
are no known implementations of pseudorandom
functions. Typically, they are approximated via either
strong hash functions or message authentication
codes (MAC), even though, strictly speaking,
the definitions of these primitives do not require
them to be pseudorandom. In the following sec-
tions, we are going to examine both options and
give some justifications for preferring a MAC based
solution over other tempting constructions.
3.2.1 Using hash functions
One possible attempt might be to use the hash of
the inputs, h(c_i || s_j || p_i || t), as our function. However, hash functions are not designed to keep their inputs secret. Even if it is hard to invert the hash function for a given input, it might still be possible to derive from one value the corresponding values for different servers s_{j'}.
A hash function that is weak in that respect can
for example be found in [A93]. Some apparently
better constructions for keyed functions based on
hash functions have been proposed (e.g., MDx-MAC
[PO95]). But our requirements are quite different
from the goals of these constructions. Therefore,
we decided not to use hash functions for our Janus
function.
3.2.2 MACs
A much more promising approach is the use of message
authentication codes (MACs). In particular if
MAC_K(x) denotes the MAC of the message x under the key K, then we can define a potential Janus function as

  J(c_i, p_i, s_j, t) = MAC_{p_i}(c_i || s_j || t).
This approach has the advantage that some of our
requirements are already met. In particular if the
MAC is secure then the secrecy of passwords and
impersonation resistance for the Janus function are
implied. Other requirements, like consistency, efficient
computation of the function, single secret and
acceptability, are just consequences of the actual implementation
of the Janus function and the mappings
U and P . The only additional requirement
is the anonymity of clients.
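The following Python sketch instantiates this MAC-based construction; HMAC-SHA256 stands in for the MAC (the paper itself argues below for a CBC-MAC over a 128-bit block cipher), and the padding lengths are arbitrary assumptions.

    import hmac, hashlib

    def janus(c: bytes, p: bytes, s: bytes, t: bytes, k: int = 16) -> bytes:
        # J(c_i, p_i, s_j, t): a MAC over the padded string c_i||s_j||t,
        # keyed with the client's passphrase p_i. Padding to fixed
        # lengths keeps the concatenation unambiguous (Sect. 3.2).
        msg = c.ljust(64, b"\x00") + s.ljust(64, b"\x00") + t.ljust(8, b"\x00")
        return hmac.new(p, msg, hashlib.sha256).digest()[:k]

    alias_user = janus(b"alice", b"my passphrase", b"server.example.com", b"u")
    alias_pass = janus(b"alice", b"my passphrase", b"server.example.com", b"p")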
To this end, we consider the following result of Bel-
lare, Kilian, and Rogaway ([BKR94]): Let x = x_1 · · · x_m be a message consisting of m blocks x_i of size ℓ bits. Given a block cipher f_K : {0, 1}^ℓ → {0, 1}^ℓ, where K denotes the key, define the CBC-MAC by

  CBC-MAC_K(x) = f_K(x_m ⊕ f_K(x_{m−1} ⊕ · · · ⊕ f_K(x_1) · · · )).

Assume that an adversary can distinguish MAC_K from a random function with an advantage ε by running an algorithm in time t and making q queries to an oracle that evaluates either MAC_K or the random function. Then the adversary can distinguish f_K from a random function running an algorithm of about the same size and time complexity, having an advantage of ε minus a term of order q^2 m^2 / 2^ℓ. Hence, if we use
CBC-MACs, then anonymity is just a consequence
of [BKR94].
If the underlying block cipher fK behaves like a
pseudorandom function then the above result shows
that a birthday attack is almost the best possible at-
tack. In particular, an attacker cannot do much better than collecting outputs of the function and hoping for an internal collision, i.e., two messages x, y whose CBC computations produce the same chaining value after their first i blocks, for some i < m. In that case the attacker would know that replacing the first i blocks of any message starting with x by the first i blocks of y would result in
another message having the same hash value.
We thus caution that a block cipher with ℓ-bit block size should not be used if an attacker can collect about 2^{ℓ/2} MACs. Concretely, block ciphers having 64-bit blocks, such as DES, triple-DES, or IDEA [LM91], should not be used if it is feasible for an attacker to collect about 2^32 samples, thus giving
only marginal security to the overall scheme.
However, newer block ciphers, such as SQUARE
[DKR97] and one variant of RC5 [R95] have 128-
bit block sizes and are therefore more suitable in
this case.
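A sketch of the CBC-MAC above, using AES (a modern 128-bit block cipher in the spirit of the ciphers just named) via the Python cryptography package; the message must be padded to whole blocks, and, per the caveat above, this basic CBC-MAC is only safe for fixed-length messages.

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cbc_mac(key: bytes, msg: bytes) -> bytes:
        # CBC-MAC_K(x) = f_K(x_m XOR f_K(x_{m-1} XOR ... f_K(x_1) ...)),
        # with f_K = AES under key K and 16-byte (128-bit) blocks.
        assert len(msg) % 16 == 0, "pad the message to a multiple of the block size"
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        state = bytes(16)
        for i in range(0, len(msg), 16):
            block = bytes(a ^ b for a, b in zip(state, msg[i:i + 16]))
            state = enc.update(block)
        return state

    tag = cbc_mac(b"\x00" * 16, b"exactly-16-bytes" * 2)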
Anonymous mailbox system
We first briefly recall the history of anonymous re-
mailers, then show our anonymous mailbox system
and finally indicate how the two can be usefully
combined.
4.1 Brief History of Anonymous E-mail
Tools for anonymous e-mail communication have
been around for a few years by now (see,
e.g., [GWB97, B96, GT96, E96]). Early anonymous
remailers (Type 0, e.g., Anon.penet.fi) accepted
e-mail messages by a user, translated them to a
unique ID and forwarded them to the intended re-
cipient. The recipient could use the ID to reply
to the sender of the message. The level of security
of this type of remailer was rather low, since it
did not use encryption and kept a plain text (trans-
lation) database. A next (and still current) generation
of remailers (Type I, Cypherpunk remail-
ers) simply take a user's e-mail message, strip off
all headers and send it to the intended recipient.
The user can furthermore encrypt the message before
sending it and the remailer will decrypt the
message before processing it. For enhanced secu-
rity, a user can chain such remailers. In order to
use a chain r 1 , r 2 of remailers, a user first encrypts the message for r 2 and then for r 1 (see also the efforts
on Onion Routing, [SGR97]). Still, even such
a scheme is susceptible to traffic analysis, spam and
replay attacks. Mixmaster remailers (Type II) are
designed to withstand even these elaborate attacks.
This development of remailers yields more and more untraceable ways of sending messages, but it gives
no way to reply to a message. This gives rise to
"pseudonymous / nym" remailers, which, in a nut-
shell, work as follows: A user chooses a pseudonym
(nym), which has to be unused (at that remailer).
Then the user creates a public/private key pair for
that nym. When sending a message, the user encrypts
with the server's public key and signs a message
with her private key. The recipient can reply
to the message using the nym. Some remailers store
the message and the original sender can retrieve this
mail by sending a signed command to the remailer,
other remailers directly forward the message by using
a "reply block", an encrypted file with the user's
real email.
The ultimate goal of all these remailers is to enable
e-mail communication as if the Internet were
an anonymous network. This is a very hard task
and consequently these tools induce a performance
penalty and are rather difficult to use.
4.2 Anonymous Mailboxes
In this section, we show how to construct an anonymous
mailbox system within our model. As before,
we assume that the users are in a particular sub-
net. Our goal is to provide these users (clients) with
a transparent way to give e-mail addresses to outside
parties (servers), which maintain the properties
of the aliases (anonymity, protection from dossiers,
etc). For example, a client might want to register
at a (Web-site) server for mailing-lists, personalized
news, etc. Such an e-mail address provides a server
with the means to initiate interaction with a client
by sending an e-mail message to the client.
We first consider a setting with the Janus engine on
the gateway (Figure 1). We propose that the Janus
engine computes "a u
i;j @subnet-domain" as c i 's e-mail
address to be used with s j . We further suggest
storing a mailbox for each such active
the subnet's gateway, such that an owner of a mailbox
is only identified by the respective alias. Messages
are stored in these mailboxes, passively awaiting
clients to access them for retrieval. We require
that (1) given a previous, client-initiated interac-
tion, a server can send data to the mailbox created
for the (client, server) pair, (2) the Janus engine
(upon being presented with client c i 's identity and passphrase p i ) can
retrieve the messages in all of her mailboxes without
remembering a corresponding list of servers, (3)
neither the Janus engine nor the mailboxes compromise
on the "Forward Secrecy" property. We show
that the Janus function can be used to overcome
the apparent contradiction of requirements (2) and
(3). Note that the secrecy of the actual data stored
within a mailbox is an orthogonal issue and can be
solved, for example, by using PGP. For the setting
of a Janus engine on each client (Figure 2), most of
the scheme above remains unchanged with one important
exception: When a client wants to retrieve
her messages, the local Janus engine instructs the
gateway which mailboxes to access, and hence p i
is never revealed to the gateway.
Data Structures for the mailboxes: For an integer n_i ≥ 0, define a^m_{i,n_i} = M(J(c_i, p_i, n_i, m)), where we extend J to have m as a new tag for "mail index", n i an integer indexing c i 's mailboxes, and M a corresponding
injective function to map the output of J into a suitable
range. We explain the extensions in turn below.
The following record R is stored with the
mailbox. R has three fields: (1) R_alias = a^u_{i,j}, (2) R_index = a^m_{i,n_i}, and (3) R_s = s j . The argument n i in (2) indicates the index of the mailbox created for client c i . The record R (and consequently
the mailbox) can be accessed both via R alias
or R index . The R alias field contains the name of the
mailbox that is used for messages sent from s j to the
client c i . A second data structure, stored together
with the mailboxes, holds a counter C i for each of the clients; C i is the number of mailboxes the client c i has established so far. These counters are initialized to 0. The counter itself is indexed by a^m_{i,0}, so that the Janus engine, upon being presented with c i and p i , can find it.
Creating a Mailbox: Whenever the client c i instructs
the Janus engine to give out an e-mail address
for s j , the engine checks if a record R with
R_alias = a^u_{i,j} already exists in the first data structure. If it does not exist, then the engine retrieves the counter C i by accessing the second data structure with the key a^m_{i,0}. If no C i is found, it is initialized to zero. The counter C i is incremented and a new record R is created, with R_alias = a^u_{i,j}, R_index = a^m_{i,C_i}, and R_s = s j . Afterwards, the engine stores the updated value of C i in the second data structure with key a^m_{i,0}. Finally, the Janus engine creates a new mailbox under the name of R_alias.
Retrieving Mail: Whenever client c i connects to
the Janus engine, it will retrieve all of c i 's accumulated
e-mail messages. The engine first retrieves the
counter C i by accessing the second data structure
with the key a^m_{i,0}. Then it retrieves all records R with R_index = a^m_{i,n} for n = 1, . . . , C i . For each such record, the engine collects the contents of the mailbox named R_alias and presents them, together with R_s, to the client c i .
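The following Python sketch puts the two data structures and the two procedures together; it builds on the hypothetical janus() and U from the earlier sketches, uses hex encoding as the injective map M, and feeds the integer index in place of the server argument when computing the mail tags. All of these are our assumptions.

    mailboxes: dict[str, list] = {}   # R_alias -> stored messages
    records:   dict[str, dict] = {}   # keyed by both R_alias and R_index
    counters:  dict[str, int]  = {}   # keyed by a^m_{i,0}

    def M(x: bytes) -> str:
        return x.hex()                # injective map into a suitable range

    def mail_tag(c: bytes, p: bytes, n: int) -> str:
        return M(janus(c, p, str(n).encode(), b"m"))

    def create_mailbox(c: bytes, p: bytes, s: bytes) -> str:
        r_alias = U(janus(c, p, s, b"u"))
        if r_alias not in records:
            c0 = mail_tag(c, p, 0)
            counters[c0] = counters.get(c0, 0) + 1
            rec = {"alias": r_alias, "index": mail_tag(c, p, counters[c0]), "s": s}
            records[rec["alias"]] = rec
            records[rec["index"]] = rec
            mailboxes[r_alias] = []
        return r_alias

    def retrieve_mail(c: bytes, p: bytes) -> list:
        c0 = mail_tag(c, p, 0)
        out = []
        for n in range(1, counters.get(c0, 0) + 1):
            rec = records[mail_tag(c, p, n)]
            out.append((rec["s"], mailboxes[rec["alias"]]))
        return out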
The above scheme constitutes a service to store mail
for any client and allows a client c i to retrieve all her mail upon presenting c i and p i . If c i is uncorrupted and not identifiably opened with respect to server s j , an adversary E cannot do better than guessing the identity of the corresponding mailbox. Further-
more, given any two such mailboxes, E cannot do
better than guessing whether they have the same
owner. This is a simple consequence of the properties
of the Janus function J .
The above system can easily be extended to allow
a client to actively send e-mail to servers using the
Janus engine to generate a different address depending
on the server.
4.3 Combining our Solution with
Pseudonymous Remailers
When we allow the adversary to execute more elaborate
attacks (than we introduced in our model of
Section 3), such as eavesdropping or traffic analysis,
a client visiting several servers within a short period
of time, might become vulnerable to correlation
and building of dossiers (albeit not to compromise of
anonymity). Also, if a client happens to reside on a
small subnet, the subnet's population might not be
large enough to protect her identity. In these cases,
it makes sense to combine our method with anonymous
remailers or routing (for Web traffic) for enhanced
protection: We can view the Janus engine as
a client's "front end" to a pseudonymous remailer.
It computes the different nyms on a client's behalf
and presents them to the remailer. It manages all
the client's mailboxes and presents incoming messages
to the client. It also manages a client's pub-
lic/private keys for each nym. Furthermore, even
the remailer closest to the client (of a possible chain)
can neither infer the client's identity nor correlate
different aliases. All this remailer sees (when decrypting
a reply block) is the client's alias e-mail
address.
5 Trade-Offs and Applications
In this section we examine the trade-off between the
configurations corresponding to Figure 1, which we
refer to as the gateway approach and to Figure 2,
which we refer to as the local approach. We then
present a few concrete applications.
The basic advantage of the local approach is that
the Janus functionality is pulled all the way to
the client's machine, minimizing outside trust.
Thus, the client does not have to reveal her secret
passphrase to another machine (the gateway). A
client also has the flexibility to choose a mailbox location
outside her own subnet, minimizing the trust
in the subnet (e.g., the client's ISP). There are also a
number of scenarios, where the Janus functionality
is required to be on the client's machine: For exam-
ple, in the realm of Web browsing, the Janus engine
can be integrated with the Platform for Privacy Preferences
standard proposal to make a P3P persona
(see [P3P]) pseudonymous: P3P enables Web
sites to express privacy practices and clients to express
their preferences about those practices. A P3P
interaction will result in an agreement between the
service and the client regarding the practices associated
with a client's implicit (i.e., click stream) or
explicit (i.e., client answered) data. The latter is
taken from data stored in a repository on the client's
machine, so that the client need not repeatedly enter
frequently solicited information. A persona is
the combination of a set of client preferences and
P3P data. Currently, P3P does not have any mechanisms
to assist clients to create pseudonymous per-
sonae. For example, a client can choose whether to
reveal his/her real e-mail address, stored in the the
repository. If the e-mail address is not revealed, the
Web-site cannot communicate with the client and
if the e-mail address is indeed revealed, the Web-site
has a very good indication on the identity of
the visitor. Using a Janus engine provides a new
and useful middle ground: The data in repository
corresponding to usernames, passwords, e-mail ad-
dresses, and possibly other fields can be replaced by
macros which, by calling the Janus engine, expand
to different values for different Web-sites and thus
create a pseudonymous persona for the client.
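A sketch of such macro expansion; the @JANUS_...@ macro syntax and the repository fields are invented for illustration, and janus(), U, and P refer to the earlier sketches.

    repository = {
        "username": "@JANUS_USER@",
        "password": "@JANUS_PASS@",
        "email":    "@JANUS_USER@@subnet.example.com",
    }

    def expand(field: str, c: bytes, p: bytes, site: bytes) -> str:
        # Expand the stored macro into a site-specific value, so each
        # Web-site sees a different, unlinkable persona.
        value = repository[field]
        value = value.replace("@JANUS_USER@", U(janus(c, p, site, b"u")))
        value = value.replace("@JANUS_PASS@", P(janus(c, p, site, b"p")))
        return value

    print(expand("email", b"alice", b"my passphrase", b"site-a.example.com"))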
For the case of the gateway approach, we note that
the Janus engine does not have to be distributed
throughout the subnet. Thus, the clients do not
have to download or install any software and no
maintenance is required, also giving scalability: when
the population in the subnet grows, it enables to
easily add gateway machines (helped by Forward
Secrecy property). The proxy might also provide
alias management capabilities in the case where the
gateway is for a corporate intranet: Such capabilities
might include allowing two clients to share their aliases
for all the servers, a client to transfer one or more of
his/her aliases to another client, or even two clients
to selectively share some of their aliases. For ex-
ample, when going on vacation, a manager might
use such functionality to have an assistant take over
some of his daily correspondence. Such alias management
functions have the potential to considerably
simplify login account and e-mail management
in big intranets. We note that to achieve this po-
tential, state has to be added to the proxy design,
which goes beyond the scope of this paper.
5.1 Applications
Web browsing. There is a growing number of
web-sites that allow, or require, users to establish
an account (via a username and password) before
accessing the information stored on that site. This
allows the web-site to maintain a user's personal
preferences and profiles and to offer personalized
service. The Lucent Personalized Web Assistant is
an intermediary Web proxy, which uses a Janus engine
to translate a user's information (user's e-mail
and passphrase) into an alias (username, password,
email) for each web-site. Moreover, this alias is
also used by the web-site to send e-mail back to
a user. More details of this work can be found
in [GGMM97] and at http://lpwa.com:8000/. The
intended configuration for this project is the gateway
approach of Figure 1. We note that such
concrete applications typically execute in conjunction
with many other mechanisms. For instance,
Web browsing based on the HTTP protocol in-
terfaces, among others, with SSL for encrypting
the communication and with Java and JavaScript
for downloadable executables. Each such interface
can potentially undermine the pseudonymity
of the client-server interaction. In the case of SSL,
the proxy can spoof SSL on behalf of the internal
client (see [SSL-FAQ]). The proxy can initiate
SSL between itself and other servers and thus
maintain the client's pseudonymity. Both Java applets
and JavaScript scripts, when downloaded from
a server by a client, can potentially obtain compromising
client information. Research is being conducted
which might lead to include customizable security
policies into these languages (see [GMPS97,
AM98]). A client can then choose a policy strict
enough to preserve his/her pseudonymity. Another
approach is to bundle an LPWA proxy with
an applet/script blocking proxy, as described, e.g.,
in [MRR97]. In summary, it is necessary to consider
all possible interfaces, and offer encompassing
solutions to clients.
Authenticated Web-traffic. Consider a Web
site which offers repeated authenticated personalized
stock quotes to each of its subscribers. The value
of a single transaction (e.g., delivery of a web-page
with a customized set of quotes) does not warrant
the cost of executing a handshake and key distribution
protocol. A lightweight security framework for
extended relationships between clients and servers
was recently proposed [MMS97]. The Janus engine
provides a persistent client-side generated shared
key for each server, used in application-layer primi-
tives. Hence, no long-term secure memory is needed
on the client-side, enabling scalability and mobility.
Acknowledgments
We thank David M. Kristol for his insights and for
his many contributions to the design implementation
of LPWA, which uses the Janus engine. We
are grateful to Russell Brand for thought-provoking
discussions.
--R
The classification of hash func- tions
Security of Web Browser Scripting Languages: Vulnerabilities
The Anonymizer.
FAQ, http://wwww.
The security of cipher block chaining
Untraceable Electronic Mail
Transaction systems to make big brother obso- lete
The block cipher SQUARE
Anonymity and Privacy on the Internet.
How to make personalized Web browsing simple
Mixing email with ba- bel
Blocking Java Applets at the Firewall
Networks without user observability - design options
P3P Architecture Working Group
The RC5 encryption algorithm
Crowds: Anonymous Web Transactions.
Springer Verlag LNCS 1109
IEEE Symposium on Security and Privacy
--TR
transaction systems to make big brother obsolete
Networks without user observabilityMYAMPERSANDmdash;design options
Crowds
Onion routing
Consistent, yet anonymous, Web access with LPWA
The platform for privacy preferences
Untraceable electronic mail, return addresses, and digital pseudonyms
The Security of Cipher Block Chaining
MDx-MAC and Building Fast MACs from Hash Functions
Anonymous Communication and Anonymous Cash
How to Make Personalized Web Browising Simple, Secure, and Anonymous
Privacy-enhancing technologies for the Internet
Mixing Email with Babel
Blocking Java Applets at the Firewall
--CTR
Robert M. Arlein , Ben Jai , Markus Jakobsson , Fabian Monrose , Michael K. Reiter, Privacy-preserving global customization, Proceedings of the 2nd ACM conference on Electronic commerce, p.176-184, October 17-20, 2000, Minneapolis, Minnesota, United States
Blake Ross , Collin Jackson , Nick Miyake , Dan Boneh , John C. Mitchell, Stronger password authentication using browser extensions, Proceedings of the 14th conference on USENIX Security Symposium, p.2-2, July 31-August 05, 2005, Baltimore, MD
Jasmine Novak , Prabhakar Raghavan , Andrew Tomkins, Anti-aliasing on the web, Proceedings of the 13th international conference on World Wide Web, May 17-20, 2004, New York, NY, USA
Stuart G. Stubblebine , Paul F. Syverson , David M. Goldschlag, Unlinkable serial transactions: protocols and applications, ACM Transactions on Information and System Security (TISSEC), v.2 n.4, p.354-389, Nov. 1999 | privacy;pseudonym;persistent relationship;janus function;mailbox;anonymity |
330390 | Strength of two data encryption standard implementations under timing attacks. | We study the vulnerability of two implementations of the Data Encryption Standard (DES) cryptosystem under a timing attack. A timing attack is a method, recently proposed by Paul Kocher, that is designed to break cryptographic systems. It exploits the engineering aspects involved in the implementation of cryptosystems and might succeed even against cryptosystems that remain impervious to sophisticated cryptanalytic techniques. A timing attack is, essentially, a way of obtaining some user's private information by carefully measuring the time it takes the user to carry out cryptographic operations. In this work, we analyze two implementations of DES. We show that a timing attack yields the Hamming weight of the key used by both DES implementations. Moreover, the attack is computationally inexpensive. We also show that all the design characteristics of the target system, necessary to carry out the timing attack, can be inferred from timing measurements. | INTRODUCTION
An ingenious new type of cryptanalytic attack was introduced by Kocher in [Kocher
1996]. This new attack is called timing attack. It exploits the fact that cryptosystems
often take slightly different amounts of time on different inputs. Kocher gave
several possible explanations for this behavior, among these: branching and conditional
statements, RAM cache hits, processor instructions that run in non-fixed
time, etc. Kocher's most significant contribution was to show that running time
differentials can be exploited in order to find some of a target system's private infor-
mation. Indeed, in [Kocher 1996] it is shown how to cryptanalyze a simple modular
exponentiator. Modular exponentiation is a key operation in Diffie-Hellman's key
exchange protocol [Diffie and Hellman 1976] and the RSA cryptosystem [Rivest
et al. 1978]. A modular exponentiator is a procedure that on inputs k, n ∈ N, and y outputs (y^k mod n). In the cryptographic protocols mentioned
above n is public and k is private. Kocher reports that if a passive eavesdropper can
measure the time it takes a target system to compute (y^k mod n) for several inputs
y, then he can recover the secret exponent k. Moreover, the overall computational
effort involved in the attack is proportional to the amount of work done by the vic-
tim. For concreteness sake and clarity of exposition we now describe the essence of
Kocher's method for recovering the secret exponent of the fixed-exponent modular
exponentiator shown in Fig. 1.
Code:
Let k = k_l k_{l−1} · · · k_0 be k in binary
z ← 1
For i = l down to 0 do
  z ← z^2 mod n
  If k_i = 1 then z ← (z · y) mod n
Output: z.
Fig. 1. Modular exponentiator.
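For reference, a runnable Python transcription of Fig. 1 (our own rendering of the pseudocode):

    def modexp(y: int, k: int, n: int) -> int:
        # Square-and-multiply over the exponent bits, most significant
        # first. The data-dependent multiplication in the if-branch is
        # what a timing attack exploits.
        z = 1
        for i in range(k.bit_length() - 1, -1, -1):
            z = (z * z) % n
            if (k >> i) & 1:
                z = (z * y) % n
        return z

    assert modexp(7, 13, 61) == pow(7, 13, 61)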
The attack allows someone who knows k_l · · · k_t to recover k_{t−1}. (To obtain the entire exponent the attacker starts with t = l and repeats the attack until t = 0.) The attacker first computes the first l − t + 1 iterations of the for loop. The next iteration requires the first unknown bit k_{t−1}. If the bit is set, then the operation z ← (z · y) mod n is performed, otherwise it is skipped. Assume that each timing observation corresponds to an observation of a random variable T = e + Σ_{i=0}^{l} t_i, where t_i is the time required for the multiplication and squaring steps corresponding to the bit k_{l−i} and e is a random variable representing measurement error, loop overhead, etc. An attacker that correctly guesses k_{t−1} may factor out of T the effect of t_0, . . . , t_{l−t+1} to obtain an adjusted random variable of known variance (provided
the times needed to perform modular multiplications are independent from each
other and from the measurement error). Incorrect guesses will produce an adjusted
random variable with a higher variance than the one expected. Computing the
variance is easy provided the attacker collects enough timing measurements. The
correct guess will be identified successfully if its adjusted values have the smaller
variance.
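A sketch of this variance test; timings and simulate_known are hypothetical stand-ins for the attacker's measurements of T and for a model of the time consumed by the already-known iterations plus the guessed bit.

    from statistics import variance

    def guess_bit(timings, simulate_known):
        # timings: list of (y, T) pairs, T the measured total time for input y.
        # simulate_known(y, guess): modeled time of the known iterations when
        # the unknown bit k_{t-1} is assumed to equal `guess`.
        best, best_var = None, float("inf")
        for guess in (0, 1):
            adjusted = [T - simulate_known(y, guess) for y, T in timings]
            v = variance(adjusted)
            if v < best_var:          # the correct guess minimizes variance
                best, best_var = guess, v
        return best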
In theory, timing attacks can yield some of a target system's private information.
In practice, in order to successfully mount a timing attack on a remote cryptosystem
a prohibitively large number of timing measurements may be required in order to
compensate for the increased uncertainty caused by random network delays. Nev-
ertheless, there are situations where we feel it is realistic to mount a timing attack.
We now describe one of them. Challenge-response protocols are used to establish
whether two entities involved in communication are indeed genuine entities and can
thus be allowed to continue communication with each other. In these protocols one
entity challenges the other with a random number on which a predetermined calculation
must be performed, often including a secret key. In order to generate the
correct result for the computation the other device must possess the correct secret
key and therefore can be assumed to be authentic. Many smart cards, in particular
dynamic password generators (tokens) and electronic wallet cards, implement
challenge-response protocols (e.g. the message authentication code generated according
to the ANSI X9.26 [Menezes et al. 1997, page 651] standard). It is expected
that extensive use will be made of smart cards based on general purpose programmable
integrated circuit chips. Thus, the specific functionality of each smart card
will be achieved through programming. The security of these smart cards will be
provided using tamper-proof technology and cryptographic techniques. The above
described scenario is an ideal setting in which to carry out a timing attack. The
widespread availability of a particular type of card will make it easy and inexpensive
to determine the timing characteristics of the system on which to mount the attack.
Later, precise timing measurements (obtained, e.g., by monitoring or altering
a card reader or by gaining possession of a card) could be used to retrieve some of
the secret information stored in the card by means of a timing attack. Thus, cards
that implement challenge-response protocols where master keys are involved could
give rise to a security problem. (See [Dhem et al. 1998] for a discussion of a practical
implementation of a timing attack against an earlier version of the CASCADE
smart card.)
New unanticipated strains of timing attacks might arise. Hence, timing attacks
should be given some serious consideration. This work contributes, ultimately, to
furthering our understanding of the strengths of this new cryptanalytic technique,
the weaknesses it exploits, and the ways of eliminating the possibility of it becoming
practical.
Kocher implemented the attack against the Diffie-Hellman key exchange proto-
col. He also observed that timing attacks could potentially be used against other
cryptosystems, in particular against the Data Encryption Standard (DES). This
claim is the motivation for this work.
2. SUMMARY OF RESULTS AND ORGANIZATION
We study the vulnerability of one of the most widely used cryptosystems in the
world, DES, against a timing attack. The starting point of this work is the observation
of Kocher [Kocher 1996] that in DES's key schedule generation process
rotating nonzero 28-bit C and D values using a conditional statement which tests
whether a one-bit must be wrapped around could be a source of non-constant encryption
running times. Hence, he conjectured that a timing attack against DES
could reveal the Hamming weight of the key. 1 We show that although Kocher's
observation is incorrect (for the DES implementations that we analyzed), his conjecture
is true. But, we do more.
In Sect. 3 we give a brief description of DES.
In Sect. 4.1 we describe a timing attack against DES that assumes the attacker
knows the target system's design characteristics. We first discuss experimental
results that show that a computationally inexpensive timing attack against two
implementations of DES could yield enough information to recover the Hamming
weight of the DES key being used. Hence, assuming the DES keys are randomly
chosen, an attacker can recover approximately 3.95 bits of key information. To the
best of our knowledge, this is the first implementation of a timing attack against a
symmetric cryptosystem. (Since the preliminary version of this work appeared two
timing attacks against RC5 have been reported [Handschuh and Heys 1999].) In
Sect. 4.1.1 we describe computational experiments that measure the threat implied
by an actual implementation of a timing attack against DES.
Recovering 3.95 bits of a DES key is a modest improvement over brute force
search. But, recovering the Hamming weight of the key is, potentially, more
threatening. In particular, an adversary can restrict attention to keys determined
to have either a significantly low or high Hamming weight. Although such keys may
be rare, once the adversary determines that one such key is being used, the ensuing
search may be significantly sped up. Thus, the adversary can balance the time
to find such rare keys with the time needed for key recovery. In some systems, even
the recovery of a single (although rare) key may be of serious concern.
In Sect. 4.1.2 we identify the sources of the dependencies between the encryption
time and the key's Hamming weight in the implementations of DES that we studied.
The most relevant are conditional statements.
In both DES implementations that we analyzed the encryption time T is roughly
equal to a linear function of the key's Hamming weight X plus some normally
distributed noise e. Since a DES key is a 56 bit long string and keys are chosen
uniformly at random in the key space, we have that X ~ Binom(56, 1/2). 2 Thus,
for some α, β, and σ, T = α + β·X + e, where e ~ Norm(0, σ^2).
In Sect. 4.2 we show that it is not necessary, in order to perform a timing attack
against DES, to assume that the design characteristics of the target system are
known. Indeed, we propose two statistical methods whereby a passive eavesdropper
can infer from timing measurements all the target system's design information
required to successfully mount a timing attack against DES. To the best of our
knowledge, this is the first proof that it is possible to infer a target system's design
characteristics through timing measurements.
We would like to stress that all of the timing attacks described in this work
only require precise measurements of encryption times but no knowledge of the
encrypted plaintexts or produced ciphertexts.
1 Recall that the Hamming weight of a bitstring equals the number of its bits that are nonzero.
2 Recall that the distribution Binom(N, p) corresponds to the distribution of the sum of N
independent identically distributed {0,1}-random variables with expectation p.
In Sect. 5 we propose a "blinding technique" that can be used to eliminate almost
all of the execution time differentials in the analyzed DES implementations. This
blinding technique makes both DES implementations that we study impervious to
the sort of timing attack we describe in this work. Finally, we discuss under which
conditions all, and not only the Hamming weight of, a DES key might be recovered
through a timing attack.
2.1 Related Work
Modern cryptography advocates the design of cryptosystems based on sound mathematical
principles. Thus, many of the cryptosystems designed over the last two
decades can be proved to resist many sophisticated, mathematically based, cryptanalytic
techniques (provided one is willing to accept some reasonable assumptions).
Traditionally, the techniques used to attack such cryptosystems exploit the algorithmic
design weaknesses of the cryptosystem. On the other hand, timing attacks take
advantage of the decisions made when implementing the cryptosystems (especially
those that produce non-fixed running times). But, timing attacks are not the only
type of attacks that exploit the engineering aspects involved in the implementation
of cryptosystems. Indeed, recently Boneh, Lipton, and DeMillo [Boneh et al. 1997]
introduced the concept of fault tolerant attacks. These attacks take advantage of
(possibly induced) hardware faults. Boneh et al. point out that their attacks show
the danger that hardware faults pose to various cryptographic protocols. They conclude
that even sophisticated cryptographic schemes sealed inside tamper-resistant
devices might leak secret information.
A new strain of fault tolerant attacks, differential fault analysis (DFA), was proposed
by Biham and Shamir [Biham and Shamir 1997]. Their attack is applicable
to almost any secret key cryptosystem proposed so far in the open literature. DFA
works under various fault models and uses cryptanalytic techniques to recover the
secret information stored in tamper-resistant devices. In particular, Biham and
Shamir show that under the same hardware fault model considered by Boneh et al.,
the full DES key can be extracted from a sealed tamper-resistant DES encryptor
by analyzing between 40 and 200 ciphertexts generated from unknown but related
plaintexts. Furthermore, in [Biham and Shamir 1997] techniques are developed to
identify the keys of completely unknown ciphers sealed in tamper-resistant devices.
The new type of attacks described above have received widespread attention (see
for example [English and Hamilton 1996; Markoff 1996]).
3. THE DES CRYPTOSYSTEM
DES is the most widely used cryptosystem in the world, especially among financial
institutions. It was developed at IBM and adopted as a standard in 1977 [NBS
1977]. It has been reviewed every five years since its adoption.
DES has held up remarkably well against years of cryptanalysis. But, faster and
cheaper processors make it possible, using current technology, to build a reasonably priced
special purpose machine that can recover a DES key within hours [Stinson 1995,
pp. 82-83]. For concreteness sake, we provide below a brief description of DES. For
a detailed description see [NBS 1977]. More easily accessible descriptions of DES
can be found in [Schneier 1996; Stinson 1995].
DES is a symmetric or private-key cryptosystem, i.e., a cryptosystem where the
[Diagram omitted: plaintext and key flowing through an initial permutation, 16 key-schedule-driven iterations, and a final permutation.]
Fig. 2. DES encryption process.
parties that wish to use it must agree in advance on a common secret key which
must be kept private. DES encrypts a message (plaintext) bitstring of length 64
using a bitstring key of length 56 and obtains a ciphertext bitstring of length 64.
It has three main stages. In the first stage the bits of the plaintext are permuted
according to a fixed initial permutation. In the second stage 16 iterations of a
certain function are successively applied to the bitstring resulting from the first
stage. In the final stage the inverse of the initial permutation is applied to the
bitstring obtained in the second stage.
The strength of DES resides on the function that is iterated during the encryption
process. We now give a brief description of this iteration process. The input to
iteration i is the output bitstring of iteration i - 1 together with a 48 bit long string, K_i.
Actually, each K_i is a permuted selection of bits from the DES key. The strings
K_1, ..., K_16 form what is called the key schedule. During each iteration a 64
bit long output string is computed by applying a fixed rule to the two input strings.
The encryption process is depicted in Fig. 2.
Decryption is done with the same encryption algorithm but using the key schedule
in reverse order K_16, ..., K_1.
The best traditional cryptanalytic attacks known against DES are due to Biham
and Shamir [Biham and Shamir 1991; Biham and Shamir 1993] and Matsui [Matsui
1994a; Matsui 1994b]. However, they are not considered a threat to DES in practical
environments (see [Menezes et al. 1997, pp. 258-259]).
4. TIMING ATTACK OF DES
We now consider the problem of recovering the Hamming weight of the DES key
of a target system by means of a timing attack. We first address the problem, in
Sect. 4.1, assuming the attacker knows the design of the target system. We then
show, in Sect. 4.2, that this assumption can be removed.
4.1 Timing Characteristics of Two Implementations of DES
We studied the timing characteristics of two implementations of DES. The first
one was obtained from the RSAEuro cryptographic toolkit [Kapp 1996], henceforth
referred to as RSA-DES. The other implementation of DES that we looked at was
one due to Louko [Louko 1992], henceforth referred to as L-DES. We studied both
implementations on a 120-MHz Pentium TM computer running MSDOS TM . The
advantage of working in an MSDOS TM environment is that it is a single-process
operating system. This facilitates carrying out timing measurements since there are
[Plot omitted: average encryption and key schedule generation times versus the Hamming weight of the key.]
Fig. 3. RSA-DES.
[Plot omitted: average encryption and key schedule generation times versus the Hamming weight of the key.]
Fig. 4. L-DES.
no other interfering processes running and there are less operating system maintenance
tasks being performed. We measured time in microseconds (μs).
In our first experiment we fixed the input message to be the bitstring of length
64 all of whose bits are set to 0. For each i ∈ {0, ..., 56} we randomly chose
keys of Hamming weight i. For each selected key we encrypted the message a number
of times. During each encryption we measured the time it took to generate the key
schedule and the total time it took to encrypt the message. The plots, for each
of the implementations that we looked at, of the average (for each key) encryption
and key schedule generation times are shown in Fig. 3 and Fig. 4.
Only obvious outliers were eliminated. In fact the only outliers that we noticed
appeared at fixed intervals of 2 clock ticks. These outliers were caused by system
maintenance tasks.
A randomly chosen DES key has a Hamming weight between 23 and 33 with
probability approximately 0.86. Thus, the most relevant data points shown in
Fig. 3 and Fig. 4 are those close to the middle of the plots.
For various keys chosen at random we performed repeated measurements (for each
key) of the encryption and key schedule generation times. After discarding obvious
outliers we graphed the empirical frequency distributions of the collected data. The
empirical distributions we observed were roughly symmetric and concentrated in a
few contiguous values (usually three or four). This concentration of values is due to
the fact that we were only able to perform time measurements with an accuracy of
0.8381 μs and that time differentials among encryptions performed under the same
key were rarely larger than 3.0 μs. (For an explanation of how to measure time with
this precision on an MSDOS TM environment see Appendix A). The above suggests,
as one would expect, that the variations on the running time observed when the
same process is executed many times over the same input are due to the effect of
normally distributed random noise.
For different values of i we randomly chose 2^8 keys of Hamming
weight i. After throwing away outliers we graphed the empirical frequency distributions
of the collected data. The empirical frequencies observed looked like
normal distributions with small deviations (typically 1.2 μs for L-DES and 1.8 μs
for RSA-DES). We conclude that the encryption and key schedule
generation times are mostly determined by
the total number of bits of the key that are set and not by the positions where these
set bits occur. Thus, the effect of which bits are set among keys of the same Hamming
weight is negligible.
We repeated all the experiments described so far but instead of leaving the input
message fixed we chose a new randomly selected message at the start of each encryption
process. All the results reported above remained (essentially) unchanged.
There was only a negligible increase in the measured deviations.
Assuming that the attacker knows the design of the target system, he can build
on his own a table of the average encryption time versus the Hamming weight of
the key. The clear monotonically increasing relation between the encryption time
and the Hamming weight of the key elicited by our experiments is a significant
implementation flaw. It allows an attacker to determine the Hamming weight of the
DES key. Indeed, the attacker has to obtain a few encryption time measurements
and look in the table he has built to determine the key's Hamming weight from
which such time measurements could have come. Thus, the attacker can recover
H(X) ≈ 3.95 bits of key information, where X is the key's Hamming weight (H denotes the entropy function).
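The information count is easy to reproduce; the following Python lines (added here as an illustration; avg_time stands for the attacker's empirically built table) compute the entropy of the weight of a uniform 56-bit key and the table lookup itself:

from math import comb, log2

p = [comb(56, k) / 2**56 for k in range(57)]
print(round(-sum(q * log2(q) for q in p if q > 0), 2))   # approximately 3.95 bits

def weight_from_time(t, avg_time):
    # avg_time[i]: attacker's table of mean encryption time for weight i
    return min(range(57), key=lambda i: abs(t - avg_time[i]))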
Remark 1. A precise estimation of the Hamming weight of the DES key can be
achieved by means of a timing attack if two situations hold. First, accurate time
measurements can be obtained. Second, the variations in the encryption and key-schedule
generation time produced by different keys with identical Hamming weight
is small compared to the time variations produced by keys with one more or one less
set bit. We have noticed that the latter situation approximately holds. An exact
estimation of the Hamming weight of the DES key can be achieved if the attacker can
accurately perform time measurements of several encryptions of the same plaintext.
But, this requires a more powerful attacker, one that should be capable of fixing the
input message fed into the encryption process.
Input: τ, M, C, where τ is the time it takes DES to generate ciphertext C from message M.
Code: For i = 0 to 56 do
Let l be such that T_l is the (i + 1)-st closest value to τ. 3
Let K_l = {K ∈ {0,1}^56 : wt(K) = l}.
Randomly choose m in {0, ..., |K_l| - 1}.
For j = 0 to |K_l| - 1 do
Let K be the ((m + j) mod |K_l|)-th lexicographically first elem. of K_l.
If (DES encryption of M under key K yields C) then return(K).
Fig. 5. Key recovery procedure based on a timing attack that reveals the Hamming weight of
the key.
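A compact Python rendering of the procedure of Fig. 5, run on a toy 12-bit XOR "cipher" so that the loops actually terminate (the cipher, the timing table T, and the noise level are stand-ins we invented for illustration):

import random

NBITS = 12
secret = random.getrandbits(NBITS)

def encrypt(m, k):                       # toy stand-in for DES
    return m ^ k

def keys_of_weight(l):                   # K_l in lexicographic order
    return [k for k in range(2**NBITS) if bin(k).count("1") == l]

def recover(tau, M, C, T):
    for l in sorted(range(NBITS + 1), key=lambda i: abs(tau - T[i])):
        pool = keys_of_weight(l)         # weight classes, closest expected time first
        m = random.randrange(len(pool))
        for j in range(len(pool)):
            K = pool[(m + j) % len(pool)]
            if encrypt(M, K) == C:
                return K

T = [100 + 3 * l for l in range(NBITS + 1)]     # assumed linear timing table
tau = T[bin(secret).count("1")] + random.gauss(0, 1)
M = random.getrandbits(NBITS)
assert recover(tau, M, encrypt(M, secret), T) == secret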
More remarkable than the established monotonically increasing relation between
the encryption times and the Hamming weight of the key is the linear dependency
that exists between the two measured quantities. The correlation factors for the
data shown in Fig. 3 and Fig. 4 are 0.9760 and 0.9999 respectively. The sharp linear
dependency between encryption times and Hamming weight allows an attacker to
infer the target system's information which is required to carry out the attack
described above. This topic is discussed in the next section.
4.1.1 Experimental Results. In this section we describe a computational experiment
that shows the expected reduction in the size of the key space search that
would be achieved by the implementation of the timing attack described in the
previous section.
Assume that for every i ∈ {0, ..., 56} we have a T_i ∈ R corresponding to the
expected time it takes the target DES implementation to encrypt a message with
a key of Hamming weight i. Furthermore, assume that T_0 < T_1 < ... < T_56 (this is supported
by our experimental observations). Consider the procedure of Fig. 5 for recovering
the DES encryption key through a timing attack that exploits the facts reported
in Sect. 4.1. Note that it is possible to experimentally determine the expected
number of keys that this procedure would try without having to actually execute it.
Indeed, if the DES encryption of plaintext M under key K generates the ciphertext
C in τ μs, then the expected size of the key space searched by the given procedure
is determined by τ and the weight of K.
For both DES implementations we studied we randomly chose DES message/key
pairs, measured the encryption time, and computed the expected number of keys
that the procedure of Fig. 5 would have tried before finding the correct encryption
key. From our discussion of Sect. 4.1 it follows that the best that one can hope
for is to have to try half of the keys whose Hamming weight equals that of the
correct encryption key. This corresponds to 3.24 percent of all the key space, since
2^{Σ_{k=0}^{56} p_k log_2 p_k} / 2 ≈ 0.0324. We found that for RSA-DES 5.30
3 In the (unlikely) event that l is not uniquely defined, perturb τ by a value uniformly chosen in
the interval [-φ, φ], where φ is tiny compared to the precision of the timing measurements.
percent of the key space would have been searched, on average, before finding the
correct encryption key. For L-DES, the percentage goes down to 3.84 percent.
Table 1. Results of computational experiment.
Table 1 shows in more detail some of the data collected in our experiments.
Columns are labeled according to the weight of the DES key. We denote the weight
of a key by k. The second row represents the percentage of the total key space
corresponding to DES keys of Hamming weight k (with a precision of 0.0005). We
denote this value by p_k. For each DES key of weight k we estimated (16000
times) the expected percentage of the key space that would have been searched
before finding the encryption key. Each of these estimates was based on 16000 · p_k
measurements in order to ensure that at least 16 measurements were considered for
every estimate associated to nonzero p k 's. The last two rows of Table 1 show, for
each DES implementation and some key weights, the average of the values obtained.
Recovering 3.95 bits of a DES key gives a modest improvement in the time
needed to recover the key. But, Table 1 implies that a timing attack that reveals
the Hamming weight of the key is potentially more threatening. In particular, an
adversary can restrict attention to keys determined to have either a significantly
low or high Hamming weight. The adversary can do this by performing timing
measurements until one is found to be either significantly low or high. Once the
adversary detects such a rare key the subsequent key search can be much less than
the usual amount. Thus, the adversary can balance the time to find such rare keys
with the time needed for key recovery. In some systems the recovery of even a
single key may cause total disruption and/or forward vulnerability.
4.1.2 Sources of the dependency between DES encryption time and key's Hamming
weight. The key schedule generation in L-DES is carried out by a procedure
called des set key. This procedure computes the resulting key schedule bitstring
by performing a bitwise OR with some pre-computed constants. For each bit of the
key, such bitwise or's are computed if and only if the key bit is set. For that purpose,
it uses a piece of code of the following form: If (condit) then instr_1
else instr_2. The number of times condit is true turns out to be exactly the Hamming
weight of the DES key. This is the main source of running time differentials
in L-DES.
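The mechanism is easy to reproduce in any language with a coarse timer; the Python sketch below (ours; or_masks and the loop body are invented stand-ins for des_set_key's precomputed constants) performs one extra bitwise OR per set key bit, so its running time grows with the Hamming weight:

import random, time

def schedule_like(key_bits, or_masks):
    acc = 0
    for b, mask in zip(key_bits, or_masks):
        if b:                            # condit is true exactly wt(K) times
            acc |= mask                  # stand-in for the OR with constants
    return acc

masks = [random.getrandbits(32) for _ in range(56)]
for w in (8, 28, 48):
    key = [1] * w + [0] * (56 - w)
    t0 = time.perf_counter()
    for _ in range(10000):
        schedule_like(key, masks)
    print(w, time.perf_counter() - t0)   # time increases with w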
In RSA-DES's key schedule generation code there is also a procedure that contains
two conditional statements. These conditional statements are used in the
computation of the subkeys. More precisely, they implement a fixed permutation
PC2 of some bits of the key. Their code is of the following form: If (condit) then
instr. The total number of times condit is true is equal to the sum of the Hamming
weight of all subkeys. Thus, the number of times instr is executed is directly
proportional to the Hamming weight of the DES key.
As mentioned in Sect. 2, Kocher [Kocher 1996] conjectured that in DES's key
schedule the rotation of nonzero bits using conditional statements could give rise
to running time differentials. In the implementations of DES we analyzed we found
no evidence to support this conjecture.
Finally, note that it is clear from Fig. 4 that in L-DES there is a source of non-fixed
running times which does not depend on the key schedule generation process.
This is evidenced by the non-constant distance between the two curves shown in
Fig. 4. The source of these time differentials is not due to conditional statements.
We were not able to identify the cause of this dependency nor able to exploit it in
order to recover all of the DES key.
4.2 Derivation of the Timing Characteristics of the Target System
As discussed in Sect. 4.1, in both DES implementations that we studied the encryption
time was roughly equal to a linear function of the key's Hamming weight
plus some normally distributed random noise. In this section we exploit this fact
in order to derive all the necessary information needed to perform a timing attack
that reveals the Hamming weight of the target system's DES key.
First we need to introduce some notation. Assume we have m measurements on
the time it takes the target system to perform a DES encryption. The time measurements
might correspond to encryptions performed under different DES keys.
For i ∈ {1, ..., k}, denote by K_i the i-th key that is used by the target system during
the period that timing measurements are performed. We make the (realistic)
assumption that K_1, ..., K_k are chosen at random in {0,1}^56 and independent of
each other. Let X^(i) denote the Hamming weight of key K_i. Thus, the distribution
of X^(i) is a Binom(56, 1/2). Since we are assuming that the K_i's are chosen independently
we have that X^(1), ..., X^(k) are independent random variables. Note
that successive time measurements can correspond to encryptions of the message
under the same key. For i ∈ {1, ..., k}, let τ_i be the index of the
last measurement corresponding to an encryption performed with key K_i. For con-
venience's sake, let τ_0 = 0 and τ_k = m. Denote by
I_i the set of indices that correspond to time measurements under key K_i, i.e. for
i ∈ {1, ..., k}, I_i = {τ_{i-1} + 1, ..., τ_i}. For i ∈ {1, ..., k} and j ∈ I_i,
let T^(i)_j be the random variable representing the time it takes the target system to
perform the j-th encryption of the message with key K_i. Finally, for i and j, let e^(i)_j
be a random variable representing the effect of random noise on the j-th encryption
with key K_i. Thus, the e^(i)_j's model measurement
inaccuracies and the target
system's running time fluctuations.
We now have all the notation necessary to formally state the problem we want
to address. Indeed, the linear dependency between the encryption time and the
Hamming weight of the key in both DES implementations that we studied implies
that there exists α, β, and σ such that for all i ∈ {1, ..., k} and j ∈ I_i,
T^(i)_j = α + β·X^(i) + e^(i)_j, where e^(i)_j ~ Norm(0, σ^2). (1)
Our problem is to infer from timing measurements the parameters α, β, and σ for
which (1) holds. We address two variations of this problem. In Sect. 4.2.1 we show
how to deal with the case where the τ_i's are known. In Sect. 4.2.2 we show how to
handle the case where the τ_i's are unknown. The former case is the most realistic
one. Indeed, a standard cryptanalytic assumption is that the attacker knows the key
management procedure of the target system.
4.2.1 Known τ_i's. We propose two alternative statistical methods for deducing
the parameters α, β, and σ for which (1) holds. One method is based on maximum
likelihood estimators and the other one on asymptotically unbiased estimators.
Since the following discussion heavily relies on standard concepts and results from
probability and statistics we refer the reader unfamiliar with these subjects to [Feller
1966; Ross 1988; Zacks 1971] for background material and terminology.
Maximum Likelihood Estimators: Let X = (X^(1), ..., X^(k)), T^(i) = (T^(i)_j)_{j ∈ I_i},
and T = (T^(1), ..., T^(k)). Thus, X, T^(i), and T are random vectors. Further-
more, let x, t^(i), and t be the actual values taken
by X, T^(i), and T respectively.
Let f_T(t; α, β, σ) be the marginal distribution of T given α, β, and σ. For a
fixed collection of time measurements t the values of α, β, and σ that maximize
f_T(t; α, β, σ) are the maximum likelihood estimators we are looking for. The maximum
likelihood estimators are the values most likely to have produced the observed
time measurements. They can also be regarded as the values minimizing the loss
-log f_T(t; α, β, σ). This explains why maximum likelihood estimators are
thought to be good predictors. Thus, in order to determine good estimators for α,
β, and σ we first compute f_T(t; α, β, σ).
Proposition 1. The marginal distribution of T given α, β, and σ is
f_T(t; α, β, σ) = Σ_{x ∈ {0,...,56}^k} Π_{i=1}^{k} [ (56 choose x_i) 2^{-56} Π_{j ∈ I_i} (1/(σ√(2π))) e^{-(t^(i)_j - α - β x_i)^2 / (2σ^2)} ].
Proof. Let f_{X,T}(·; α, β, σ), f_{T|X=x}(·; α, β, σ) and f_X(·; α, β, σ) denote the joint
density function of X and T, the density function of T given X = x, and the
probability distribution of X respectively. For convenience's sake, we henceforth
omit α, β, and σ from the expressions for f_{X,T}, f_{T|X=x}, and f_X.
Observe that the independence of the X^(i)'s and e^(i)_j's imply that the T^(i)'s are
independent. Thus, the joint density function of X and T given α, β, and σ is
f_{X,T}(x, t) = f_X(x) f_{T|X=x}(t) = Π_{i=1}^{k} f_{X^(i)}(x_i) f_{T^(i)|X^(i)=x_i}(t^(i)),
where the last equality follows since the X^(i)'s are independent and the T^(i)'s are
independent.
From (1) we get that T^(i)_j given X^(i) = x_i is distributed
like a Norm(α + β x_i, σ^2).
Moreover, for fixed i, the T^(i)_j's are independent random variables. Hence,
f_{T^(i)|X^(i)=x_i}(t^(i)) = Π_{j ∈ I_i} (1/(σ√(2π))) e^{-(t^(i)_j - α - β x_i)^2 / (2σ^2)}.
Also, we know that f_{X^(i)}(x_i) = P[X^(i) = x_i]. Thus,
f_{X,T}(x, t) = Π_{i=1}^{k} P[X^(i) = x_i] Π_{j ∈ I_i} (1/(σ√(2π))) e^{-(t^(i)_j - α - β x_i)^2 / (2σ^2)}.
The marginal distribution of T given α, β, and σ equals the sum, over all values
taken by x, of f_{X,T}(x, t). Hence,
f_T(t; α, β, σ) = Σ_x f_{X,T}(x, t).
The conclusion follows directly from the previous equality and the fact that X^(i) ~
Binom(56, 1/2).
For a given t the values of α, β, and σ that maximize the right hand side of the
expression in Proposition 1 are the maximum likelihood estimators sought. As is often
the case when dealing with maximum likelihood estimators it is difficult to solve
explicitly for them. (See [Zacks 1971, Ch. 5, §2] for a discussion of computational
routines that can be used to calculate maximum likelihood estimators.)
The advantage of the above described approach for determining the parameters
relevant for carrying out the timing attack is that it uses all the available timing
measurements. But, it does not allow us to determine how many measurements are
sufficient in order to obtain accurate estimations of the parameters sought. The
alternative approach described below solves this problem.
Asymptotic Estimators: Our goal is to find good estimators α̂, β̂, and σ̂ for
α, β, and σ. Moreover, we are interested in determining the asymptotic (on the
number of timing measurements) behavior of such estimators. In particular, their
asymptotic distributions, their limiting values, and their rate of convergence.
We will now derive good predictors for α, β, and σ. We start with a key observa-
tion. Since the expectation and variance of a Binom(56, 1/2) are 28 and 14 respec-
tively, taking the expectation and variance in (1) yields that for all i and j,
E[T^(i)_j] = α + 28β and V[T^(i)_j] = 14β^2 + σ^2. (2)
Hence, if we knew μ_T = E[T^(i)_j], σ^2_T, and σ^2 we could solve for α and β in (2). This suggests
that if we can find good estimators for μ_T, σ^2_T, and σ^2, then we can derive good
estimators for α and β. We now provide candidates for μ̂_T, σ̂^2_T, and σ̂^2, the estimators
of μ_T, σ^2_T, and σ^2 respectively. But, we first need to introduce additional
notation. Let T̄^(i) = (1/|I_i|) Σ_{j ∈ I_i} T^(i)_j and ē^(i) = (1/|I_i|) Σ_{j ∈ I_i} e^(i)_j, and let
μ̂_T = (1/k) Σ_{i=1}^{k} T̄^(i),
σ̂^2_T = (1/k) Σ_{i=1}^{k} (T̄^(i) - μ̂_T)^2,
σ̂^2 = (1/(k(n-1))) Σ_{i=1}^{k} Σ_{j ∈ I_i} (T^(i)_j - T̄^(i))^2 (when |I_i| = n for all i).
Solving for α and β in (2) yields that the two natural candidates for α̂ and β̂,
the estimators for α and β, are
β̂ = ((σ̂^2_T - σ̂^2/n)/14)^{1/2} and α̂ = μ̂_T - 28 β̂.
We now prove that α̂ is well defined.
Proposition 2. σ̂^2_T - σ̂^2/n ≥ 0.
Proof. Just note that
σ̂^2_T - σ̂^2/n can be rewritten as an average of squared terms, and is therefore nonnegative.
We henceforth denote a chi-square distribution with l degrees of freedom by χ^2_l.
Proposition 3. If |I_i| = n for i = 1, ..., k, then the distribution of
k σ̂^2_T / (14β^2 + σ^2/n) is approximately a χ^2_{k-1}.
Proof. Since T^(i)_j = α + β X^(i) + e^(i)_j, we have that T̄^(i) = α + β X^(i) + ē^(i).
Since ē^(i) is the average of n independent Norm(0, σ^2) random variables, ē^(i) ~
Norm(0, σ^2/n). In addition, the de Moivre-Laplace Theorem [Hazewinkel 1988,
pp. 397] states that the Binom(m, p) distribution can be expressed in terms of
the standard normal distribution. Moreover, if m → ∞, then such an expression
is exact, and if mp(1 - p) ≥ 10, then the expression provides a good approximation
of the Binomial distribution [Ross 1988, pp. 170-171]. Thus, since X^(i) ~
Binom(56, 1/2), the distribution of X^(i) is well approximated by a Norm(28, 14).
Hence, since X^(i) is independent of ē^(i) and the sum of independent normal distributions
is a normal distribution, it follows that T̄^(i) is approximately distributed as
a Norm(α + 28β, 14β^2 + σ^2/n). The desired conclusion follows from a classical statistics
result [Hogg and Tanis 1997, Theorem 5.3.4] and Proposition 2.
Proposition 4. If |I_i| = n for i = 1, ..., k and σ^2/n is negligible, then √k (β̂ - β)
converges (in distribution) 4 to a Norm(0, β^2/2), up to a
small constant error term,
when k → ∞.
4 Recall that when X, X_1, X_2, ... are random variables on some probability space, it is
Proof. First, note that if we neglect σ^2/n then Propositions 2 and 3 and our definition
of β̂ imply that the distribution of β̂^2 is approximately that of (β^2/k) χ^2_{k-1}.
Second, recall that the sum of the squares of l independent identically distributed
normal random variables with zero mean and variance equal to 1 is distributed
according to a χ^2_l. Equivalently, the sum of l independently distributed χ^2_1
variables is distributed according to a χ^2_l. Hence, since the expectation and variance
of a χ^2_1 random variable are 1 and 2 respectively, the Central Limit Theorem implies
that (χ^2_k - k)/√k converges
(in distribution) to a Norm(0, 2).
Putting the two observations together shows that √k (β̂ - β) converges (in
distribution) to a Norm(0, β^2/2), up to a
small constant term, when k → ∞. The
stated result follows immediately.
Theorem 1. If |I_i| = n for i = 1, ..., k, σ^2/n is negligible and k is sufficiently
large, then the distribution of
α̂ is (approximately) a Norm(α, 406 β^2 / k).
Proof. The Law of Large Numbers implies that σ̂^2_T and σ̂^2 converge (almost
surely) 5 to σ^2_T and σ^2 respectively. Hence, by continuity, β̂
converges (almost surely) to β as k → ∞. This fact and
Proposition 4 yield that if k → ∞, then √k (α̂ - α) = √k (μ̂_T - μ_T) - 28 √k (β̂ - β) converges (in dis-
tribution) to a Norm(0, (14 + 392) β^2), up to a
small error term. The desired conclusion follows
immediately.
Remark 2. Theorem 1 provides an approximation to the distribution of α̂. The
approximation error arises from three sources. The first one is the use of the de
Moivre-Laplace Theorem to express a Binom(56, 1/2) in terms of a Norm(28, 14).
The second one is due to the use of the Central Limit Theorem to approximate the
distribution of an estimator by its limit distribution. The final source of error
is due to the use of the Law of Large Numbers to approximate an estimator by
its asymptotic value. These three sources of approximation error can be bounded
through the de Moivre-Laplace Theorem, the Berry-Esseen inequality [Hazewinkel 1988,
pp. 369], and Chebyshev's inequality [Ross 1988, pp. 337] respectively. A bound on
the accumulated approximation error shows that Theorem 1 is fairly accurate.
Corollary 1. If |I_i| = n for i = 1, ..., k, σ^2/n is negligible and k is sufficiently
large, then P[|α̂ - α| ≥ ε α] = O(β^2/(k ε^2 α^2)) and P[|β̂ - β| ≥ ε β] = O(1/(k ε^2)).
Proof. The bound concerning α̂ follows from Theorem 1 and Chebyshev's in-
equality [Ross 1988, pp. 337]. In order to prove the other bound recall that β̂ =
said that X_n converges in distribution to X as n → ∞, if P[X_n ≤ x] → P[X ≤ x] at
all points x at which F_X, the distribution function of X, is continuous.
5 Recall that when X, X_1, X_2, ... are random variables on some probability space, it is
said that X_n converges almost surely to X as n → ∞, if {lim_{n→∞} X_n = X}
is an event whose probability is 1.
(μ̂_T - α̂)/28. Hence,
P[|β̂ - β| ≥ ε β] ≤ P[|μ̂_T - μ_T| ≥ 14 ε β] + P[|α̂ - α| ≥ 14 ε β] ≤ (V[μ̂_T] + V[α̂])/(14 ε β)^2,
where the last inequality is a consequence of applying Chebyshev's inequality twice.
Note that from Theorem 1 we have that V[α̂] = O(β^2/k + σ^2/(nk)). The result follows.
Corollary 1 tells us that with probability at least 1 - δ it suffices to take n time
measurements for each of k = O(1/(δ ε^2))
different keys to approximate α and β to within
a multiplicative factor of (1 ± ε).
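The estimators are straightforward to evaluate numerically. The Python sketch below follows the reconstruction given above (the true parameter values and the sample sizes are arbitrary choices of ours):

import random, statistics

ALPHA, BETA, SIGMA = 50.0, 2.0, 1.5      # assumed true parameters
k, n = 400, 32                            # number of keys, measurements per key

means, within = [], []
for _ in range(k):
    X = sum(random.getrandbits(1) for _ in range(56))          # Binom(56, 1/2)
    ts = [ALPHA + BETA * X + random.gauss(0, SIGMA) for _ in range(n)]
    means.append(statistics.fmean(ts))
    within.append(statistics.variance(ts))                     # /(n-1) normalization

mu_hat = statistics.fmean(means)
varT_hat = statistics.pvariance(means)
sigma2_hat = statistics.fmean(within)

beta_hat = max(0.0, (varT_hat - sigma2_hat / n) / 14) ** 0.5
alpha_hat = mu_hat - 28 * beta_hat
print(alpha_hat, beta_hat)                # close to ALPHA and BETA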
4.2.2 Unknown τ_i's. The assumption that the τ_i's are known made in the previous
section is not strictly necessary since an attacker may alternate between performing
several timing measurements over a short period of time and resting for an appropriately
long period of time. Hence, the problem of deducing the target system's
design characteristics reduces to the case in which the τ_i's are known provided that
the keys are not changed too often and the attacker's resting period is longer than
a key's lifetime. (Changing keys too often creates a key management problem for
the cryptosystem's user. Thus, it is reasonable to assume that a key's lifetime is
not excessively short.)
We now discuss another approach for handling the case of unknown τ_i's under
the assumption that the attacker has access to several identical copies of the target
system, e.g., several copies of a smart card supporting a DES based challenge-response
protocol. Let us make the reasonable assumption that the target system's keys
are independently generated. In this case the attacker may perform, over
a short period of time, several timing measurements for each copy of the target
system. If the keys are not changed too often the attacker can deduce the target
system's relevant timing characteristics as in Sect. 4.2.1. Indeed, the attacker can
assume that all the timing measurements arising from the same copy of the system
come from encryptions performed under the same key. Since keys corresponding
to different copies of the target system are independently generated and the copies
of the system are identical, the problem of deducing the target system's design
characteristics reduces to the case in which the τ_i's are known.
Tests of statistical hypotheses give rise to another alternative for handling the case
of unknown τ_i's. Indeed, consider the situation in which an attacker determines m
timing measurements t_1, ..., t_m arising from random variables satisfying (1). Assume
keys are not changed too often, i.e., at least n ≤ m consecutive timing measurements
come from encryptions performed under the same key. Thus, for each j such that
n ≤ j ≤ m - n, the attacker can perform a test of equality of two normal distri-
butions [Hogg and Tanis 1997, pp. 372-385] on the samples t_{j-n+1}, ..., t_j and
t_{j+1}, ..., t_{j+n}. The significance level of such tests allows the attacker to determine
the measurements around where a change of key occurs. Discarding the measurements
around where the attacker suspects a change of key occurs yields a sequence
of timing measurements from which the target system's design characteristics can
be deduced as in the case of known τ_i's.
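A sketch of the change-point idea (our own Python illustration; the window size and threshold are arbitrary) compares adjacent windows of timings with a Welch-style statistic and flags likely key changes:

import statistics

def change_points(t, n, thresh=4.0):
    # flag indices j where the windows t[j-n:j] and t[j:j+n] differ significantly
    cps = []
    for j in range(n, len(t) - n + 1):
        a, b = t[j - n:j], t[j:j + n]
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        if se > 0 and abs(statistics.fmean(a) - statistics.fmean(b)) / se > thresh:
            cps.append(j)
    return cps

Measurements near flagged indices are discarded; each remaining run is then treated as coming from a single key.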
5. FINAL COMMENTS
In [Kocher 1996] a "blinding technique" similar to that used for blind signatures
[Chaum 1983] is proposed in order to prevent a timing attack against a modular
exponentiator. For both implementations of DES we studied, blinding techniques
can be adapted to produce (almost) fixed running time for the key schedule generation
processes. Indeed, let K be the DES key of Hamming weight wt(K) whose key
schedule we want to generate. Let K 0 be a bitstring of length 56 generated as fol-
lows: randomly choose b wt(K)c (respectively d 56\Gammawt(K)e) of the bits of K which are
set to 1 (respectively 0) and set the corresponding bits of K 0 to 0 (respectively 1).
Denote the bitwise xor of K and K 0 by K \PhiK 0 . Note that wt(K 0
when the Hamming weight of K is even, and wt(K 0 28 and wt(K \PhiK 0
the Hamming weight of K is odd. Modify the key schedule generation processes so
schedules for keys K 0 and K \Phi K 0 are generated. Note that the work required
for this is independent of the Hamming weight of K. Hence, no sources of non-fixed
running time are introduced during this step. Let K_1, ..., K_16 and K′_1, ..., K′_16 be
the key schedules obtained. Recall that K_i (respectively K′_i) is a
permuted selection
of bits from the key K ⊕ K′ (respectively K′). Thus, the key schedule of
K is K_1 ⊕ K′_1, ..., K_16 ⊕ K′_16.
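The splitting step is a few lines of code. The Python sketch below (ours; it only builds K′ and K ⊕ K′ and checks the weight claim, leaving the schedule generation to the DES code) makes the work independent of wt(K):

import random

def blind_split(key):                     # key: list of 56 bits
    ones = [i for i, b in enumerate(key) if b]
    zeros = [i for i, b in enumerate(key) if not b]
    flips = random.sample(ones, len(ones) // 2) + \
            random.sample(zeros, len(zeros) // 2)
    kp = key[:]                           # K': K with the selected bits flipped
    for i in flips:
        kp[i] ^= 1
    kx = [a ^ b for a, b in zip(key, kp)]  # K xor K'
    return kp, kx

key = [random.getrandbits(1) for _ in range(56)]
kp, kx = blind_split(key)
assert sum(kp) == 28 and sum(kx) in (27, 28)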
Figure 6 plots the encryption times of RSA-DES and of RSA-DES modified
as previously explained. Note the very clear reduction in time differentials. The
reduction is achieved at the expense of increasing the encryption time by a factor
of approximately 1.6. Unfortunately, this blinding technique still leaks the parity
of the weight of the original DES key, i.e., 1 bit of information. (A careful look at
Fig. 6 confirms this fact.) This leak can be fixed using the idea developed above.
Indeed, for a given DES key K one can generate three DES keys,
two of them with Hamming weight 28 and one with Hamming weight 27. Of the
three keys one will be spurious, meaning that its key schedule should be generated
and the results discarded. The xor of the key schedules generated by the two non-
spurious keys will give rise to the key schedule sought. When the Hamming weight
of the original DES key is even (respectively odd) the spurious key will be the one
of Hamming weight 27 (respectively one of the keys of Hamming weight 28).
We have seen that the main source of non-fixed running times was
the key schedule generation procedure. In many fast software implementations
key setup is an operation which is separated from encryption. This would thwart a
timing attack if encryption time is constant. But, in several systems it is impractical
to precompute the key schedule. For example, in smart cards pre-computations are
undesirable due to memory constraints.
In summary, both DES implementations we studied are fairly resistant to a timing
attack. This leads us to the question of whether a timing attack can find all of the
DES key and not only its Hamming weight. Although we did not succeed in tuning
the timing attack technique in order to recover all the bits of a DES key, we identified
in L-DES a source of non-fixed running time that is not due to the key generation
process. Indeed, the difference in the slopes of the curves plotted in Fig. 4 shows
that the encryption time, not counting the key generation process, depends on the
key used. This fact is a weakness that could (potentially) be exploited in order to
[Plot omitted: encryption time versus the Hamming weight of the key, for RSA-DES and blinded RSA-DES.]
Fig. 6. RSA-DES and modified RSA-DES encryption times.
recover all of the DES key. It opens the possibility that the time it takes to encrypt
a message M with a key K is a non-linear function of both M and K, e.g., it is a
monotonically increasing function in the Hamming weight of M ⊕ K. This would
allow a timing attack to recover a DES key by carefully choosing the messages to be
encrypted. We were not able to identify clear sources of non-linear dependencies
between time differentials and the inputs to the DES encryption process in either
of the DES implementations that we studied. Nevertheless, we feel that the partial
information leaked by both implementations of DES that we analyzed suggests that
care must be taken in the implementation of DES, otherwise, all of the key could
be compromised through a timing attack.
ACKNOWLEDGMENTS
We are grateful to Shang-Hua Teng for calling to our attention the work of Kocher.
We thank Raul Gouet, Luis Mateu, Alejandro Murua, and Jaime San Martin for
helpful discussions. We also thank Paul Kocher for advice on how to measure running
times accurately on an MSDOS TM environment. Finally, we thank an anonymous
referee for pointing out that the blinding technique of Sect. 5 was leaking
information about the parity of the Hamming weight of the key.
A. ACCURATE TIME MEASUREMENTS
Standard C routines allow one to measure time events in an MSDOS TM environment
with an accuracy of 54925.4 μs [Heidenstrom 1995]. In order to measure with a time
precision of 0.8381 μs on a Pentium TM computer running MSDOS TM we followed
Kocher's advice [Kocher 1997]. He suggested reading the value of a high-precision
timer by accessing port 64. Whenever this timer overflows to 65536 it generates
one interrupt. Interrupts occur once every 54925.4 μs. Hence, one can measure time
intervals with a precision of 54925.4 μs / 65536 ≈ 0.8381 μs. It is also a good idea
to run from a RAM disk. For more information on how to perform accurate time
measurements on the PC family under DOS the reader is referred to [Heidenstrom
1995]. Performing the timing measurements in a Windows or Unix environment is
clearly a bad idea since both of them are multi-process operating systems.
--R
Differential cryptanalysis of DES-like cryptosystems
Journal of Cryptology
Differential cryptanalysis of the full 16-round DES
On the importance of checking cryptographicprotocols for faults.
Blind signatures for untraceable payments.
A practical implementation of the timing attack.
New directions in cryptography.
Network security under siege.
An introduction to probability theory and its applications
A timing attack on RC5.
Encyclopaedia of Mathematics
FAQ/application notes: Timing on the PC family under DOS.
Probability and statistical inference (fifth
RSAEuro: A cryptographic toolkit.
Timing attacks on implementations of Diffie-Hellman
Private communication.
DES package.
Potential flaw seen in cash card security.
The first experimental cryptanalysis of the data encryption standard.
Linear cryptanalysis method for DES cipher.
A method for obtaining digital signatures and public-key cryptosystems
A first course in probability (third
Applied Cryptography: protocols
The theory of statistical inference.
--TR
Linear cryptanalysis method for DES cipher
Applied cryptography (2nd ed.)
Network Security Under Siege
A method for obtaining digital signatures and public-key cryptosystems
Cryptography
A Timing Attack on RC5
A Practical Implementation of the Timing Attack
Differential Cryptanalysis of the Full 16-Round DES
The First Experimental Cryptanalysis of the Data Encryption Standard
Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems
--CTR
Ruggero Lanotte , Andrea Maggiolo-Schettini , Simone Tini, Information flow in hybrid systems, ACM Transactions on Embedded Computing Systems (TECS), v.3 n.4, p.760-799, November 2004 | cryptography;data encryption standard;timing attack;cryptanalysis |
330445 | Distributed Path Reservation Algorithms for Multiplexed All-Optical Interconnection Networks. | AbstractIn this paper, we study distributed path reservation protocols for multiplexed all-optical interconnection networks. The path reservation protocols negotiate the reservation and establishment of connections that arrive dynamically to the network. These protocols can be applied to both wavelength division multiplexing (WDM) and time division multiplexing (TDM) networks. Two classes of protocols are discussed: forward reservation protocols and backward reservation protocols. Simulations of multiplexed two-dimensional torus interconnection networks are used to evaluate and compare the performance of the protocols and to study the impact of system parameters, such as the multiplexing degree and the network size, speed, and load, on both network throughput and communication delay. The simulation results show that, in most cases, the backward reservation schemes provide better performance than their forward reservation counterparts. | Introduction
With the increasing computation power of parallel
computers, interprocessor communication has become
an important factor that limits the performance of supercomputing
systems. Due to their capabilities of
offering large bandwidth, optical interconnection net-
works, whose advantages have been well demonstrated
on wide and local area networks (WAN and LAN)
[1, 6], are promising networks for future supercomputers
Directly-connected networks, such as meshes, tori,
rings and hypercubes, are commonly used in commercial
supercomputers. By exploiting space diversity
and traffic locality, they offer larger aggregate
throughput and better scalability than shared media
networks such as buses and stars. Optical direct
networks can use either multi-hop packet routing
(e.g., ShuffleNet [2]) or deflection routing [4].
The performance of packet routing is limited by the
speed of electronics since buffering and address decoding
usually requires electronic-to-optics and optics-to-
electronic conversions at intermediate nodes. Thus,
This work was supported in part by NSF awards CCR-
9157371 and MIP-9633729.
packet routing cannot efficiently utilize the potentially
high bandwidth that optics can provide. While deflection
routing requires simple network nodes and minimal
buffering, a mechanism is necessary to guarantee
bounded transfer delays within the network. As
pointed out in [1], although direct optical networks
have intrinsically high aggregate throughput, this advantage
comes at the expense of additional control
complexity in the form of routing and congestion con-
trol. New solutions should exploit the inherent flexibility
of dynamic reconfiguration of logical topologies.
In order to fully explore the potential of optical
communication, optical signals should be transmitted
in a pure circuit-switching fashion in the optical
domain. No buffering and optical-to-electronic
or electronic-to-optical conversions should be needed
at intermediate nodes. Moreover, multiplexing techniques
should be used to fully utilize the large band-width
of optics and to provide multiple virtual channels
on each communication link. Two techniques
can be used for multiplexing optical signals on a
fiber-optics link, namely time-division multiplexing
(TDM) [7, 9, 12] and wavelength-division multiplexing
(WDM) [5, 6, 15]. In TDM, a link is multiplexed by
having different virtual channels communicate in different
time slots, while in WDM, a link is multiplexed
by having different virtual channels communicate using
different wavelengths.
Regardless of the multiplexing technique, two approaches
can be used to establish connections in multiplexed
networks, namely link multiplexing (LM) and
path multiplexing (PM). In LM, a connection which
spans more than one communication link is established
by using possibly different channels on different
links. In PM, a connection which spans more than
one communication link uses the same channel on all
the links. In other words, PM uses the same time-slot
or the same wavelength on all the links of a connec-
tion, while LM can use different time-slots or different
wavelengths, thus requiring time-slot interchange or
wavelength conversion capabilities at each intermediate
node.
Centralized control mechanisms for wavelength assignment
[11] or time slot assignment [12] in multiplexed
networks, are not scalable to large networks.
It is, therefore, essential to develop distributed path
reservation protocols for all-optical communication in
Figure
1: Path multiplexing in a linear array
large scale multiplexed networks. Such protocols are
studied in this paper. For simplicity of the presen-
tation, the protocols will be presented for path multi-
plexing. Similar, and in fact somewhat simpler, protocols
can be designed for link multiplexing by removing
the restriction that the same virtual channel should be
used on all the links forming a connection.
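The distinction reduces to a per-path feasibility test; the Python sketch below (an illustration we add, where avail[l] denotes the set of free channels on the l-th link of a route) states it in two lines:

def pm_channels(avail):
    # path multiplexing: one common channel must be free on every link
    return set.intersection(*avail)

def lm_feasible(avail):
    # link multiplexing: each link may contribute a different free channel
    return all(len(a) > 0 for a in avail)

route = [{0, 1}, {1}, {1, 2}]
print(pm_channels(route), lm_feasible(route))   # {1} True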
Two types of protocols are considered and evaluated
in the following sections, namely forward reservation
and backward reservation protocols. These
protocols are generalizations of control protocols in
non-multiplexed circuit-switching networks [10]. Mul-
tiplexing, however, introduces additional complexity
which requires a careful consideration of many factors
and parameters that affect the efficiency of the protocols
Most studies on all-optical multiplexed networks
assume virtual channel assignments [3, 11], but only
a few works consider the on-line control mechanisms
needed to find these assignments. In [12], a distributed
control algorithm to establish connections in multiplexed
multistage networks is proposed. In [14], the
performances of PM and LM are compared while taking
into consideration the signaling overhead in the
protocols. The protocols described in the above works
fall into the forward reservation category. Backward
reservation schemes for multiplexed networks have not
been described and evaluated before.
The rest of the paper is organized as follows. In
section 2, we discuss the problem of path reservation
in multiplexed networks. In Section 3 and 4 we discuss
the distributed control protocols. In Section 5,
we present the results of the simulation study and in
Section 6, we conclude the paper.
Path Reservation in Multiplexed Networks
We consider directly-connected networks consisting
of switches with a fixed number of input and output
ports. All but one input port and one output port are
used to interconnect with other switches, while one
input port and one output port are used to connect to
a local processing element.
We use
Figures
1 and 2 to illustrate path multiplexing
in TDM networks, where two virtual channels
are used on each link by dividing the time domain
into time slots, and using alternating time slots for
the two channels c0 and c1. Figure 1 shows four established
connections using the two channels, namely
connections (0, 2) and (2, 1) that are established using
channel c0, and connections (2, 4) and (3, 2) that
are established using channel c1, where (u, v) is used
to denote a connection from node u to node v. The
switches are globally synchronized at time slot bound-
aries, and each switch is set to alternate between the
two states that are needed to realize the established
connections.
Figure 2: Changing the state of a switch in TDM ((a) time slot 0; (b) time slot 1).
For example, Figure 2 shows the two
states that the 3 × 3 switch attached to processor 2
must realize for the establishment of the connections
shown in Figure 1. Note that each switch can be an
electro-optical switch (Ti:LiNbO 3
switch, for example
[8]) which connects optical inputs to optical outputs
without optical/electronic conversion.
The duration of a time slot is typically equal to the
duration of several hundred bits. For synchronization
purposes, a guard band at each end of a time slot must
be used to allow for changing the state of the switches
and to accommodate possible drifting or jitter. For
example, if the duration of a time slot is 276ns, which
includes a guard band of 10ns at each end, then 256ns
can be used to transmit data. If the transmission rate
is 1Gb=s, then a packet of 256 bits can be transmitted
during each time slot. Note that the optical transmission
rate is not affected by the relatively slow speed of
changing the state of the switches (10ns:) since that
change is performed only every 276ns.
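The slot budget is a one-line computation (a Python check added here, using the example's numbers):

SLOT_NS, GUARD_NS, RATE_GBPS = 276, 10, 1.0
payload_ns = SLOT_NS - 2 * GUARD_NS        # 256 ns of usable transmission time
bits_per_slot = payload_ns * RATE_GBPS     # at 1 Gb/s, 1 bit per ns -> 256 bits
print(payload_ns, bits_per_slot)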
Figure 1
can also be used to demonstrate the establishment
of connections in WDM networks, where two
different wavelengths are used for the two channels. In
such networks, each switch should have the capability
to switch signals with different wavelengths indepen-
dently. Moreover, transmitters and receivers at each
node should be tunable to any of the two wavelengths
used to implement the channels. Alternatively, two
transmitters and two receivers may be used at each
node for the different wavelengths.
In order to support a distributed control mechanism
for connection establishment, we assume that, in
addition to the optical data network, there is a logical
shadow network through which all the control messages
are communicated. The shadow network has
the same physical topology as the data network. The
traffic on the shadow network, however, consists of
small control packets, and thus is much lighter than
the traffic on the data network. The shadow network
operates in packet switching mode; routers at intermediate
nodes examine the control packets and update
local bookkeeping information and switch states ac-
cordingly. The shadow network can be implemented as
an electronic network, or alternatively, a virtual channel
on the data network can be reserved exclusively
for exchanging control messages. We also assume that
a node can send or receive messages through different
virtual channels simultaneously.
A path reservation protocol ensures that the path
from a source node to a destination node is reserved
before the connection is used. There are many options
for path reservation which are discussed next.
• Forward reservation versus backward reservation.
Locking mechanisms are needed by the distributed
path reservation protocols to ensure the exclusive usage
of a virtual channel for a connection. This variation
characterizes the timing at which the protocols
perform the locking. Under forward reservation, the
virtual channels are locked by a control message that
travels from the source node to the destination node.
Under backward reservation, a control message travels
to the destination to probe the path, then virtual
channels that are found to be available are locked by
another control message which travels from the destination
node to the source node.
• Dropping versus holding. This variation characterizes
the behavior of the protocol when it determines
that a connection establishment does not progress.
Under the dropping approach, once the protocol determines
that the establishment of a connection is not
progressing, it releases the virtual channels locked on
the partially established path and informs the source
node that the reservation fails. Under the holding ap-
proach, when the protocol determines that the establishment
of a connection is not progressing, it keeps
the virtual channels on the partially established path
locked for some period of time, hoping that during
this period, the reservation will progress. If, after this
timeout period, the reservation still does not progress,
the partial path is then released and the source node
is informed of the failure. Dropping can be viewed as
holding with holding time equal to 0.
• Aggressive reservation versus conservative reserva-
tion. This variation characterizes the protocol's treatment
of each reservation. Under the aggressive reser-
vation, the protocol tries to establish a connection by
locking as many virtual channels as possible during
the reservation process. Only one of the locked channels
is then used for the connection, while the others
are released. Under the conservative reservation,
the protocol locks only one virtual channel during the
reservation process.
Deadlock
Deadlock in the control network can arise from two
sources. First, with limited number of buffers, a request
loop can be formed within the control network.
Second, deadlock can occur when a request is holding
(locking) virtual channels on some links while requesting
other channels on other links. This second
source of deadlock can be avoided by the dropping or
holding mechanisms described above. Specifically, a
request will give up all the locked channels if it does
not progress within a certain timeout period.
Many deadlock avoidance or deadlock prevention
techniques for packet switching networks proposed in
the literature can be used to deal with deadlock within
the control network (the first source of deadlock).
Moreover, the control network is under light traffic,
and each control message consists of only a single
packet of small size (4 bytes). Hence, it is feasible
to provide a large number of buffers in each router
to reduce or eliminate the chance of deadlock. In the
simulations presented in Section 5 for comparing the
reservation schemes, we will nullify the effect of deadlock
in the control network by assuming an infinite
number of control packet buffers at each node. This
will allow us to concentrate on the effect of the reservation
protocols on the efficiency of the multiplexed
data network.
States of Virtual Channels
The control network router at each node maintains
a state for each virtual channel on links connected to
the router. For forward reservation, the control router
maintains the states for the outgoing links, while in
backward reservation, the control router maintains
the states for the incoming links. As discussed later,
this setting enables the router to have the information
needed for reserving virtual channels and updating the
switch states. A virtual channel, V , on link L, can be
in one of the following states:
• AVAIL: indicates that the virtual channel V on
link L is available and can be used to establish a
new connection,
• LOCK: indicates that V is locked by some request
in the process of establishing a connection,
• BUSY: indicates that V is being used by some
established connection to transmit data.
For a link, L, the set of virtual channels that are
in the AVAIL state is denoted as Avail(L). When a
virtual channel, V , is not in Avail(L), an additional
field, CID, is maintained to identify the connection
request locking V , if V is in the LOCK state, or the
connection using V , if V is in the BUSY state.
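For concreteness, the per-link bookkeeping described above could be organized as in the following Python sketch; the names (State, Channel, LinkTable, avail) are our own illustrative assumptions, not part of the protocol specification.

from dataclasses import dataclass, field
from enum import Enum

class State(Enum):
    AVAIL = 0   # free to be used by a new connection
    LOCK = 1    # locked by a request that is establishing a connection
    BUSY = 2    # carrying data for an established connection

@dataclass
class Channel:
    state: State = State.AVAIL
    cid: tuple = None   # CID field: identifies the locking request or the connection

@dataclass
class LinkTable:
    channels: list = field(default_factory=list)

    def avail(self):
        # Avail(L): the set of indices of virtual channels in the AVAIL state
        return {i for i, c in enumerate(self.channels) if c.state is State.AVAIL}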
3 Forward Reservation Schemes
In the connection establishment protocols, each
connection request is assigned a unique identifier, id,
which consists of the identifier of the source node and
a serial number issued by that node. Each control
message related to the establishment of a connection
carries its id, which becomes the identifier of the con-
nection, when successfully established. It is this id
that is maintained in the CID field of locked or busy
virtual channels on links. Four types of packets are
used in the forward reservation protocols to establish
a connection.
• Reservation packets (RES), used to reserve virtual
channels. In addition to the connection id, a RES
packet contains a bit vector, cset, of size equal to the
number of virtual channels in each link. The bit vector
cset is used to keep track of the set of virtual channels
that can be used to satisfy the connection request
carried by RES. These virtual channels are locked at
intermediate nodes while the RES message progresses
towards the destination node. The switch states are
also set to connect the locked channels on the input
and output links.
• Acknowledgment packets (ACK), used to inform
source nodes of the success of connection requests.
An ACK packet contains a channel field which indicates
the virtual channel selected for the connection.
As an ACK packet travels from the destination to
the source, it changes the state of the virtual channel
selected for the connection to BUSY, and unlocks
(changes from LOCK to AVAIL) all other virtual
channels that were locked by the corresponding RES
packet.
• Fail or negative ack packets (FAIL/NACK), used
to inform source nodes of the failure of connection
requests. While traveling back to the source node,
a FAIL/NACK packet unlocks all virtual channels
that were locked by the corresponding RES packet.
• Release packets (REL), used to release connections.
A REL packet traveling from a source to a destination
changes the state of the virtual channel reserved for
that connection from BUSY to AVAIL.
The protocols require that control packets from a
destination, d, to a source, s, follow the same paths
(in opposite directions) as packets from s to d. We will
denote the fields of a packet by packet:field. For ex-
ample, RES:id denotes the id field of the RES packet.
The forward reservation with dropping works as
follows. When the source node wishes to establish
a connection, it composes a RES packet with
RES:cset set to the virtual channels that the node
may use. This message is then routed to the destina-
tion. When an intermediate node receives the RES
packet, it determines the next outgoing link, L, on
the path to the destination, and updates RES:cset
to RES:cset ∩ Avail(L). If the resulting RES:cset
is empty, the connection cannot be established, and
a FAIL/NACK message is sent back to the source
node. The source node will retransmit the request after
some period of time. This process of failed reservation
is shown in Figure 3(a). Note that if Avail(L) is
represented by a bit-vector, then RES:cset ∩ Avail(L)
is a bit-wise "AND" operation.
Figure 3: Control messages in forward reservation.
If the resulting RES:cset is not empty, the router
reserves all the virtual channels in RES:cset on link
L by changing their states to LOCK and updating
Avail(L). The router will then set the switch state to
connect the virtual channels in the resulting RES:cset
of the corresponding incoming and outgoing links.
Maintaining the states of outgoing links is sufficient for
these two tasks. The RES message is then forwarded
to the next node on the path to the destination. This
way, as RES approaches the destination, the path is
reserved incrementally. Once RES reaches the destination
with a non-empty RES:cset, the destination
selects from RES:cset a virtual channel to be used for
the connection and informs the source node that the
channel is selected by sending an ACK message with
ACK:channel set to the selected virtual channel. The
source can start sending data once it receives the ACK
packet. After all data is sent, the source node sends
a REL packet to tear down the connection. This successful
reservation process is shown in Figure 3 (b).
Note that although in the algorithm described above,
the switches are set during the processing of the RES
packet, they can instead be set during the processing
of the ACK packet.
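As a rough illustration of the steps just described, RES processing at an intermediate node might look as follows; this is a sketch under our own naming assumptions, reusing the LinkTable sketch above, with csets represented as bit vectors encoded in integers.

def handle_res(res_id, cset, link):
    # RES.cset <- RES.cset AND Avail(L), as a bit-wise operation
    avail_mask = 0
    for i in link.avail():
        avail_mask |= 1 << i
    cset &= avail_mask
    if cset == 0:
        return 0   # caller sends a FAIL/NACK packet back toward the source
    # Lock every surviving candidate channel and record the request id
    for i, channel in enumerate(link.channels):
        if (cset >> i) & 1:
            channel.state = State.LOCK
            channel.cid = res_id
    return cset    # caller sets the switch state and forwards RES

An ACK traveling back would then move the selected channel to BUSY and return the remaining locked channels to AVAIL, as described above.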
Holding: The protocol described above can be modified
to use the holding policy instead of the dropping
policy. Specifically, when an intermediate node determines
that the connection for a reservation cannot be
established, that is, when RES:cset ∩ Avail(L) is empty,
the node buffers the RES packet for a limited period
of time. If within this period, some virtual channels
in the original RES:cset become available, the RES
packet can then continue its journey. Otherwise, the
FAIL/NACK packet is sent back to the source.
Aggressiveness: The aggressiveness of the reservation
is reflected in the size of the virtual channel
set, RES:cset, initially chosen by the source node.
In the most aggressive scheme, the source node sets
RES:cset to {0, . . ., N − 1}, where N is the number of
virtual channels in the system. This ensures that the
reservation will be successful if there exists an available
virtual channel on the path. On the other hand,
the most conservative reservation assigns RES:cset to
include only a single virtual channel. In this case, the
reservation can be successful only when the virtual
channel chosen by the source node is available in all
the links on the path. Although the aggressive scheme
seems to have an advantage over the conservative scheme,
it results in overly locking the virtual channels in the
system. Thus, in heavily loaded networks, this is expected
to decrease the overall throughput. To obtain
optimal performance, the aggressiveness of the protocol
should be chosen appropriately between the most
aggressive and the most conservative extremes.
The retransmit time is another protocol parameter.
In traditional non-multiplexed networks, the retransmit
time is typically chosen randomly from a range
[0,MRT], where MRT denotes some maximum retransmit
time. In such systems, MRT must be set to a reasonably
large value to avoid live-lock. However, this
may increase the average message latency time and decrease
the throughput. In a multiplexed network, the
problem of live-lock only occurs in the most aggressive
scheme (non-multiplexed circuit switching networks
can be considered as having a multiplexing degree
of 1 and using aggressive reservation). For less aggressive
schemes, the live-lock problem can be avoided
by changing the virtual channels selected in RES:cset
when RES is retransmitted. Hence, for these schemes,
a small retransmit time can be used.
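A sketch of the retransmission logic follows; the uniform delay matches the description above, while next_cset is only one hypothetical way of changing the virtual channels selected in RES:cset between attempts (the text does not prescribe a particular rule).

import random

def retransmit_delay(mrt):
    # actual retransmit time chosen uniformly at random from [0, MRT - 1]
    return random.randrange(mrt)

def next_cset(prev_cset, n):
    # Shift the previous selection, wrapping modulo the multiplexing degree n,
    # so that a retransmitted RES tries different virtual channels
    return {(c + len(prev_cset)) % n for c in prev_cset}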
Figure 4: Control messages in backward reservation.
4 Backward Reservation Schemes
In the forward locking protocol, the initial decision
concerning the virtual channels to be locked for a connection
request is made in the source node without
any information about network usage. The backward
reservation scheme tries to overcome this handicap by
probing the network before making the decision. In
the backward reservation schemes, a forward message
is used to probe the availability of virtual channels.
After that, the locking of virtual channels is performed
by a backward message. The backward reservation
scheme uses six types of control packets, all of which
carry the connection id, in addition to other fields as
discussed next:
• Probe packets (PROB), that travel from sources
to destinations gathering information about virtual
channel usage without locking any virtual channel. A
PROB packet carries a bit vector, init, to represent
the set of virtual channels that are available to establish
the connection.
• Reservation packets (RES), similar to the RES
packets in the forward scheme, except that they travel
from destinations to sources, locking virtual channels
as they go through intermediate nodes, and setting
the states of the switches accordingly. A RES packet
contains a cset field.
• Acknowledgment packets (ACK), similar to ACK
packets in the forward scheme except that they travel
from sources to destinations. An ACK packet contains
a channel field.
• Fail packets (FAIL), to unlock the virtual channels
locked by the RES packets in cases of failures to establish
connections.
• Negative acknowledgment packets (NACK), used to
inform the source nodes of reservation failures.
• Release packets (REL), used to release connections
after the communication is completed.
Note that a FAIL/NACK message in the forward
scheme performs the functions of both a FAIL message
and a NACK message in the backward scheme.
The backward reservation with dropping works as
follows. When the source node wishes to establish
a connection, it composes a PROB message with
PROB:init set to contain all virtual channels in the
system. This message is then routed to the des-
tination. When an intermediate node receives the
PROB packet, it determines the next outgoing link,
L_f, on the forward path to the destination, and updates
PROB:init to PROB:init ∩ Avail(L_f). If the
resulting PROB:init is empty, the connection cannot
be established and a NACK packet is sent back to
the source node. The source node will try the reservation
again after a certain retransmit time. Figure 4(a)
shows this failed reservation case.
If the resulting PROB:init is not empty, the node
forwards PROB on L f
to the next node. This way, as
PROB approaches the destination, the virtual channels
available on the path are recorded in the init set.
Once PROB reaches the destination, the destination
forms a RES message with RES:cset equal to a selected
subset of PROB:init and sends this message
back to the source node. When an intermediate node
receives the RES packet, it determines the next link,
L_b, on the backward path to the source, and updates
RES:cset to RES:cset ∩ Avail(L_b). If the resulting
RES:cset is empty, the connection cannot be estab-
lished. In this case the node sends a NACK message
to the source node to inform it of the failure, and sends
a FAIL message to the destination to free the virtual
channels locked by RES. This process is shown in
Figure 4(b).
If the resulting RES:cset is not empty, the virtual
channels in RES:cset are locked, the switch is set accordingly
and RES is forwarded on L b to the next
node. When RES reaches the source with a non-empty
RES:cset, the source selects a virtual channel
from the RES:cset for the connection and sends an
ACK message to the destination with ACK:channel
set to the selected virtual channel. This ACK message
unlocks all the virtual channels locked by RES,
except the one in channel. The source node can start
sending data as soon as it sends the ACK message.
After all data is sent, the source node sends a REL
packet to tear down the connection. The process of
successful reservation is shown in Figure 4(c).
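The two passes of the backward scheme can be sketched in the same style as the forward case (again our own illustrative code, building on the helpers above): PROB only intersects availability, while the backward-traveling RES performs the locking.

def handle_prob(init, out_link):
    # PROB.init <- PROB.init AND Avail(L_f); nothing is locked on this pass
    mask = 0
    for i in out_link.avail():
        mask |= 1 << i
    return init & mask   # zero result => send NACK back to the source

def handle_backward_res(res_id, cset, in_link):
    # Same locking step as in forward reservation, but on the backward path
    mask = 0
    for i in in_link.avail():
        mask |= 1 << i
    cset &= mask
    if cset == 0:
        return 0   # send NACK to the source and FAIL toward the destination
    for i, channel in enumerate(in_link.channels):
        if (cset >> i) & 1:
            channel.state = State.LOCK
            channel.cid = res_id
    return cset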
Holding: Holding can be incorporated in the backward
reservation scheme as follows. In the protocol,
there are two cases that cause the reservation to fail.
The protocol may determine that the reservation fails
when processing the PROB packet. In this case, no
holding is necessary since no resources have yet been
locked. When the protocol determines that the reservation
fails during the processing of a RES packet,
a holding mechanism similar to the one used in the
forward reservation scheme may be applied.
Aggressiveness: The aggressiveness of the backward
reservation protocols is reflected in the initial size of
cset chosen by the destination node. The aggressive
approach sets RES:cset equal to PROB:init, while
the conservative approach sets RES:cset to contain a
single virtual channel from PROB:init. Note that if
a protocol supports only the conservative scheme, the
ACK messages may be omitted, and thus only five
types of messages are needed. As in the forward reservation
schemes, the retransmit time is a parameter in
the backward schemes.
5 Performance Evaluation
In the following discussion, we will use F to denote
the forward reservation, B to denote the backward
reservation, H for holding and D for dropping schemes.
For example, FH means the forward holding scheme.
We have implemented a network simulator with various
control mechanisms including FH, FD, BH and
BD. Although the simulator can simulate both WDM
and TDM torus networks, only the results for TDM
networks will be presented in this paper. The results
for WDM networks follow similar patterns. The simulation
uses the following parameters.
• initial cset size: This parameter determines the
initial size of cset in the reservation packet. For
FD and FH, the initial cset is chosen when the
source node composes the RES packet. Assuming
that N is the multiplexing degree in the system,
a RES:cset of size s is chosen by generating
a random number, m, in the range [0, N − 1],
and assigning RES:cset = {m mod N, (m + 1) mod N,
. . ., (m + s − 1) mod N}. In the backward
schemes, the initial cset is set when the
destination node composes the RES packet. A
RES:cset of size s is generated in the following
manner. If the available set, PROB:init, has
fewer available channels than s, PROB:init
is copied to RES:cset. Otherwise, the available
channels are represented in a linear array and the
method used in generating the cset in the forward
schemes is used (a sketch appears after this list).
• timeout value: This value determines how long a
reservation packet can be put in a waiting queue.
The dropping scheme can be considered as a holding
scheme with timeout time equal to 0.
• maximum retransmit time (MRT): This specifies
the period after which a node will retry a failed
reservation. As discussed earlier, this value is crucial
for avoiding live-lock in the most aggressive
schemes. The actual retransmit time is chosen
randomly between 0 and MRT − 1.
• system size: This specifies the size of the network.
All our simulations are done on torus topology.
• multiplexing degree: This specifies the number of
virtual channels supported by each link. In our
simulation, the multiplexing degree ranges from
1 to 32.
• message size: This directly affects the time that
a connection is kept before it is released. In our
simulations, fixed size messages are assumed.
• request generation rate at each node (r): This
specifies the traffic on the network. The connection
requests at each node are assumed to have
a Poisson inter-arrival distribution. When a request
is generated at a node, the destination of
the request is generated randomly. When a generated
request is blocked, it is put into a queue,
waiting to be re-transmitted.
• control packet processing and propagation time:
This specifies the speed of the control networks.
The control packet processing time is the time for
an intermediate node to process a control packet.
The propagation time is the time for a control
packet to be transferred from one node to the
next. We assume that all the control packets have
the same processing and propagation time.
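The initial cset generation described in the first parameter above can be sketched as follows (our reading of the description; sets of channel indices stand in for bit vectors).

import random

def forward_initial_cset(s, n):
    # Pick s consecutive channel indices, wrapping modulo the multiplexing degree n
    m = random.randrange(n)
    return {(m + k) % n for k in range(s)}

def backward_initial_cset(s, prob_init):
    # Destination-side choice: all of PROB.init if it has at most s channels,
    # otherwise s entries chosen by the same consecutive-index method
    if len(prob_init) <= s:
        return set(prob_init)
    ordered = sorted(prob_init)   # the "linear array" of available channels
    m = random.randrange(len(ordered))
    return {ordered[(m + k) % len(ordered)] for k in range(s)}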
We use the average latency and throughput to evaluate
the protocols. The latency is the period between
the time when a message is ready and the time when
the first packet of the message is sent. The throughput
is the number of messages received per time unit.
Under light traffic, the performance of the protocols
is measured by the average message latency, while under
heavy traffic, the throughput is used as the performance
metric. The simulation time is measured in
time slots, where a time slot is the time to transmit
an optical data packet between any two nodes in the
network. Note that in multiprocessing applications,
nodes are physically close to each other, and thus signal
propagation time is very small (1 foot per nsec)
compared to the length of a message. Finally, deterministic
XY-routing is assumed in the torus topology.
Figure 5 depicts the throughput and average latency
as a function of the request generation rate for
six protocols that use the dropping policy in a 16 × 16
torus. The multiplexing degree is taken to be 32, the
message size is assumed to be 8 packets, and the control
packet processing and propagation times are fixed.
For each of the forward and backward
schemes, three variations are considered with varying
aggressiveness: the conservative variation, in which
the initial cset size is 1; the most aggressive variation,
in which the initial cset size is equal to the multiplexing
degree; and an optimal variation, in which the initial
cset size is chosen (by repeated trials) to maximize
the throughput. The letters C, A and O are used to
denote these three variations, respectively. For exam-
ple, FDO means the forward dropping scheme with
optimal cset size. Note that the use of the optimal
cset size reduces the delay in addition to increasing
the throughput. Note also that the network saturates
when the generation rate is between 0.006 and 0.018,
depending on the protocol used.
Figure 5 also reveals that, when the request generation
rate, r, is small, for example r = 0.003, the network
is under light traffic and all the protocols achieve
the same throughput, which is equal to r times the
number of processors. In this case, the performance
of the network should be measured by the average la-
tency. In the rest of the performance study, we will
use the maximum throughput (at saturation) and the
average latency (at to measure the performance
of the protocols. We perform two sets of
experiments. The first set evaluates the effect of the
protocol parameters on the network throughput and
delay, and the second set evaluates the impact of system
parameters on performance.
Figure 5: Comparison of reservations with dropping.
5.1 Effect of protocol parameters
In this set of experiments, we study the effect of the
initial cset size, the holding time and the retransmit
time on the performance of the protocols. The system
parameters for this set of experiments are chosen as
follows: a 16 × 16 torus, a message size of 8 packets, and
fixed control packet processing and propagation times.
Figure 6 shows the effect of the initial cset size on
the forward holding scheme with different multiplexing
degrees, namely 1, 2, 4, 8, 16 and 32. The holding
time is fixed, and the MRT is 5
time units for all the protocols with initial cset size
less than the multiplexing degree, and 60 time units
for the most aggressive forward scheme. A large MRT
is used in the most aggressive forward scheme because
we observed that a small MRT often leads to live-lock
in that scheme. We show only the protocols with the
holding policy since using the dropping policy leads to
similar patterns. The effect of holding/dropping will
be considered in a later figure. Figure 7 shows the
results for the backward schemes with dropping.
Figure 6: Effect of the cset size on forward schemes.
From Figure 6, we can see that when the multiplexing
degree is larger than 8, neither the most
conservative protocol nor the most aggressive protocol
achieves the best throughput. Figure 6 shows
that these two extreme protocols do not achieve the
smallest latency either. The same observation applies
to the backward schemes in Figure 7. The effect of
choosing the optimal initial cset is significant on both
throughput and delay. That effect, however, is more
significant in the forward scheme than in the backward
scheme. For example, with a large multiplexing degree,
choosing a non-optimal cset size may reduce the
throughput by 50% in the forward scheme and only by
25% in the backward scheme. In general, the optimal
initial cset size is hard to find. A rule of thumb arrived
at experimentally to approximate the optimal cset size
is to use 1/3 and 1/10 of the multiplexing degree for
forward schemes and backward schemes, respectively.
Figure 8 shows the effect of the holding time on the
performance of the protocols for a multiplexing degree
of 32. As shown in Figure 8, the holding time has
little effect on the maximum throughput. It slightly
increases the performance for the FA (forward aggres-
sive) and BA (backward aggressive) schemes. As for
the average latency at light working load, the holding
time also has little effect except for the FA scheme,
where the latency time decreases by about 20% when
the holding time at each intermediate node is increased
from 0. Since holding requires extra
hardware support compared to dropping, we conclude
that holding is not cost-effective for the reservation
protocols. In the rest of the paper, we will only consider
protocols with dropping policies.
Figure 7: Effect of the cset size on backward schemes.
Figure 8: Effect of holding time.
In other experiments, we have studied the effect of
the maximum retransmit time (MRT) on the perfor-
mance. We have found that increasing MRT results
in a slight performance degradation in all the schemes
except FA, in which the performance improves with
the MRT. This confirms that the MRT value is important
to avoid live-lock in the network when aggressive
reservation is used. In other schemes this parameter is
not important, because when retransmitting a failed
request, virtual channels different than the ones that
have been tried may be included in cset. This result
indicates another drawback of the forward aggressive
schemes: in order to avoid live-lock, the MRT must be
a reasonably large value, which decreases the overall
performance.
The results of the above set of experiments may be
summarized as follows:
• With proper protocols, multiplexing results in
higher maximum throughput. Multiplexed networks
are significantly more efficient than non-multiplexed
networks.
• Neither the most aggressive nor the most conservative
reservation achieves optimal performance.
However, the performance of the forward
schemes is more sensitive to the initial cset size
than the performance of the backward schemes.
• The value of the holding time in the holding
schemes does not have significant impact on the
performance. In general, however, dropping is
more efficient than holding.
• The retransmit time has little impact on all the
schemes except the FA scheme.
In the next section, we will only consider dropping
schemes with MRT equal to 5 time units for all
schemes except FA, whose MRT is set to 60.
5.2 Effect of other system parameters
This set of experiments focuses on studying the performance
of the protocols under different multiplexing
degrees, system sizes, message sizes and control network
speeds. Only one parameter is changed in each
experiment, with the other parameters set to the following
default values (unless stated otherwise): a 16 × 16
network, a multiplexing degree of 32, a message size of 8
packets, and fixed control packet processing and propagation times.
Figure 9 shows the performance of the protocols for
different multiplexing degrees. When the multiplexing
degree is small, BO and FO have the same maximum
bandwidth as BC and FC, respectively. When the
multiplexing degree is large, BO and FO offer better
throughput. In addition, for all multiplexing degrees,
BO is the best among all the schemes. As for the average
latency, both FA and BA have significantly larger
latency than all other schemes. Also, FO and BO have
the smallest latencies. We can see from this experiment
that the backward schemes always provide the
same or better performance (both maximum throughput
and latency) than their forward reservation counterparts
for all multiplexing degrees considered.
Figure 9: Effect of the multiplexing degree.
Figure 10: Effect of the network size.
Figure 11: Effect of the message size.
Figure 10 shows the effect of the network size on
the performance of the protocols. We can see from
the figure that all the protocols, except the aggressive
ones, scale nicely with the network size. This indicates
that the aggressive protocols cannot take advantage of
the spatial diversity of the communication. This is a
result of excessive reservation of channels. When the
network size is small, there is little difference in the
performance of the protocols. When the network size
is larger, the backward schemes show their superiority.
Figure 11 shows the effect of the message size on
the protocols. The throughput is normalized to reflect
the number of packets that pass through the net-
work, rather than the number of messages. When messages
are sufficiently large, the signaling overhead in
the protocols is small and all protocols have almost
the same performance. However, when the message
size is small, the BO scheme achieves higher throughput
than the other schemes. This indicates that BO
incurs less overhead in the path reservation than the
other schemes.
Figure 12: Effect of the speed of the control network.
Figure 12 shows the effect of the control network
speed on performance. The multiplexing degree in
this experiment is 32. All the schemes are affected by
the control network speed; the most aggressive schemes in
both forward and backward reservations, however, are
more sensitive to the control network speed.
is important to have a reasonably fast control network
when these reservation protocols are used.
The results of the above set of experiments may be
summarized as follows:
• The performance of FA is significantly worse than
other protocols. Moreover, this protocol cannot
take advantage of both larger multiplexing degree
and larger network size.
• The backward reservation schemes provide better
performance than the forward reservation
schemes for all multiplexing degrees.
• The backward schemes provide better performance
when the message size is small and when
the network size is large. When the message size
is large or the network size is small, all the protocols
have similar performance.
• The speed of the control network affects the performance
of the protocols greatly.
6 Concluding Remarks
In this paper, we have described various protocols
for virtual path reservation in directly connected,
multiplexed, all-optical networks. The protocols are
classified into two categories: forward reservation and
backward reservation.
Extensive experiments were carried out to compare
these two classes of protocols in torus networks.
We found the following results about the protocols.
First, the initial cset size largely affects the perfor-
mance. For large multiplexing degree, the optimal
cset size generally lies between the two obvious extremes
of locking one channel and locking all the chan-
nels. Choosing the optimal cset size can improve
the performance by about 100% in the forward reservation
schemes and 25% in the backward schemes.
Second, the holding mechanism, which requires additional
hardware, does not improve the performance
of the protocols in a tangible way, and thus is not
cost-effective. Third, although the retransmit time is
an important factor for non-multiplexed networks, it
does not affect the performance of a multiplexed network
except when the forward aggressive scheme is
used.
We also studied the effect of the system parameters
on the performance of the protocols. We found
that for large message sizes and fast control networks,
the control overhead is small compared to the data
transmission time, and thus all the protocols exhibit
the same performance. When the control overhead is
significant, the backward schemes always offer better
performance than their forward counterparts. Irrespective
of the control protocol, the results show that
multiplexing the network always increases its through-
put, and up to a certain multiplexing degree, always
decreases the average message delay.
There are two main advantages for multiplexing optical
networks. First, multiplexing increases the number
of connections that can be simultaneously established
in the network, thus increasing the chance of
successfully establishing a connection. This reduces
the traffic in the control network, which in turn reduces
the control overhead. The second advantage of
multiplexing optical networks is to bridge the gap between
the large bandwidth of optical transmission and
the low data generation rate at each node, especially
if transmitted data is to be fetched from memory. In
other words, if data cannot be fetched from memory
fast enough to match the optical transmission band-
width, then dedicating an optical path to one connection
will waste communication bandwidth. In such
cases, multiplexing allows the large optical bandwidth
to be shared among multiple connections. In the simulations
presented in this paper, we did not consider
the effect of such a mismatch between memory speed
and optical bandwidth. This effect is being currently
studied and will be presented in another forum.
--R
"Media Access Techniques: the Evolution towards Terabit/s LANs and MANs."
"An Overview of Lightwave Packet Network."
"Models of Blocking Probability in All-optical Networks with and without Wavelength Changers."
"The Manhattan Street Network: A High Performance, Highly Reliable Metropolitan Area Network,"
"Dense wavelength division multiplexing networks: Principles and applications,"
"A Time-Wavelength assignment algorithm for WDM Start Networks"
"Photonic Switching Using Directional Couplers"
"Time-Multiplexing Optical Interconnection Networks; Why Does it Pay Off?"
"The iPSC/2 direct-connect communications technology."
"Optimal Routing and Wavelength Assignment in All-Optical Networks."
"Reconfiguration with Time Division Multiplexed MIN's for Multiprocessor Communications."
"Reducing Communication Latency with Path Multiplexing in Optically Interconnected Multiprocessor Systems"
"Wavelength Reservation Under Distributed Control."
"Connectivity and Sparse Wavelength Conversion in Wavelength-Routing Networks."
--TR
--CTR
Xin Yuan , Rami Melhem , Rajiv Gupta, Algorithms for Supporting Compiled Communication, IEEE Transactions on Parallel and Distributed Systems, v.14 n.2, p.107-118, February
Roger D. Chamberlain , Mark A. Franklin , Ch'ng Shi Baw, Gemini: An Optical Interconnection Network for Parallel Processing, IEEE Transactions on Parallel and Distributed Systems, v.13 n.10, p.1038-1055, October 2002 | routing protocols;distributed control;mesh-like networks;optical interconnection networks;path reservation;wavelength-division multiplexing;time-division multiplexing |
330512 | How Useful Is Old Information?. | We consider the problem of load balancing in dynamic distributed systems in cases where new incoming tasks can make use of old information. For example, consider a multiprocessor system where incoming tasks with exponentially distributed service requirements arrive as a Poisson process, the tasks must choose a processor for service, and a task knows when making this choice the processor queue lengths from T seconds ago. What is a good strategy for choosing a processor in order for tasks to minimize their expected time in the system? Such models can also be used to describe settings where there is a transfer delay between the time a task enters a system and the time it reaches a processor for service. Our models are based on considering the behavior of limiting systems where the number of processors goes to infinity. The limiting systems can be shown to accurately describe the behavior of sufficiently large systems and simulations demonstrate that they are reasonably accurate even for systems with a small number of processors. Our studies of specific models demonstrate the importance of using randomness to break symmetry in these systems and yield important rules of thumb for system design. The most significant result is that only small amounts of queue length information can be extremely useful in these settings; for example, having incoming tasks choose the least loaded of two randomly chosen processors is extremely effective over a large range of possible system parameters. In contrast, using global information can actually degrade performance unless used carefully; for example, unlike most settings where the load information is current, having tasks go to the apparently least loaded server can significantly hurt performance. | Introduction
Distributed computing systems, such as networks of workstations or mirrored sites
on the World Wide Web, face the problem of using their resources effectively. If some
hosts lie idle while others are heavily loaded, system performance can fall significantly.
To prevent this, load balancing is used to distribute the workload, improving performance
measures such as the expected time a task spends in the system. Although
determining an effective load balancing strategy depends strongly on the details of the
underlying system, general models from both queueing theory and computer science
often provide valuable insight and general rules of thumb.
In this paper, we develop analytical models for the realistic setting where old load
information is available. For example, suppose we have a system of n servers, and
incoming tasks must choose a server and wait for service. If the incoming tasks know
the current number of tasks already queued at each server, it is often best for the task
to go to the server with the shortest queue [22]. In many actual systems, however,
it is unrealistic to assume that tasks will have access to up to date load information;
global load information may be updated only periodically, or the time delay for a
task to move to a server may be long enough that the load information is out of date
by the time the task arrives. In this case, it is not clear what the best load balancing
strategy is.
Our models yield surprising results. Unlike similar systems in which up to date
information is available, the strategy of going to the shortest queue can lead to extremely
bad behavior when load information is out of date; however, the strategy
of going to the shortest of two randomly chosen queues performs well under a large
range of system parameters. This result suggests that systems which attempt to exploit
global information to balance load too aggressively may suffer in performance,
either by misusing it or by adding significant complexity.
1.1 Related Previous Work
The problem of how to use old or inexact information is often neglected in theoretical
work, even though balancing workload from distributed clients based on incomplete or
possibly out of date server load information may be an increasingly common system
requirement. A recent work by Awerbuch, Azar, Fiat, and Leighton [2] covers a
similar theme, although their models are substantially different from ours.
The idea of each task choosing from a small number of processors in order to
balance the load has been studied before, both in theoretical and practical contexts. In
many models, using just two choices per task can lead to an exponential improvement
over one choice in the maximum load on a processor. In the static setting, this
improvement appears to have first been noted by Karp, Luby, and Meyer auf der
Heide [10]. A more complete analysis was given by Azar, Broder, Karlin, and Upfal
[3]. In the dynamic setting, this work was extended to a queueing theoretic model in
[15, 16]; similar results were independently reported in [26].
Other similar previous work includes that of Towsley and Mirchandaney [21] and
that of Mirchandaney, Towsley, and Stankovic [12, 13]. These authors examine how
some simple load sharing policies are affected by communication delay, extending
a similar study of load balancing policies by Eager, Lazowska, and Zahorjan [5, 6].
Their analyses are based on Markov chains associated with the load sharing policies
they propose and simulations.
Our work is most related to the queueing models of the above work, although it
expands on this work in several directions. We apply a fluid-limit approach, in which
we develop a deterministic model corresponding to the limiting system as n ! 1.
We often call this system the limiting system. This approach has successfully been
applied previously to study load balancing problems in [1, 15, 16, 17, 19, 26] (see also
[1] for more references, or [18, 25] for the use of this approach in different settings),
and it can be seen as a generalization of the previous Markov chain analysis. Using
this technique, we examine several new models of load balancing in the presence of old
information. In conjunction with simulations, our models demonstrate several basic
but powerful rules of thumb for load balancing systems, most notably the effectiveness
of using just two choices.
The remainder of this paper is organized as follows: in Section 2, we describe
a general queueing model for the problems we consider. In Sections 3, 4, and 5,
we consider different models of old information. For each such model, we present
a corresponding limiting system, and using the limiting systems and simulations we
determine important behavioral properties of these models. In Section 6, we briefly
consider the question of cheating tasks, a concept that ties our models to natural but
challenging game theoretic questions. We conclude with a section on open problems
and further directions for research.
2 The Bulletin Board Model
Our work will focus on the following natural dynamic model: tasks arrive as a Poisson
stream of rate λn, where λ < 1, at a collection of n servers. Each task chooses one of
the servers for service and joins that server's queue; we shall specify the policy used
to make this choice subsequently. Tasks are served according to the First In First
Out protocol, and the service time for a task is exponentially distributed
with mean 1. We are interested in the expected time a task spends in the system in
equilibrium, which is a natural measure of system performance, and more generally
in the distribution of the time a customer spends in the queue. Note that the average
arrival rate per queue is - ! 1, and that the average service rate is assuming
the tasks choose servers according to a reasonable strategy, we expect the system to
be stable, in the sense that the expected number of tasks per queue remains finite in
equilibrium. In particular, if each task chooses a server independently and uniformly
at random, then each server acts as an M/M/1 queue (Poisson arrivals, exponentially
distributed service times) and is hence clearly stable. We will examine the behavior
of this system under a variety of methods that tasks may use to choose their server.
We will allow the tasks choice of server to be determined by load information from
the servers. It will be convenient if we picture the load information as being located at
a bulletin board. We strongly emphasize that the bulletin board is a purely theoretical
construct used to help us describe various possible load balancing strategies and need
not exist in reality. The load information contained in the bulletin board need not
correspond exactly to the actual current loads; the information may be erroneous or
approximate. Here, we focus on the problem of what to do when the bulletin board
contains old information (where what we mean by old information will be specified
in future sections).
We shall focus on distributed systems, by which we mean that the tasks cannot
directly communicate in order to coordinate where they go for service. The decisions
made by the tasks are thus based only on whatever load information they obtain
and their entry time. Although our modeling technique can be used for a large class
of strategies, in this paper we shall concentrate on the following natural, intuitive
strategies:
• Choose a server independently and uniformly at random.
• Choose d servers independently and uniformly at random, check their load information
from the bulletin board, and go to the one with the smallest load. 1
• Check all load information from the bulletin board, and go to the server with
the smallest load.
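For concreteness, the three strategies can be written as follows; this is a sketch of our own in which board[i] is the (possibly stale) posted queue length of server i, with ties broken at random as in the model.

import random

def choose_random(board):
    return random.randrange(len(board))

def choose_best_of_d(board, d):
    candidates = random.sample(range(len(board)), d)   # d choices, without replacement
    best = min(board[i] for i in candidates)
    return random.choice([i for i in candidates if board[i] == best])

def choose_shortest(board):
    best = min(board)
    return random.choice([i for i in range(len(board)) if board[i] == best])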
The strategy of choosing a random server has several advantages: it is easy to
implement, it has low overhead, it works naturally in a distributed setting, and it is
known that the expected lengths of the queues remain finite over time. However, the
strategy of choosing a small number of servers and queueing at the least loaded has
been shown to perform significantly better in the case where the load information
is up to date [5, 15, 16, 26]. It has also proved effective in other similar models
[3, 10, 16]. Moreover, the strategy also appears to be practical and have a low
1 In this and other strategies, we assume that ties are broken randomly. Also, the d choices are
made without replacement in our simulations; in the limiting system setting, the difference between
choosing with and without replacement is negligible.
overhead in distributed settings, where global information may not be available, but
polling a small number of processors may be possible. Going to the server with the
smallest load appears natural in more centralized systems where global information
is maintained. Indeed, going to the shortest queue has been shown to be optimal in a
variety of situations in a series of papers, starting for example with [22, 24]. Hence it
makes an excellent point of comparison in this setting. Other simple schemes that we
do not examine here but can easily be studied with this model include threshold based
schemes [5, 17], where a second choice is made only if the first appears unsatisfactory.
We develop analytical results for the limiting case as n ! 1, for which the
system can be accurately modeled by an limiting system. The limiting system consists
of a set of differential equations, which we shall describe below, that describe in
some sense the expected behavior of the system. This corresponds to the exact
behavior of the system as n ! 1. More information on this approach can be found
in [7, 11, 15, 16, 17, 19, 25, 26]; we emphasize that here we will not detour into a
theoretical justification for this limiting approach, and instead refer the reader to
these sources for more information. (We note, however, that this approach works
only because the systems for finite n have an appropriate form as a Markov chain;
indeed, we initially require exponential service times and Poisson arrivals to ensure
this form.) Previous experience suggests that using the limiting system to estimate
performance metrics such as the expected time in the system proves accurate, even
for relatively small values of n [5, 15, 16, 17]. We shall verify this for the models we
consider by comparing our analytical results with simulations.
3 Periodic Updates
The previous section has described possible ways that the bulletin board can be used.
We now turn our attention to how a bulletin a board can be updated. Perhaps the
most obvious model is one where the information is updated at periodic intervals.
In a client-server model, this could correspond to an occasional broadcast of load
information from all the servers to all the clients. Because such a broadcast is likely
to be expensive (for example, in terms of communication resources), it may only be
practical to do such a broadcast at infrequent intervals. Alternatively, in a system
without such centralization, servers may occasionally store load information in a
readable location, in which case tasks may be able to obtain old load information
from a small set of servers quickly with low overhead.
We therefore suggest the periodic update model, in which the bulletin board is
updated with accurate information every T seconds. Without loss of generality, we
shall take the update times to be 0, T, 2T, . . . The time between updates shall be
called a phase, and phase i will be the phase that ends at time iT . The time that the
last phase began will be denoted by T_t, where t is the current time.
The limiting system we consider will utilize a two-dimensional family of variables
to represent the state space. We let P_{i,j}(t) be the fraction of queues at time t that
have true load j but have load i posted on the bulletin board. We let q_i(t) be the rate
of arrivals at a queue of size i at time t; note that, for time-independent strategies,
the rates q_i(t) depend only on the load information at the bulletin boards and the
strategy used by the tasks, and hence q_i(t) is the same as q_i(T_t). In this case, the rates q_i
change only when the bulletin board is updated.
We first consider the behavior of the system during a phase, or at all times t ≠ kT
for integers k ≥ 0. Consider a server showing i customers on the bulletin board, but
having j customers: we say such a server is in state (i, j). Let j ≥ 1. What is the
rate at which a server leaves state (i, j)? A server leaves this state when a customer
departs, which happens at rate 1, or a customer arrives, which happens at rate q_i(t).
Similarly, we may ask the rate at which customers enter such a state. This can
happen if a customer arrives at a server with load i posted on the bulletin board but
having j − 1 customers, or a customer departs from a server with load i posted on
the bulletin board but having j + 1 customers. This description naturally leads us to
model the behavior of the system by the following set of differential equations:

dP_{i,0}(t)/dt = P_{i,1}(t) − q_i(t) P_{i,0}(t),

dP_{i,j}(t)/dt = P_{i,j+1}(t) − P_{i,j}(t) + q_i(t) (P_{i,j−1}(t) − P_{i,j}(t)), for j ≥ 1.
These equations simply measure the rate at which servers enter and leave each state.
(Note that the case j = 0 is a special case.) While the queueing process is random,
however, these differential equations are deterministic, yielding a fixed trajectory
once the initial conditions are given. In fact, these equations describe the limiting
behavior of the process as n !1, as can be proven with standard (albeit complex)
methods [7, 11, 16, 17, 19, 25, 26]. Here we take these equations as the appropriate
limiting system and focus on using the differential equations to study load balancing
strategies.
For integers k ≥ 0, at times t = kT there is a state jump as the bulletin board is
updated. At such t, necessarily P_{i,j}(t) = 0 for i ≠ j, as the load of all servers is
correctly portrayed by the bulletin board. If we let P_{i,j}(t−) denote the limit of P_{i,j}(x)
as x increases to t, so that the P_{i,j}(t−) describe the state just before an update, then
P_{j,j}(kT) = Σ_i P_{i,j}(kT−), and P_{i,j}(kT) = 0 for i ≠ j.
3.1 The Arrival Rates
We consider what the proper form of the rates q_i is for the strategies we examine.
It will be convenient to define the load variables b_i(t) to be the fraction of servers with
load i posted on the bulletin board; that is, b_i(t) = Σ_j P_{i,j}(t).
In the case where a task chooses d servers randomly, and goes to the one with the
smallest load on the bulletin board, we have the arrival rate

q_i(t) = λ [ (Σ_{j≥i} b_j(T_t))^d − (Σ_{j>i} b_j(T_t))^d ] / b_i(T_t).

The numerator is just the probability that the shortest posted queue length of the d
choices on the bulletin board is size i. To get the arrival rate per queue, we scale by
λ, the arrival rate per queue, and divide by b_i(T_t), the total fraction of queues showing i on the
board. In the case where d = 1, the above expression reduces to q_i(t) = λ: all
servers have the same arrival rate, as one would expect.
To model when tasks choose the shortest queue on the bulletin board, we develop
an interesting approximation. We assume that there always exist servers posting
load 0 on the bulletin board, and we use a model where tasks go to a random server
with posted load 0. As long as we start with some servers showing 0 on the bulletin
board in the limiting system (for instance, if we start with an empty system), then we
will always have servers showing load 0, and hence this strategy is valid. In the case
where the number of queues is finite, of course, at some time all servers will show load
at least one on the billboard; however, for a large enough number of servers the time
between such events is large, and hence this model will be a good approximation. So
for the shortest queue policy, we set the rate q_0(t) = λ / b_0(T_t),
and all other rates q_i(t) are 0.
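To make the limiting system concrete, the following sketch integrates the differential equations with a simple Euler step for the d-choice strategy, using the rates derived above; the truncation bound MAX, the step size dt, and the particular parameter values are artifacts of the sketch, not of the model.

import numpy as np

MAX, lam, d, T, dt = 40, 0.9, 2, 10.0, 0.01

P = np.zeros((MAX, MAX))
P[0, 0] = 1.0                                # start with an empty system
for step in range(int(50 * T / dt)):         # run for 50 phases
    if step % int(T / dt) == 0:              # board update: jump to P_{j,j}
        P = np.diag(P.sum(axis=0))
        b = P.sum(axis=1)                    # b_i: fraction posting load i
        tail = np.append(np.cumsum(b[::-1])[::-1], 0.0)   # tail[i] = sum_{j>=i} b_j
        safe = np.where(b > 0, b, 1.0)
        q = np.where(b > 0, lam * (tail[:-1]**d - tail[1:]**d) / safe, 0.0)
    dP = np.zeros_like(P)
    dP[:, :-1] += P[:, 1:]                   # departure moves (i, j+1) to (i, j)
    dP[:, 1:] -= P[:, 1:]                    # departures out of (i, j), j >= 1
    dP[:, 1:] += q[:, None] * P[:, :-1]      # arrival moves (i, j-1) to (i, j)
    dP -= q[:, None] * P                     # arrivals out of (i, j)
    P += dt * dP

print("E[time in system] ~", P.sum(axis=0) @ np.arange(MAX) / lam)   # Little's law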
3.2 The Fixed Cycle
In a standard deterministic dynamical system, a natural hope is that the system
converges to a fixed point, which is a state at which the system remains forever once
it gets there; that is, a fixed point would correspond to a point where
dP_{i,j}(t)/dt = 0 for all i and j. The above system clearly cannot reach a fixed point, since the updating of
the bulletin board at times kT causes a jump in the state; specifically, all P_{i,j} with
i ≠ j become 0. It is, however, possible to find a fixed cycle for the system. We find
a point P* = (P*_{i,j}) such that if P_{i,j}(k_0 T) = P*_{i,j} for some k_0, then
P_{i,j}(kT) = P*_{i,j} for all k ≥ k_0. In other words, we find a state such that if the limiting system begins
a phase in that state, then it ends the phase in the same state, and hence repeats
the same cycle for every subsequent phase. (Note that it also may be possible for the
process to cycle only after multiple phases, instead of just a single phase. We have
not seen this happen in practice, and we conjecture that it is not possible for this
system.)
To find a fixed cycle, we note that this is equivalent to finding a vector ~π = (π_i)
such that if π_i is the fraction of queues with load i at the beginning of the phase, the
same distribution occurs at the end of a phase. Given an initial ~π, the arrival rate
at a queue with i tasks from time 0 to T can be determined. By our assumptions of
Poisson arrivals and exponential service times, during each phase each server acts as
an independent M/M/1 queue that runs for T seconds, with some initial number of
tasks awaiting service. We use this fact to find the π_i.
Formulae for the distribution of the number of tasks at time T for an M/M/1
queue with arrival rate λ and i tasks initially have long been known (for example,
see [4, pp. 60-64]); the probability of finishing with j tasks after T seconds, which we
denote by m_{i,j}, is

m_{i,j} = e^{−(1+λ)T} [ λ^{(j−i)/2} B_{j−i}(2T√λ) + λ^{(j−i−1)/2} B_{j+i+1}(2T√λ) + (1 − λ) λ^j Σ_{k≥j+i+2} λ^{−k/2} B_k(2T√λ) ],

where here B_z(x) is the modified Bessel function of the first kind. If ~π gives the
distribution at the beginning and end of a phase, then the π_i must satisfy
π_j = Σ_i π_i m_{i,j}, which can be used to determine the π_i.
It seems unlikely that we can use the above characterization to determine a simple
closed form for the state at the beginning of the phase of the fixed cycle in
terms of T. In practice we find the fixed cycle easily by running a truncated version
of the system of differential equations (bounding the maximum values of i and j from
above) until reaching a point where the change in the state between two consecutive
updates is sufficiently small. This procedure works under the assumption that the
trajectory always converges to the fixed cycle rapidly. (We discuss this more in the
next section.) Alternatively, from a starting state we can apply the above formulae
for m i;j to successively find the states at the beginning of each phase, until we find
two consecutive states in which the difference is sufficiently small. Simulating the
differential equations has the advantage of allowing us to see the behavior of the
system over time, as well as to compute system measurements such as the expected
time a task spends in the system.
3.3 Convergence Issues
Given that we have found a fixed cycle for the relevant limiting system, important
questions remain regarding convergence. One question stems from the approximation
of a finite system with the corresponding limiting system: how good is this approxi-
mation? The second question is whether the trajectory of the limiting system always
converges to its fixed cycle, and if so, how quickly?
For the first question, we note that the standard methods referred to previously
(based on work by Kurtz, [7, 11, 19]) provide only very weak bounds on the convergence
rate between limiting and finite systems. By focusing on a specific problem,
proving tighter bounds may be possible (see, for example, the discussion in [25]).
In practice, however, as we shall see in Section 3.4, the limiting system approach
proves extremely accurate even for small systems, and hence it is a useful technique
for gauging system behavior.
For the second question, we have found in our experiments that the system does
always converge to its fixed cycle, although we have no proof of this. The situation
is generally easier when the trajectory converges to a fixed point, instead of a fixed
cycle, as we shall see. (See also [16].) Proving this convergence hence remains an
interesting open theoretical question.
3.4 Simulations
We present some simulation results, with two main purposes in mind: first, we wish
to show that the limiting system approach does in fact yield a good approximation for
the finite case; second, we wish to gain insight into the problem of load balancing using
old information. We choose to emphasize the second goal. As such, we plot data
from simulations of the actual queueing process (except in the case where one server
is chosen at random; in this case we apply standard formulae from queueing theory).
We shall note the deviation of the values obtained from the limiting system and these
simulations where appropriate. For most of our simulations, we focus on the expected
time in the system, as this appears the most interesting system measure. Because
our limiting approach provides a full description of the system state, however, it can
be used to predict other quantities of interest as well.
This methodology may raise the question of why the limiting system models are
useful at all. There are several reasons: first, simulating the differential equations is
often much faster than simulating the corresponding queueing system (we shall say
more on this later). Second, the limiting systems provide a theoretical framework
for examining these problems that can lead to formal theorems. Third, the limiting
system provides good insight into and accurate approximations of how the system be-
haves, independent of the number of servers. This information should prove extremely
useful in practice.
Figure 1: Strategy comparison at n = 100 queues (λ = 0.5, μ = 1.0).
In Figures 1 and 2, the results for various strategies are given for arrival rates
λ = 0.5 and λ = 0.9 with n = 100 servers. Simulations were performed for 50,000
time steps, with the first 5,000 steps ignored so that dependence on the initial
state does not affect the results. In all cases, the average time a task spends in the
system for the simulations with n = 100 is higher than the expected time in the
corresponding limiting system. When λ = 0.5, the deviation between the two results
is smaller than 1% for all strategies. When λ = 0.9, for the strategy of choosing
from two or three servers, the simulations are within 1-2% of the results obtained
from the limiting system. In the case of choosing the shortest queue, the simulations
are within 8-17% of the limiting system, again with the average time from simulations
being larger. We expect that this larger discrepancy is due to the inaccuracy of our
model for the shortest queue system, as described in Section 3.1; however, this is
suitably accurate to gauge system behavior. These results demonstrate the accuracy
of the limiting system approach.
Several surprising behaviors manifest in the figures. First, although choosing the
shortest queue is best when information is current, for all but very small values of T
the strategy performs worse than randomly selecting a queue, especially under
high loads (that is, large λ). Although choosing the shortest queue is known to be
suboptimal in certain systems with current information [23], its failure in the presence
of old information is dramatic. Also, choosing from just two servers is the best of our
proposed strategies over a wide range of T, although for sufficiently large T making
a single random choice performs better.

Figure 2: Strategy comparison at queues, with λ = 0.9 (average time in system vs.
update interval T for the same strategies).

Figure 3: Strategy comparison at eight queues (average time in system vs. update
interval T for the same strategies).
We suggest some helpful intuition for these behaviors. If the update interval T is
sufficiently small, so that only a few new tasks arrive every T seconds, then choosing a
shortest queue performs very well, as tasks tend to wait at servers with short queues.
As T grows larger, however, a problem arises: all the tasks that arrive over those
T seconds will go only to the small set of servers that appear lightly loaded on the
board, overloading them while other servers empty. The system demonstrates what
we call herd behavior: herds of tasks all move together to the same locations. As a
real-life example of this phenomenon, consider what happens at a supermarket when
it is announced that "Aisle 7 is now open." Very often Aisle 7 quickly becomes the
longest queue. This herd behavior has been noticed in real systems that use old
information in load balancing; for example, in a discussion of the TranSend system,
its developers note that initially they found "rapid oscillations in queue lengths"
because their system updated load information periodically [9, Section 4.5].
Interestingly, as the update interval T → ∞, the utility of the bulletin board
becomes negligible (and, in fact, it can actually be misleading!), and the best strategy
approaches choosing a server at random. Although this intuition is helpful, it remains
surprising that making just two choices performs substantially better than even three
choices over a large interval of values of T that seem likely to arise in practice.
The same behavior is also apparent even with a much smaller number of servers.
In Figure 3 we examine simulations of the same strategies with only eight servers,
which is a realistic number for a current multi-processor machine. In this case the
approximations given by the limiting system are less accurate, although for T > 1
they are still within 20% of the simulations. Other simulations of small systems
demonstrate similar behavior, and as the number of servers n grows the limiting
system grows more accurate. Hence, even for small systems, the limiting system
approach provides reasonable estimates of system behavior and demonstrates the
trends as the update interval T grows.
Finally, we note again that in all of our simulations of the differential equations,
the limiting system rapidly reaches the fixed cycle suggested in Section 3.2.
3.5 On Simulating the Limiting System
Although the limiting system approach provides a useful technique for studying load
balancing models, it becomes difficult to use in the periodic update model (and other
models for old information) at high arrival rates or for large values of T , because
the number of variables to track grows large. For example, suppose we simulate the
differential equations, truncating the system at sufficiently large values of i and j that
we denote by I and J. Then we must keep track of IJ variables P_{i,j}. At high arrival
rates or for high values of T, we will need to make I and J both extremely large to
obtain accurate calculations, and hence simulating the differential equations over a
period of time becomes very slow, comparable to or worse than the time required to
simulate the underlying queueing system.
In practice, however, we expect such high arrival rates and extremely large values
of T are unlikely to be of interest. In the normal case, then, we expect I and J to
be relatively small, in which case simulating the differential equations is generally
quicker than simulating the underlying queueing model.
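To make the preceding discussion concrete, the following sketch shows how such a
truncated system might be integrated with a simple forward-Euler step. It is only an
illustration: the function deriv, the array shapes, and all names are ours, and the
actual right-hand sides would come from the model's equations (1) and (2).

    import numpy as np

    def euler_integrate(deriv, P0, dt, steps):
        """Integrate a truncated limiting system dP/dt = deriv(P, t).

        P0 is an (I+1) x (J+1) array whose entry (i, j) approximates
        P_{i,j}; deriv encodes the model's differential equations on
        the same truncated grid.
        """
        P = P0.copy()
        t = 0.0
        for _ in range(steps):
            P = P + dt * deriv(P, t)   # forward-Euler step
            t += dt
        return P

Each step costs O(IJ) work, which is why large truncation bounds I and J make this
approach slow, as noted above.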
3.6 More Complex Centralized Strategies
One would expect that a more sophisticated strategy for dealing with the old load
information might yield better performance. For instance, if the system uses the load
information on the bulletin board to develop an estimate for the current load, this
estimate may be more accurate than the information on the board itself. In this
subsection, we briefly consider more complex strategies that attempt to estimate the
current queue length and gauge their performance. These strategies require significant
centralization, in that all incoming tasks must have access to the complete bulletin
board. However, we believe these strategies are practical for systems of reasonable
size (hundreds of processors), and hence are worth examining.
Our model is still that the bulletin board is updated every T seconds. Our first
proposed strategy requires that the arrival rate to the system and the entire composition
of the bulletin board be known to the incoming tasks; also, tasks need to know
the time since the last update. The idea of the strategy is to use our knowledge of the
arrival rate to calculate the expected number of tasks at the servers, and then choose
a server with the smallest expected load uniformly at random. We describe a strategy
that approximates this one closely, and has the advantage that the underlying
calculations are quite simple.
In this proposed strategy, which we call the time-based strategy, we split each
phase of T seconds into smaller subintervals [t_k, t_{k+1}); in subinterval
[t_k, t_{k+1}), tasks choose a server randomly from all servers with posted load at
most k. The division of the phase is inductively determined by the loads at the
beginning of the phase, which is information available on the bulletin board. At
time 0, tasks choose from all servers with load 0 posted on the board (if any exist);
hence t_0 = 0. Customers also begin choosing from servers with load 1 once the
expected number of arrivals per server of load 0 has reached 1. Similarly, customers
begin choosing from servers with load at most k once the expected arrivals would have
raised every server with posted load less than k up to load k.
Intuitively, this strategy attempts to equalize the load at the servers in the natural
way.
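As a concrete reading of this rule, the following sketch computes the subinterval
boundaries. The closed form t_{k+1} = (1/λ) Σ_{j≤k} f_j (k + 1 − j), with f_j the
fraction of servers posting load j and λ the per-server arrival rate, is our
reconstruction of the elided formulas (for example, it gives t_1 = f_0/λ), not a
formula taken verbatim from the text.

    def subinterval_boundaries(f, lam, T):
        """Phase boundaries t_0 <= t_1 <= ... for the time-based strategy.

        f[j] is the fraction of servers whose posted load is j at the
        last update; lam is the per-server arrival rate.  During
        [t_k, t_{k+1}) tasks choose uniformly among servers with posted
        load at most k.
        """
        boundaries = [0.0]                  # t_0 = 0
        for k in range(len(f)):
            # Expected arrivals (per server) needed to raise every server
            # with posted load <= k up to load k + 1.
            needed = sum(f[j] * (k + 1 - j) for j in range(k + 1))
            t_next = needed / lam
            boundaries.append(min(t_next, T))
            if t_next >= T:                 # the phase ends first
                break
        return boundaries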
A limiting system, given by a series of differential equations, can be used to
model this system. The equations are entirely similar to (1) and (2), except that the
expression for q_i(t) changes depending on the subinterval [t_k, t_{k+1}). (We leave the remaining
work of the derivation to the reader.)
In our second proposed strategy, which we call the record-insert strategy, we allow
the tasks to update the global bulletin board when they choose a server. That is,
every time a new task enters the system, the load posted for its chosen server on the
bulletin board is incremented, but deletions are not recorded until the board is
updated (every T seconds). Tasks choose a queue uniformly at random from those with
the smallest load on the board.² This strategy may be feasible when the tasks use a
centralized system for placement, but there is a large delay for servers to update
load information.
This strategy is essentially the one used to solve the problem of herding behavior in
the TranSend system mentioned previously [8, 9].
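A minimal sketch of the record-insert bookkeeping may help; the class and method
names are ours, and a full simulation would combine this with arrivals and departures
at the servers.

    import random

    class RecordInsertBoard:
        """Bulletin board for the record-insert strategy (a sketch).

        Posted loads are incremented immediately when a task is placed;
        departures are reflected only when refresh() copies the true
        loads, which happens every T seconds.
        """
        def __init__(self, true_loads):
            self.posted = list(true_loads)

        def refresh(self, true_loads):
            self.posted = list(true_loads)      # periodic board update

        def place(self):
            smallest = min(self.posted)
            server = random.choice(
                [i for i, v in enumerate(self.posted) if v == smallest])
            self.posted[server] += 1            # record the insertion
            return server

Breaking ties at random rather than in a fixed order matters here; as the footnote
below notes, fixed-order tie-breaking turns the strategy into a round-robin scheme
for long update intervals.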
Again, a limiting system given by a family of differential equations can be used to
model this system. We still use P_{i,j} to represent the fraction of queues with load
i on the bulletin board and j at the queue; however, the load on the board is now
incremented on arrival. The resulting equations are again similar to (1) and (2),
with this difference (between board updates, taking the service rate to be 1):

dP_{i,0}(t)/dt = P_{i,1}(t) − q_i(t)P_{i,0}(t),   i ≥ 0,

dP_{0,j}(t)/dt = P_{0,j+1}(t) − P_{0,j}(t) − q_0(t)P_{0,j}(t),   j ≥ 1,

dP_{i,j}(t)/dt = q_{i−1}(t)P_{i−1,j−1}(t) + P_{i,j+1}(t) − (q_i(t) + 1)P_{i,j}(t),   i, j ≥ 1.
Also, the expression for q_i(t) becomes more complicated. Now q_i(t) is zero unless
i is the smallest load apparent in the system. Because the smallest load changes
over time, the system will have discontinuous behavior. Simulations demonstrate
that these strategies can perform substantially better than choosing two when n is
reasonably large and T grows large. For example, see Figure 4, which is demonstrative
of several basic principles. As one might expect, record-insert does better than time-
based, demonstrating the power of the tasks being able to update an actual centralized
bulletin board directly. However, choosing the shortest of two random servers still
performs reasonably well in comparison, demonstrating that in distributed settings,
where global information may be difficult to maintain or the arrival rate is not known
in advance, it remains a strong choice.

² We note that performance improves slightly if the tasks break ties in some fixed
order, such as by machine index; in this case, for sufficiently long updates T, the
strategy becomes a round-robin scheme. However, this model cannot be easily described
by a limiting system.

Figure 4: Centralized strategies can perform better (average time in system vs.
update interval T for the 2-choice, time-based, and record-insert strategies).
4 Continuous Update
The periodic update system is just one possible model for old information; we now
consider another natural model for distributed environments. In a continuous up-date
system, the bulletin board is updated continuously, but the board remains T
seconds behind the true state at all times. Hence every incoming task may use load
information from T seconds ago in making their destination decision. This model
corresponds to a situation where there is a transfer delay between the time incoming
jobs determine which processor to join and the time they join.
We will begin by modeling a similar scenario. Suppose that each task, upon entry,
sees a bulletin board with information from some time X ago, where X is an exponentially
distributed random variable with mean T , and these random variables are independent
for each task. We examine this model, and later consider what changes are
necessary to replace the random variable X by a constant T .
Modeling this system appears difficult, because it seems that we have to keep
track of the past. Instead, we shall think of the system as working as follows: tasks
first enter a waiting room, where they obtain current load information about queue
lengths, and immediately decide upon their destination according to the appropriate
strategy. They then wait for a time X that is exponentially distributed with mean
T and independent among tasks. Note that tasks have no information about other
tasks in the waiting room, including how many there are and their destinations. After
their wait period is finished, they proceed to their chosen destination; their time in
the waiting room is not counted as time in the system. We claim that this system
is equivalent to a system where tasks arrive at the servers and choose a server based
on information from a time X ago as described. The key to this observation is to
note that if the arrival process to the waiting room is Poisson, then the exit process
from the waiting room is also Poisson, as is easily shown by standard arguments.
Interestingly, another interpretation of the waiting room is as a communication delay,
corresponding to the time it takes a task from a client to move to a server. This model
is thus related to similar models in [12].
The state of the system will again be represented by a collection of numbers for
a set of ordered pairs. In this case, P_{i,j} will be the fraction of servers with j
current tasks and i tasks sitting in the waiting room; similarly, we shall say that a
server is in state (i, j) if it has j tasks enqueued and i tasks in the waiting room.
In this model we let q_j(t) be the arrival rate of tasks into the waiting room that
choose servers with current load j as their destination. The expression for q_j will
depend on the strategy
for choosing a queue, and can easily be determined, as in Section 3.1.
To formulate the differential equations, consider first a server in state (i, j),
where i, j ≥ 1. The server can leave this state in one of three ways: a task can
complete service, which occurs at rate 1; a new task can enter the waiting room, which
occurs at rate q_j(t); or a waiting task can move from the waiting room to the server,
which (because of our assumption of exponentially distributed waiting times) occurs
at rate i/T. Similarly, one can determine the three ways in which a server can enter
state (i, j). The following equations, which include the boundary cases, result:

dP_{0,0}(t)/dt = P_{0,1}(t) − q_0(t)P_{0,0}(t),

dP_{0,j}(t)/dt = P_{0,j+1}(t) + (1/T)P_{1,j−1}(t) − (q_j(t) + 1)P_{0,j}(t),   j ≥ 1,

dP_{i,0}(t)/dt = P_{i,1}(t) + q_0(t)P_{i−1,0}(t) − (q_0(t) + i/T)P_{i,0}(t),   i ≥ 1,

dP_{i,j}(t)/dt = q_j(t)P_{i−1,j}(t) + P_{i,j+1}(t) + ((i+1)/T)P_{i+1,j−1}(t) − (q_j(t) + 1 + i/T)P_{i,j}(t),   i, j ≥ 1.
4.1 The Fixed Point
Just as in the periodic update model the system converges to a fixed cycle, simulations
demonstrate that the continuous update model quickly converges to a fixed point,
where dP_{i,j}(t)/dt = 0 for all i and j.
We therefore expect that in a suitably large finite system,
in equilibrium the distribution of server states is concentrated near the distribution
given by the fixed point. Hence, by solving for the fixed point, one can then estimate
system metrics such as the expected time in the queue (using, for example, Little's
Law). The fixed point can be approximated numerically by simulating the differential
equations, or it can be solved for using the family of equations dP_{i,j}(t)/dt = 0. In fact,
this approach leads to predictions of system behavior that match simulations quite
accurately, as we will detail in Section 4.3.
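Concretely, if P̄_{i,j} denotes the fixed point, then the expected number of tasks per
server is N̄ = Σ_{i,j} j · P̄_{i,j}, and Little's Law gives

E[time in system] = N̄ / λ,

where λ is the per-server arrival rate. This is our reading of the computation, stated
under the convention above that waiting-room time does not count as time in the system.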
Using techniques discussed in [16, 17], one can prove that, for all the strategies we
consider here, the fixed point is stable, which informally means that the trajectory
remains close to its fixed point (once it gets close). We omit the straightforward proof
here. Our simulations suggest that in fact the limiting system converges exponentially
to its fixed point; that is, that the distance between the fixed point and the trajectory
decreases geometrically quickly over time. (See [16, 17].) Although we can prove this
for some special cases, proving exponential convergence for these systems in general
remains an open question.
4.2 Continuous Update, Constant Time
In theory, it is possible to extend the continuous update model to approximate the
behavior of a system where the bulletin board shows load information from T seconds
ago; that is, where X is a constant random variable of value T . The customer's
time in the waiting room must be made (approximately) constant; this can be done
effectively using Erlang's method of stages. The essential idea is that we replace our
single waiting room with a series of r consecutive waiting rooms, such that the time
a task spends in each waiting room is exponentially distributed with mean T=r. The
expected time waiting is then T, and the variance decreases with r; in the limit as
r → ∞, it is as though the waiting time is constant. Taking a reasonably sized r can
lead to a good approximation for constant time. Other distributions can be handled
similarly. (See, e.g., [17].)
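For instance, a waiting time that is approximately the constant T can be sampled as a
sum of exponential stages; this sketch (names ours) illustrates the idea:

    import random

    def erlang_wait(T, r):
        """Approximate a constant delay T by r exponential stages.

        Each stage is exponential with mean T / r, so the total waiting
        time has mean T and variance T**2 / r, which shrinks as r grows.
        """
        return sum(random.expovariate(r / T) for _ in range(r))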
In practice, this model is difficult to use, as the state of a server must now be
represented by an (r + 1)-dimensional vector that keeps track of the queue length and
the number of customers at each of the r waiting rooms.

Figure 5: Each task sees the loads from T seconds ago (average time in system vs.
update interval T for the 2-choice, 3-choice, and shortest-queue strategies).

Hence the number of states to
keep track of grows exponentially in r. It may still be possible to use this approach in
some cases, by truncating the state space appropriately; however, for the remainder,
we will consider this model only in simulations.
4.3 Simulations
As in Section 3.4, we present results from simulating the actual queueing systems.
We have chosen the case of λ = 0.9 as a representative case for
illustrative purposes. As one might expect, the limiting system proves more accurate
as n increases, and the differences among the strategies grow more pronounced with
the arrival rate.
We first examine the behavior of the system when X, the waiting room time, is a
fixed constant T . In this case the system demonstrates behavior remarkably similar to
the periodic update model, as shown in Figure 5. For example, choosing the shortest
server performs poorly even for small values of T , while two choices performs well
over a broad range for T .
When we consider the case when X is an exponentially distributed random variable
with mean T , however, the system behaves radically differently (Figure 6). All
three of the strategies we consider do extremely well, much better than when X is
the fixed constant T.

Figure 6: Each task sees the loads from X seconds ago, where the X are independent
exponential random variables with mean T.

We found that the deviations between the results from the simulations
and the limiting system are very small; they are within 1-2% when two or
three choices are used, and 5-20% when tasks choose the shortest queue, just as in
the case of periodic updates (Section 3.4).
We suggest an interpretation of this surprising behavior, beginning by considering
when customers choose the shortest queue. In the periodic update model, we saw
that this strategy led to "herd behavior", with all tasks going to the same small set
of servers. The same behavior is evident in this model, when X is a fixed constant;
it takes some time before entering customers become aware that the system loads
have changed. In the case where X is randomly distributed, however, customers
that enter at almost the same time may have different views of the system, and thus
make different choices. Hence the "herd behavior" is mitigated, improving the load
balancing. Similarly, performance improves with the other strategies as well.
We justify this interpretation by considering other distributions for X; the cases
where X is uniformly distributed on [T/2, 3T/2] and on [0, 2T] are given in Figures 7
and 8. Both perform noticeably better than the case where X is fixed at
T . That the larger interval performs dramatically better suggests that it is useful to
have some tasks that get very accurate load information (i.e., where X is close to 0);
this also explains the behavior when X is exponentially distributed.
Figure 7: Each task sees the loads from X seconds ago, where the X are independent
uniform random variables from [T/2, 3T/2].

Figure 8: Each task sees the loads from X seconds ago, where the X are independent
uniform random variables from [0, 2T].
This setting demonstrates how randomness can be used for symmetry breaking. In
the periodic update case, by having each task choose from just two servers, one introduces
asymmetry. In the continuous update case, one can also introduce asymmetry
by randomizing the age of the load information.
This setting also demonstrates the danger of assuming that a model's behavior
does not vary strongly if one changes underlying distributions. For example, in many
cases in queueing theory, results are proven for models where service times are exponentially
distributed (as these results are often easier to obtain), and it is assumed
that the behavior when service times are constant (with the same mean) is similar.
In some cases there are even provable relationships between the two models (see, for
example, [14, 20]). In this case, however, changing the distribution of the random
variable X causes a dramatic change in behavior.
5 Individual Updates
In the models we have considered thus far, the bulletin board contains load information
from the same time t for all the servers. It is natural to ask what happens
when servers update their load information at different times, as may be the case in
systems where servers individually broadcast load information to clients. In an individual
update system, the servers update the load information at the bulletin board
individually. For convenience we shall assume the time between each update for every
server is independent and exponentially distributed with mean T . Note that, in this
model, the bulletin board contains only the load information and does not keep track
of when the updates have occurred.
The state of the system will again be represented by a collection of ordered pairs.
In this case, P_{i,j} will be the fraction of servers with true load j but load i posted
on the bulletin board. We let q_i(t) be the arrival rate of tasks to servers with load
i posted on the bulletin board; the expression for q_i will depend on the strategy for
choosing a queue. We let S_i(t) be the total fraction of servers with true load i at
time t, regardless of the load displayed on the bulletin board; note that
S_i(t) = Σ_j P_{j,i}(t).
The true load of a server and its displayed load on the bulletin board match
when an update occurs. Hence, when considering how P_{i,i} changes, there will be a
term corresponding to when one of the fraction S_i of servers with true load i
generates an update.
The following equations are readily derived in a fashion similar to that of the
previous sections (with service rate 1 and per-server update rate 1/T):

dP_{i,0}(t)/dt = P_{i,1}(t) − (q_i(t) + 1/T)P_{i,0}(t),   i ≥ 1,

dP_{i,j}(t)/dt = q_i(t)P_{i,j−1}(t) + P_{i,j+1}(t) − (q_i(t) + 1 + 1/T)P_{i,j}(t),   j ≥ 1, i ≠ j,

dP_{0,0}(t)/dt = P_{0,1}(t) − (q_0(t) + 1/T)P_{0,0}(t) + (1/T)S_0(t),

dP_{i,i}(t)/dt = q_i(t)P_{i,i−1}(t) + P_{i,i+1}(t) − (q_i(t) + 1 + 1/T)P_{i,i}(t) + (1/T)S_i(t),   i ≥ 1.

Figure 9: Each server updates the board every X seconds, where X is exponentially
distributed with mean T (average time in system vs. T for the 2-choice, 3-choice, and
shortest-queue strategies).
As with the continuous update model, in simulations this model converges to
a fixed point, and one can prove that this fixed point is stable. Qualitatively, the
behavior appears similar to the periodic update model, as can be seen in Figure 9.
6 Competitive Scenarios
We have assumed thus far in our models that all tasks adopt the same underlying
strategy, and the goal has been to reduce the expected time for all tasks. In a
more competitive environment, tasks may instead independently act in their own
best interests, and it is necessary to consider the effects of anti-social, competitive
clients who may not follow the proposed universal strategy.
We consider briefly a specific example. Suppose we have a system where each
task is supposed to choose from the shortest of two randomly chosen servers. In this
case, an anti-social task may attempt to improve its own situation by obtaining the
entire bulletin board and proceeding to a server with the smallest posted load. Do
such tasks do better than other tasks? If so, in a competitive environment tasks have
little motivation to follow the suggested strategy.
We study the problem by examining the situation where each customer adopts
the anti-social strategy with probability p. With such a model it is possible to set
up a corresponding limiting system, since each task's strategy can be expressed as a
probabilistic mixture of two strategies; for example, in this case, the arrival rates
q_i(t) are a weighted combination of those induced by the shortest-posted-load
strategy (with weight p) and by the two-choice strategy (with weight 1 − p).
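In a discrete-event simulation, the mixture is equally simple to realize; in this
sketch (names ours), board holds the posted loads and p is the probability that a
task is anti-social:

    import random

    def place_task(board, p):
        """With probability p, join a server with the smallest posted
        load; otherwise follow the standard two-choice rule."""
        if random.random() < p:
            smallest = min(board)
            return random.choice(
                [i for i, v in enumerate(board) if v == smallest])
        a, b = random.sample(range(len(board)), 2)
        return a if board[a] <= board[b] else b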
Some simulations where all customers see load information from exactly T seconds
ago are revealing. Table 1 provides numerical results based on these simulations.
When T is small or the fraction p of competitive customers is
sufficiently small, competitive tasks reduce their average time by acting against the
standard strategy. In cases where choosing two servers performs poorly, introducing
competitive customers can actually reduce the average time for everyone, although
more often anti-social customers do better at the expense of other tasks. For larger
values of T or p, system performance degrades for all customers, and the average time
anti-social customers spend in the system can grow much larger than that of other
customers. In this sense, tasks are motivated not to choose the shortest, for if too
many do so, their average time in the system will be larger than those that do not.
The situation becomes even more interesting, however, if the measure of performance
is not the average time in the system, but a more complicated measure. For
example, it may be important for some tasks to finish by a certain deadline, and in
this case the goal is to maximize the probability that a task finishes by its deadline. Our
simulations have also shown that in the model described above, even when p and T
are such that choosing the server with the shortest posted queue increases the average
time for a task, the variance in the time in the system of customers who adopt this
strategy can be lower than other customers (Table 1). Intuitively, this is probably
because some customers that make only two choices will be quite unlucky, and choose
two very long queues. Hence, tasks with deadlines may be motivated to try another
strategy, even though it appears worse in terms of the average time in the system.
We believe there are many open questions to consider in this area, and we discuss
them further in the conclusion.
Table 1: Comparing simulation results for anti-social tasks (who choose the shortest)
against those that choose two. The columns report the average time and the variance of
the time in the system, for all tasks, for the tasks making two choices, and for the
tasks choosing the shortest posted queue.
7 Open Questions and Conclusions
We have considered the question of how useful old information is in the context of
load balancing. In examining various models, we have found a surprising rule of
thumb: choosing the least loaded of two random choices according to the old load
information performs well over a large range of system parameters and is generally
better than similar strategies, in terms of the expected time a task spends in the
system. We have also seen the importance of using some randomness in order to
prevent customers from adopting the same behavior, as demonstrated by the poor
performance of the strategy of choosing the least loaded server in this setting.
We believe that there is a great deal more to be done in this area. Generally, we
would like to see these models extended and applied to more realistic situations. For
example, it would be interesting to consider this question with regard to other load
balancing scenarios, such as in virtual circuit routing, or with regard to metrics other
than the expected time in the system, such as in a system where tasks have deadlines.
A different theoretical framework for these problems, other than the limiting system
approach, might be of use as well. In particular, it would be convenient to have a
method that yields tighter bounds in the case where n, the number of servers, is
small. Finally, the problem of handling more realistic arrival and service patterns
appears quite difficult. In particular, it is well known that when service distributions
are heavy-tailed, the behavior of a load balancing system can be quite different from
when service distributions are exponential; however, we expect that our rule of thumb
performs well in this scenario as well.
An entirely different flavor of problems arises from considering the problem of
old information in the context of game theory. We have generally assumed in our
models that all tasks adopt the same underlying strategy, and the goal has been to
reduce the expected time for all tasks. In a more competitive environment, tasks
may instead independently act in their own best interests, and hence in Section 6 we
considered the effects of anti-social clients who may not follow the proposed strategy.
More generally, we may think of these systems as multi-player games, which leads to
several interesting questions: if each task is an individual player, what is the optimal
strategy for a self-interested player (i.e., a player whose only goal is to minimize their
own expected time in the system, say)? How easily can this strategy be computed
on-line? Is this strategy different than the optimal strategy to minimize the average
expected time, and if so, how? Are there simple stable strategies, in which no player
is motivated to deviate from the strategy for their own gain?
Acknowledgments
The author thanks the many people at Digital Systems Research Center who offered
input on this work while it was in progress. Special thanks go to Andrei Broder, Ed
Lee, and Chandu Thekkath for their many helpful suggestions.
References
"Analysis of Simple Algorithms for Dynamic Load Balancing"
"Making Commitments in the Face of Uncertainty: How to Pick a Winner Almost Every Time"
"Balanced Allocations"
"Adaptive load sharing in homogeneous distributed systems"
"A comparison of receiver-initiated and sender-initiated adaptive load sharing"
Characterization and Convergence
Private communication.
"Cluster-Based Scalable Network Services"
"Efficient PRAM Simulation on a Distributed Memory Machine"
Approximation of Population Processes
"Analysis of the Effects of Delays on Load Sharing"
"Adaptive Load Sharing in Heterogeneous Distributed Systems"
"Constant Time per Edge is Optimal on Rooted Tree Networks"
"Load Balancing and Density Dependent Jump Markov Processes"
"The Power of Two Choices in Randomized Load Balancing"
"On the Analysis of Randomized Load Balancing Schemes"
"Constant Time per Edge is Optimal on Rooted Tree Networks"
Large Deviations for Performance Analysis
"The Efficiency of Greedy Routing in Hypercubes and Butterflies"
"The Effect of Communication Delays on the Performance of Load Balancing Policies in Distributed Systems"
"On the Optimal Assignment of Customers to Parallel Servers"
"Deciding Which Queue to Join: Some Counterexamples"
"Optimality of the Shortest Line Discipline"
"Differential Equations for Random Processes and Random Graphs"
"Queueing System with Selection of the Shortest of Two Queues: an Asymptotic Approach"
331217 | Efficient Algorithms for Block-Cyclic Array Redistribution Between Processor Sets. | AbstractRun-time array redistribution is necessary to enhance the performance of parallel programs on distributed memory supercomputers. In this paper, we present an efficient algorithm for array redistribution from cyclic(x) on $P$ processors to cyclic(Kx) on $Q$ processors. The algorithm reduces the overall time for communication by considering the data transfer, communication schedule, and index computation costs. The proposed algorithm is based on a generalized circulant matrix formalism. Our algorithm generates a schedule that minimizes the number of communication steps and eliminates node contention in each communication step. The network bandwidth is fully utilized by ensuring that equal-sized messages are transferred in each communication step. Furthermore, the time to compute the schedule and the index sets is significantly smaller. It takes $O(max(P,Q))$ time and is less than 1 percent of the data transfer time. In comparison, the schedule computation time using the state-of-the-art scheme (which is based on the bipartite matching scheme) is 10 to 50 percent of the data transfer time for similar problem sizes. Therefore, our proposed algorithm is suitable for run-time array redistribution. To evaluate the performance of our scheme, we have implemented the algorithm using C and MPI on an IBM SP2. Results show that our algorithm performs better than the previous algorithms with respect to the total redistribution time, which includes the time for data transfer, schedule, and index computation. | Introduction
Many High Performance Computing (HPC) applications, including scientific computing and signal
processing, consist of several stages [2, 10, 22]. Examples of such applications include the multi-dimensional
Fast Fourier Transform, the Alternative Direction Implicit (ADI) method for solving
two-dimensional diffusion equation, and linear algebra solvers. While executing these applications
on a distributed memory supercomputer, data distribution is needed for each stage to reduce the
performance degradation due to remote memory accesses. As the program execution proceeds from
one stage to another, the data access patterns and the number of processors required for exploiting
the parallelism in the application may change. These changes usually cause the data distribution
in a stage to be unsuitable for the subsequent stage. Data redistribution relocates the data in the
distributed memory to reduce the remote access overheads. Since the parameters of redistribution
are generally unknown at compile time, run-time data redistribution is necessary. However, the
cost of redistribution can offset the performance benefits that can be achieved by the redistribution.
Therefore, run-time redistribution must be implemented efficiently to ensure overall performance
improvement.
Array data are typically distributed in a block-cyclic pattern onto a given set of processors. The
block-cyclic distribution with block size x is denoted as cyclic(x). A block contains x consecutive
array elements. Blocks are assigned to processors in a round-robin fashion. Other distribution
patterns, cyclic and block distribution, are special cases of the block-cyclic distribution. In gen-
eral, the block-cyclic array redistribution problem is to reorganize an array from one block-cyclic
distribution to another, i.e., from cyclic(x) to cyclic(y). An important case of this problem is
redistribution from cyclic(x) to cyclic(Kx) which arises in many scientific and signal processing
applications. This type of data redistribution can occur within a same processor set, or between
different processor sets.
Data redistribution from a given initial layout to a final layout consists of four major steps
- index computation, communication schedule, message packing and unpacking, and finally data
transfer. All these four steps contribute to the total array redistribution cost. We briefly explain
these four costs associated with data redistribution:
of the array elements that belong to it and computes the local memory location (local index) of
each array element. This local index is used to pack array elements into a message. Similarly, each
destination processor determines the source processor indices of received messages and computes
their local indices to find out the location where the received message is to be stored. The total
time to compute these indices is denoted as index computation cost.
Schedule Computation Cost: The communication schedule specifies a collection of sender-receiver
pairs for each communication step. Since in each communication step, a processor can
send at most one message and a processor can receive at most one message, careful scheduling
is required to avoid contention while minimizing the number of communication steps. Time to
compute this communication schedule can be significant. Reducing this cost is an important criteria
in performing run-time redistribution.
Message Packing/Unpacking Cost: At each sender, a message consists of words from different
memory locations which need to be gathered in a buffer in the sending node. Typically, this requires
a memory read and a memory write operation to gather the data to form a compact message in
the buffer. The time to perform this data gathering at the sender is the message packing cost.
Similarly, at the receiving side, each message is to be unpacked and data words need to be stored
in appropriate memory locations.
Data Transfer Cost: The data transfer cost for each communication step consists of start-up cost
and transmission cost. The start-up cost is incurred by software overheads in each communication
operation. The total start-up cost can be reduced by minimizing the number of message transfer
steps. The transmission cost is incurred in transferring the bits over the network and depends on
the network bandwidth.
Table
1 summarizes the key features of the well known data distribution algorithms in the
literature [4, 13, 14, 15, 21]. All of the known algorithms ignore one or more of the above costs.
Some schemes focus only on efficient index set computation and completely ignore scheduling the
communication events. Based on the index of a block, these schemes focus on finding its destination
processor and generating messages for the same destination. Communication scheduling is not
considered. These lead to node contention in performing the communication. This inturn leads to
higher data transfer costs as some nodes incur additional delays. Other schemes eliminate node
contention by explicitly scheduling the communication events [3, 17, 23]. Although the schemes
in [3, 17, 23] have an efficient scheduling algorithm, these were designed for data redistribution
on the same processor set. For redistribution between different processor sets, the Caterpillar
algorithm was proposed in [11]. It uses a simple round robin schedule to avoid node contention.
Key Features
Schedule & Index Computation Communication
PITFALLS
[14]
. No communication scheduling
. Index computation using a line segment
. Node contention occurs
. Does not minimize the transmission cost
. No communication scheduling
. Efficient index computation
. Source and destination sets are same
. Node contention occurs
. Does not minimize the transmission cost
Caterpillar
. Simple scheduling algorithm
. Index computation by scanning the array
segments
. No node contention
. Does not minimize the transmission cost
and the number of communication steps
Bipartite
Matching
Scheme
. Large schedule computation overhead
Schedule computation time:
. No node contention
. Stepwise strategy: minimizes the number
of communication steps
. Greedy strategy: minimizes the transmission
cost
Our Scheme . Fast schedule and index computations
Schedule computation time:
. No node contention
. Minimizes the number of communication
steps and the data transfer cost
Table
1: Comparison of various schemes for array redistribution.
However, this algorithm does not fully utilize the network bandwidth i.e., the size of the data sent
by the nodes in a communication step varies from node to node. This leads to increased data
transfer cost. The schemes in [5] reduce the data transfer cost, however, the schedule computation
cost is significant. The bipartite graph matching used in [5] takes O((P On a state-
of-the-art workstation, this time is in the range of 100's of msecs for P and Q of interest. For
problems of interest, the schedule computation cost is larger than the data transfer cost. The
algorithm in [5] optimizes the data transfer cost and the number of communication steps for the
non all-to-all communication case (which is one of the three cases that occur in performing the
redistribution considered here). The algorithm in [5] does not optimize the data transfer cost for
the all-to-all communication case with different message sizes. To optimize the data transfer cost,
it is necessary that the transferred messages are of equal size in each communication step.
In this paper, we propose a novel and efficient algorithm for data redistribution from cyclic(x)
on P processors to cyclic(Kx) on Q processors. Our algorithm uses optimal number of communication
steps and fully utilizes the network bandwidth in each step. The communication schedule
is determined using a generalized circulant matrix framework. The schedule computation cost is
Q)). Our implementations show that the schedule computation time is in the range of
100's of -secs when P and Q are in the range 50-100. Each processor computes its own index set
and its communication schedule only using a set of equations derived from our generalized circulant
matrix formulation. Our experimental results show that the schedule computation time is negligible
compared with the data transfer cost for array sizes of interest. The message packing/unpacking
cost is the same as that of any scheme that generates an optimal communication schedule. Thus,
our scheme minimizes the total time for data redistribution. This makes our scheme attractive for
run-time as well as compile-time data redistribution.
Our techniques can be used for implementing scalable redistribution libraries, for implementing
directive in HPF [1], and for developing parallel algorithms for supercomputer
applications. In particular, these techniques lead to efficient distributed corner turn operation, a
communication kernel needed in parallelizing signal processing applications [26, 27].
Our redistribution scheme has been implemented using MPI and C. It can be easily ported
to various HPC platforms. We have performed several experiments to illustrate the improved
performance compared with the state-of-the-art. The experiments were performed to determine
the data transfer, schedule and index computation costs. In one of these experiments, we used
64 processors on an IBM SP2 which were partitioned into 28 source processors and 36 destination
processors. The expansion factor was set to 14. The array size was varied from 2.26 Mbytes to 56.4
Mbytes. Compared with the Caterpillar algorithm, our data transfer times were lower. The ratio
of data transfer time of our algorithm to that of the Caterpillar algorithm was between 49.2% and
53.7%. The schedule computation time of the proposed algorithm is much less than that of the
bipartite matching scheme [5]. For P and Q - 64, the schedule computation time of the bipartite
matching scheme is 100's of msecs, while that of our algorithm is only 100's of -secs. For example,
when the schedule computation time using the bipartite matching
scheme is 133.2 msecs while the time using our algorithm is 178.6 -secs
The rest of this paper is organized as follows. Section 2 explains our table-based framework. It
also discusses the generalized circulant matrix formalism for deriving conflict free communication
schedules. Section 3 explains our redistribution algorithm and index computation. Section 4 reports
our experimental results on the IBM SP-2. Concluding remarks are made in Section 5.
(a) Array A, N=48
(c) CYCLIC(4) on Q=6 processors
(b) CYCLIC(2) on P=3 processors
Figure
1: Block-cyclic redistribution from array point of view: (a) the array of elements, (b)
on P processors, (c) from cyclic(x) on P processors to cyclic(Kx) on Q processors. In
this example, 2.
Our Approach to Redistribution
In this section, we present our approach to block-cyclic redistribution problem. In subsection 2.1,
we discuss two views of redistribution and illustrate the concept of a superblock. In the following
subsection, we explain our table-based framework for redistribution using the destination processor
table and column and row reorganizations. In subsection 2.3, we discuss the generalized circulant
matrix formalism which allows us to compute communication schedule efficiently.
2.1 Array and processor points of view
The block-cyclic distribution, cyclic(x), of an array is defined as follows: given an array with N
elements, P processors, and a block size x, the array elements are partitioned into contiguous
blocks of x elements each. The i th block, b i , consists of array elements whose indices vary from ix
to (i
x . These blocks are distributed onto
processors in a round-robin fashion. Block b i is assigned to processor j, P j , where
In this paper, we study the problem of redistributing from cyclic(x) on P processors to cyclic(Kx)
on Q processors, which is denoted as ! x from the array point
of view. The elements of the array are shown along a single horizontal axis. The processor indices
are marked above each block. For the redistribution ! x Q), a periodicity can be found in
on P processors CYCLIC(4) on Q processors
(c) Initial Distribution Table D i (f) Final Distribution Table D f20
I
b 22
b 9
I
b 22
b 9
(a) Initial layout
(b)
(d) Final layout
Figure
2: Block-cyclic redistribution from cyclic(x) on P processors to cyclic(Kx) on Q processors
from processor point of view. In this example, 2.
the block movement pattern. For example, in Figure 1, b 0 , b 3 , b 6 , and b 9 , which are initially assigned
to P 0 , are moved to Q respectively. After that, b 12 in P 0 is moved to Q 0
again. We find that the communication pattern between b 0 and b 11 is repeated on other blocks.
Such a collection of blocks is called as a superblock. The period of this block movement pattern is
KQ), and is the size of the superblock. In Figure 1, superblock size is lcm(3; 2 \Delta In
the next superblock, blocks b 12 to b 23 are moved in the same fashion.
From the processor point of view, the block-cyclic distribution can be represented by a 2-
dimensional table. Each column corresponds to a processor and each row index is a local block
index. Each entry in the table is a global block index. Therefore, element (i; j) in the table
represents the i th local block of the j th processor. Figure 2 shows the example of ! 2 (3; 2; 6) from
I
I
global
block
index
destination
processor
index
Figure
3: An example of destination processor table T.
the processor point of view. Blocks are distributed on the table in a round-robin fashion. The
table corresponding to source processors is denoted as initial layout representing which blocks are
initially assigned to which source processors. Similarly, the final layout represents which blocks are
assigned to which destination processors. Our problem is to redistribute the blocks from initial
layout to final layout. These layouts are shown in Figure 2(a) and (d) respectively.
The initial layout can be partitioned into collections of rows of size L
Similarly, the final layout can be partitioned into disjoint collections of rows; each collection having
rows. Note that each collection corresponds to a superblock. Blocks, which
are located at the same relative position within a superblock, are moved in the same way during the
redistribution. These blocks can be transferred in a single communication step. The MPI derived
data type can handle these blocks as a single block. Without loss of generality, we will consider only
the first superblock in the following to illustrate our algorithm. We refer to the tables representing
the indices of the blocks within the first superblock in the initial (final) layout as initial distribution
table D i (final distribution table D f ). These are shown in Figure 2(c) and (f), respectively. The
cyclic redistribution problem essentially involves reorganizing blocks within each superblock from
an initial distribution table D i to a final distribution table D f .
2.2 A Table-based framework for redistribution
Given the redistribution parameters, P , K, and Q, each block's location in D i and D f can be
determined. Through redistribution, each block moves from its initial location in D i to the final
location in D f . Thus, the processor ownership and the local memory location of each block are
changed by redistribution. This redistribution can be conceptually considered as a table conversion
process from D i to D f , which can be decomposed into independent column and row reorganizations.
Column
Row Reorganization
Reorganization Column
Reorganization
I
I
9 7 11
I
I
Figure
4: Table conversion process for redistribution.
In a column reorganization, blocks are rearranged within a column of the table. This is therefore
a local operation within a processor's memory. In a row reorganization, blocks within a row are
rearranged. This operation therefore leads to a change in ownership of the blocks, and requires
interprocessor communication.
The destination processor of each block in the initial distribution table is determined by the
redistribution parameters and its global block index. A send communication events table is constructed
by replacing each block index in the initial distribution table with its destination processor
index as shown in Figure 3. This is denoted as destination processor table (dpt) T. The (i; th entry
of T is the destination processor index of i th local block in source processor j and 0 -
considers only one superblock. It is a L s \Theta P matrix. Each row corresponds to a communication
step. In our algorithm, during a communication step, a processor sends data to atmost one
destination processor. If Q - P , atmost P processors in the destination processor set can receive
data and the other destination processors remain idle during that communication step. Therefore,
each communication step can have at most P communicating pairs. On the other hand, if
only Q destination processors can receive data at a time. The maximum number of communicating
pairs in a communication step is min(P; Q). Without loss of generality, in the following discussion
we assume that Q - P .
Figure
4 shows our table-based framework for redistribution. To convert the initial distribution
table D i to the final distribution table D f , (dpt) T can be used. But, the use of T itself as
a communication schedule is not efficient. It leads to node contention, since several processors
try to send their data to the same destination processor in a communication step. For example,
in
Figure
4, during step 0, both source processors 0 and 1 try to communicate with destination
processor 0. However, if every row of T consists of P distinct destination processor indices among
node contention can be avoided in each communication step. This is the motivation
for the column reorganizations.
To eliminate node contention, the dpt T is reorganized by column reorganizations. The reorganized
table is called the send communication schedule table, S. In section 3, we discuss how these
reorganizations are performed. S is a L s \Theta P matrix as well. Each entry of S is a destination
processor index and each row corresponds to a contention-free communication step. To maintain
the correspondence between D i and T, the same set of column reorganizations is applied to D i
which results in a distribution table, D 0
i corresponding to S. In a communication step, blocks in
a row of D 0
are transferred to their destination processors specified by the corresponding entries
in S. Referring to Figure 4, in the first communication step, source processors 0, 1 and 2 transfer
blocks 0, 4 and 2 to destination processors 0, 2 and 1 respectively as specified by S. Such a step
is called row reorganization. The distribution table D 0
f corresponding to the received blocks in
destination processors is reorganized into the final distribution table D f by another set of column
reorganizations. (For this example, we do not need this operation.) The received blocks are then
stored in the memory locations of the destination processors. The key idea is to choose a S such
that the required row reorganizations(communication events) can be performed efficiently and it
supports easy-to-compute contention-free communication scheduling.
So far, we have discussed a redistribution problem from cyclic(x) on P processors to cyclic(Kx)
on Q processors. A dual relationship exists between the problem from cyclic(x) on P processors
to cyclic(Kx) on Q processors and the problem from cyclic(Kx) on P processors to cyclic(x) on
processors. The redistribution from cyclic(Kx) on P processors to cyclic(x) on Q processors
is the redistribution with reverse direction of the redistribution ! x Q). Its send (receive)
communication schedule table is the same as the receive (send) communication schedule table of
Q). Therefore, our scheme for ! x can be extended to the redistribution problem
from cyclic(Kx) on P processors to cyclic(x) on Q processors.
2.3 Communication scheduling using generalized circulant matrix
Our framework for communication schedule performs the local rearrangement of data within each
processor as well as interprocessor communication. The local rearrangement of data, which we call
column reorganization, results in a send communication schedule table S. We will show that for
any P , K and Q, the send communication schedule is indeed a generalized circulant matrix which
avoids node contention.
matrix is a circulant matrix if it satisfies the following properties:
1. If m - n, row row 0 circularly right shifted k times, 0 - k ! m.
2. shifted l times,
Note that the above definition can be extended to block circulant matrices by changing "row"
to "row block".
matrix is a generalized circulant matrix if the matrix can be partitioned
into blocks of size m \Theta n, where such that the resulting
block matrix forms a circulant matrix and each block is either a circulant matrix or a generalized
circulant matrix.
Figure
5 illustrates a generalized circulant matrix. There are two observations about the generalized
circulant matrix: (i) the s blocks along each block diagonal are identical, and (ii) if all the
elements in row 0 are distinct, then in each row all elements are distinct.
We will show that through our approach the destination processor table T is transformed to a
generalized circulant matrix S with distinct elements in each row.
3 Efficient Redistribution Algorithms
Before discussing communication schedule algorithm for redistribution, we classify communication
patterns into 3 classes for the redistribution problem ! x for an alternative formulation
for the cyclic(x) to cyclic(y) problem) according to the following Lemma. Let G denote
KQ).
Generalized Circulant Matrix
(Circulant Matrix)
Submatrix
Figure
5: Generalized circulant matrix.
Lemma 1 The communication pattern induced by ! x (P; K; Q) requires: (i) non all-to-all communication
if K < G, (ii) all-to-all communication with a fixed message size if K = αG, where α is
an integer greater than 0, and (iii) all-to-all communication with different message sizes if G < K
and K ≠ αG.
Among these three cases, the case of all-to-all processor communication with the same message
size can be optimally scheduled using a trivial round-robin schedule. However, it is nontrivial
to achieve the same message size between all pairs of nodes in a communication step for the all-to-all
case with different message sizes. Therefore, we focus on the two cases of redistribution requiring
scheduling: non all-to-all communication and all-to-all communication with different message
sizes.
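The classification of Lemma 1 is easy to compute. The sketch below uses G = gcd(P; KQ), which
is our reading of the partly garbled definition of G above, and notes the round-robin schedule for
the fixed-size case.

#include <stdio.h>

static int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

/* Classify the communication pattern of ! x (P; K; Q) per Lemma 1,
 * taking G = gcd(P, K*Q) (our reconstruction of G). */
const char *classify(int P, int K, int Q) {
    int G = gcd(P, K * Q);
    if (K < G)      return "non all-to-all";
    if (K % G == 0) return "all-to-all, fixed message size";
    return "all-to-all, different message sizes";
}

int main(void) {
    printf("%s\n", classify(6, 4, 9));  /* non all-to-all (G = 6)  */
    printf("%s\n", classify(4, 3, 6));  /* different sizes (G = 2) */
    /* For the fixed-size case, a round-robin schedule is optimal:
     * in step t, source p sends to destination (p + t) mod Q.     */
    return 0;
}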
3.1 Non all-to-all communication
Given the redistribution parameters P, Q, and K, we get the L s \Theta P initial distribution table D i and
its dpt T. Let G 1 = gcd(K; P), K 1 = K/G 1 , P 1 = P/G 1 , G 2 = gcd(P 1 ; Q), and Q 1 = Q/G 2 .
In the dpt T, every K 1 -th row has a similar pattern but with different destination processor
indices. We shuffle the rows such that rows having similar patterns are adjacent, resulting in the
shuffled dpt T 1 . The shuffled dpt T 1 is divided into Q 1 slices in the row direction and into P 1
slices in the column direction. Now, dpt T 1 can be considered as a K 1 \Theta P 1 block matrix made
of Q 1 \Theta G 1 submatrices. This block matrix is then converted into a generalized circulant matrix
by reorganizing blocks in each column of the block matrix and reorganizing individual columns
within a submatrix by appropriate amounts.
This results in a generalized circulant matrix which is our communication schedule matrix S.

Figure 6: Steps of column reorganization: (a) D i , (b) D i1 , (c) D 0 i .

In
this procedure, the K identical values in row 0 of the dpt T are distributed to K distinct rows,
and hence row 0 has distinct values. Since S is a generalized circulant matrix and all the elements
in each row are distinct, we achieve a conflict-free schedule. A rigorous proof that any
dpt T can be transformed to a generalized circulant matrix using these column reorganizations can
be found in [25]. With this schedule, in each communication step, P source processors transfer an
equal-size message to P distinct destination processors. This ensures that the network bandwidth is
fully utilized. The number of communication steps is also minimized; therefore, the data transfer
cost is minimized. In the above reorganization, an element is moved only within its column, so it does
not incur any interprocessor communication. Figure 6 shows an example where the dpt T of ! x (6; 4; 9)
is converted to a generalized circulant matrix form S by column reorganizations. In this example,
G 1 = 2, K 1 = 2, P 1 = 3, G 2 = 3, and Q 1 = 3. Figure 6(a) shows the initial distribution table,
and Figure 6(d) shows the corresponding dpt T. Rows of D i and T are shuffled, as shown
in Figure 6(b) and (e). Now we can partition the shuffled tables into submatrices of size 3 \Theta 2.
The diagonalization of submatrices and the diagonalization of elements in each submatrix are shown
in Figure 6(c) and (f). Figure 6(f) is a generalized circulant matrix S which gives a communication
schedule.
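The dpt itself follows from the semantics of the two cyclic distributions. Below is a minimal
sketch for the example above; the value L s = lcm(P; KQ)/P blocks per source processor in a
superblock is our assumption, consistent with the examples in the text.

#include <stdio.h>

/* Build the dpt T for cyclic(x) on P -> cyclic(Kx) on Q.
 * Source processor p holds global block g = l*P + p as its l-th local
 * block; after redistribution, block g belongs to processor (g/K) mod Q. */
int main(void) {
    int P = 6, K = 4, Q = 9, Ls = 6;     /* Ls = lcm(P, K*Q)/P = 6 here */
    for (int l = 0; l < Ls; l++) {
        for (int p = 0; p < P; p++)
            printf("%2d ", ((l * P + p) / K) % Q);   /* T(l, p) */
        printf("\n");
    }
    return 0;
}

Note that row 0 of this table contains K identical entries, as stated above; the column
reorganizations spread these K identical values over K distinct rows.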
While the dpt T is converted to the send communication schedule table S, the same set of
reorganizations is applied to the initial data distribution table D i . It is converted to D 0 i as
shown in Figure 6. It can be expensive to reorganize a large amount of data within a local memory.
Instead, the reorganization can be done by maintaining pointers to the elements of the array. Each
source processor has a table which points to the data blocks to be packed in a communication
step. It is denoted as the send data location table D s . Each entry of D s is the local block index of
the corresponding entry of D 0 i . Each entry of S, S(i; j), points to the destination processor of the
corresponding entry of D s , D s (i; j). Our scheme computes the schedule and data index set at the
same time.
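The pointer-based scheme can be sketched as follows; the names and the flat memory layout are
illustrative assumptions, not the paper's code. The sender copies the blocks listed in the D s row
for the current step into a contiguous message buffer, leaving the local array untouched.

#include <string.h>

/* Pack the message for one communication step without reorganizing
 * local memory: ds_row[j] is the local index of the j-th block to send,
 * taken from the send data location table D_s. */
void pack_step(const char *local_array, const int *ds_row,
               int blocks, int block_bytes, char *msg) {
    for (int j = 0; j < blocks; j++)
        memcpy(msg + (size_t)j * block_bytes,
               local_array + (size_t)ds_row[j] * block_bytes,
               block_bytes);
}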
Through algebraic manipulations, the above procedure gives two equations to directly compute
the individual entries of S and D s . In these equations, div denotes the quotient of integer
division and mod the remainder of the integer division. The entries of S are given by one
closed-form expression; similarly, the entries of D s are given by a closed-form expression in
which n and m are the solutions of a linear equation involving K 1 .
The proof of correctness of the above mathematical formulations can be found in [25]. The
above formulae for computing the communication schedule and index set for redistribution are
extremely efficient compared with the methods presented in [5], which use a bipartite matching
algorithm. Furthermore, using our formulae, each processor computes only the entries which it needs
in its send communication schedule table. Hence, the schedule and index set computation can
be performed in a distributed way, and the total cost of computing the schedule and index set is
O(max(P; Q)). The amortized cost to compute a step in the communication schedule and index
set computation is O(1). Our scheme minimizes the number of communication steps and avoids
node contention. In each communication step, equal-sized messages are transferred. Therefore, our
scheme minimizes the total data transfer cost.
3.2 All-to-all communication with different message sizes
The all-to-all communication case arises if G(= G 1 G 2 ) < K, as stated in Lemma 1, where
G = gcd(P; KQ). From the first superblock, the dpt T is constructed. The dpt is an
L s \Theta P matrix, where L s = KQ/G; since K > G, L s > Q. Therefore, each column
has more entries than Q destination processors. In each column, several blocks are transferred to
the same destination.

Figure 7: Example illustrating an all-to-all case with different message sizes: ! x (4; 3; 6)
(destination processor table, transformed dpt, and send communication schedule table S).

The column reorganizations as stated in Section 3.1 are applied to the dpt
T, which results in a generalized circulant matrix: a K 1 \Theta P 1 circulant block matrix. Each
block is a Q 1 \Theta G 1 submatrix which is also a circulant matrix. In the block matrix, the first G 2
blocks in each column are distinct. Blocks in every G 2 -th row have the same entries but different
circular-shifted patterns. These blocks can be folded onto the blocks in their first row. Therefore,
only the first G 2 rows in the block matrix are used in determining a send communication schedule
table S. It is a Q \Theta P generalized circulant matrix. Since blocks in every G 2 -th row are folded onto
blocks in their first row, for the all-to-all communication case with different message sizes, blocks in
the first (K 1 mod G 2 ) rows of S have size ⌈K 1 /G 2 ⌉, while blocks in the remaining rows have size
⌊K 1 /G 2 ⌋.
Figure 7 shows an example of the send communication schedule table of ! x (4; 3; 6), generated
for the all-to-all case with different message sizes. In this example, each source processor has
L s = 9 entries, more than the 6 destination processors. The corresponding dpt is an L s \Theta P matrix.
Applying column reorganizations results in a generalized circulant matrix, which can be considered
as a K 1 \Theta P 1 block matrix, where K 1 = 3 and P 1 = 4. Each block is a Q 1 \Theta G 1 = 3 \Theta 1
submatrix, and G 2 = 2. The first G 2 = 2 block rows are used as the S table. The 3 rd row is folded onto the
1 st row. Hence, the message size in the 1 st row is 2 and that in the 2 nd row is 1. If K 1 is a multiple
of G 2 , the message size in every row will be the same. Therefore, the network bandwidth is fully
utilized by sending equal-sized messages in each communication step.
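The per-row message sizes can be computed directly from the parameters. The sketch below does
this for ! x (4; 3; 6), using the parameter definitions reconstructed in Section 3.1 (G 1 = gcd(K; P),
K 1 = K/G 1 , P 1 = P/G 1 , G 2 = gcd(P 1 ; Q)); these definitions are our reading of the garbled text.

#include <stdio.h>

static int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

int main(void) {
    int P = 4, K = 3, Q = 6;
    int G1 = gcd(K, P), K1 = K / G1, P1 = P / G1;
    int G2 = gcd(P1, Q);
    /* First (K1 mod G2) rows of S carry ceil(K1/G2) blocks, the rest
     * carry floor(K1/G2); prints sizes 2 and 1 for (4, 3, 6). */
    for (int r = 0; r < G2; r++)
        printf("row %d: %d block(s)\n", r, K1 / G2 + (r < K1 % G2));
    return 0;
}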
3.3 Data transfer cost
In the distributed memory model, the communication cost has two parameters: start-up time
and transfer time. The start-up time, T s , is incurred once for each communication event and is
independent of the message size. Generally, the start-up time consists of the transfer request and
acknowledgment latencies, context switch latency, and latencies for initializing the message header.
The unit transmission time, τ d , is the cost of transferring a message of unit length over the network.
The total transmission time for a message is proportional to its size. Thus, the total communication
time for sending a message of size m units from one processor to another is modeled as T s + m τ d .
In this model, a reorganization of the data elements among the processors, in which each processor
has m units of data for another processor, also takes T s + m τ d time. This model assumes that
there is no node contention. This is ensured by our communication schedules for redistribution.
Using this distributed memory model, the performance of our algorithm can be analyzed as follows.
Assume that an array with N elements is initially distributed cyclic(x) on P processors and then
redistributed to cyclic(Kx) on Q processors. Using our algorithms, the communication costs for
performing ! x (P; K; Q) are (i) L s T s + (N/P ) τ d in the case of non all-to-all communication, and
(ii) Q T s + (N/P ) τ d in the case of all-to-all communication. The proof of this analysis can be
found in [25].
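These costs can be encoded directly in the model. The sketch below is ours, written under the
cost expressions as reconstructed above, with M = N/P units held by each source processor.

/* Cost model: a message of m units costs Ts + m*td.
 * Non all-to-all: Ls steps; all-to-all: Q steps; in both cases the M
 * units held by a source processor leave it in equal-sized messages. */
double cost_non_all_to_all(int Ls, double M, double Ts, double td) {
    return Ls * Ts + M * td;
}
double cost_all_to_all(int Q, double M, double Ts, double td) {
    return Q * Ts + M * td;
}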
4 Experimental Results
Our experiments were conducted on the IBM SP2. The algorithms were written in C and MPI.
Table 2 shows a comparison of the proposed algorithm with the Caterpillar algorithm [11] and
the bipartite matching scheme [5] with respect to the data transfer cost and the schedule and index
computation costs. For the all-to-all communication case with equal-sized messages, the data transfer
cost is the same in each communication step for all three algorithms; also, the schedule computation
can be performed in a simple way. Hence, this case is not considered in Table 2. In Table 2, M
is the size of the array assigned to each source processor (M = N/P ). For the non all-to-all
communication case, L s < Q. Our algorithm as well as the bipartite matching
scheme perform a smaller number of communication steps compared with the Caterpillar algorithm. For
the all-to-all communication case with different message sizes, the messages transmitted in a communication
step are of the same size in the bipartite matching scheme as well as in our algorithm.
Therefore, the network bandwidth is fully utilized and the total transmission cost is τ d M . In the
Caterpillar algorithm, the transmission cost in a communication step is dominated by the largest
message transferred in that step. Let m i denote the size of the largest message sent in communication
step i. Note that Σ i m i ≥ M . The total start-up cost of all the algorithms is QT s since
the number of communication steps is the same.
Table 2: Comparison of data transfer cost and schedule and index computation costs of the Caterpillar
algorithm, the bipartite matching scheme, and our algorithm.
Note: L s < Q for the non all-to-all communication case, and m i is the maximum transferred data
size in communication step i.

                            Non all-to-all communication         All-to-all communication with different message sizes
                            Data transfer    Schedule and index  Data transfer       Schedule and index
                            cost             computation cost    cost                computation cost
Caterpillar algorithm       QT s + τ d M     O(Q)                QT s + τ d Σ i m i  O(Q)
Bipartite matching [5]      L s T s + τ d M  O((P + Q) 3 )       QT s + τ d M        O((P + Q) 3 )
Our algorithm               L s T s + τ d M  O(Q)                QT s + τ d M        O(Q)
On the other hand, the total transmission cost of the bipartite matching scheme and our algorithm
is τ d M , which is less than that of the Caterpillar algorithm. The Caterpillar algorithm as well as
our algorithm perform the schedule and index computation in O(Q) time. However, the schedule
and index computation cost of the bipartite matching scheme is O((P + Q) 3 ).
To evaluate the total redistribution cost and the data transfer cost, we consider 3 different
scenarios corresponding to the relative sizes of P and Q: (Scenario 1) P < Q, (Scenario 2)
P = Q, and (Scenario 3) P > Q. In our experiments, we chose representative values of P , Q,
and K for each scenario. The array consisted of single precision integers. The size of each element
is 4 bytes. The array size was chosen to be a multiple of the size of a superblock to avoid padding
using dummy data.
The rest of this section is organized as follows. Subsection 4.1 reports experimental results of the
overall redistribution time of our algorithm and the Caterpillar algorithm. Subsection 4.2 shows
experimental results for the data transfer time of our algorithm and the Caterpillar algorithm.
Subsection 4.3 compares our algorithm and the bipartite matching scheme with respect to the
schedule computation time.
for (j = 0; j < n1; j++) {
    ts1 = MPI_Wtime();          /* start of run j */
    /* redistribution routine */
    compute schedule and index set;
    for (i = 0; i < n2; i++) {
        if (source processor) {          /* source processor */
            pack message;
            send message to a destination processor;
        } else {                         /* destination processor */
            receive message from a source processor;
            unpack message;
        }
    }
    MPI_Barrier();              /* all nodes finish the redistribution */
    ts2 = MPI_Wtime();
    node_time[j] = ts2 - ts1;
}
compute tavg from node_time of each node
compute Tmax, Tavg, Tmed, Tmin over the n1 runs

Figure 8: Steps for measuring the redistribution time.
4.1 Total redistribution time
In this subsection, we report experimental results for the total redistribution time of our algorithm
and the Caterpillar algorithm. The total redistribution time consists of the schedule computation
time, index computation time, packing/unpacking time, and data transfer time. In our experiments,
the source and the destination processor sets were disjoint. In each communication step, each
sender packs a message before sending it and each receiver unpacks the message after receiving
it. Pack operations in the source processors and unpack operations in the destination processors
were overlapped, i.e., after sending their message in communication step i, senders start to pack
a message for communication in step (i + 1), while receivers start to unpack the message received in
step i.
Our methodology for measuring the total redistribution time is shown in Figure 8. The time
was measured using the MPI_Wtime() call. n1 is the number of runs. A run is an execution of
redistribution. n2 is the number of communication steps. Each processor measures node_time[j]
in the j th run. Generally, source and destination processors which do not perform an interprocessor
communication in the last step complete the redistribution earlier than the processors which receive
a message and unpack it. A barrier synchronization, MPI_Barrier(), was performed at the end
of redistribution.

Figure 9: The maximum, average, median, and minimum total redistribution time for ! 2 (28; 14; 36).

After measuring node_time, the average node_time over the (P + Q) processors is
computed and saved as tavg. The measured value is stored in an array T, as shown in Figure 8.
After the redistribution is performed n1 times, the maximum, minimum, median, and average total
redistribution times are computed over the n1 runs. In our experiments, n1 was set to 20.
In Figure 9, the total redistribution times of our algorithm and the Caterpillar algorithm are
compared on the IBM SP2. In these experiments, 64 nodes were used; 28 were source processors
and 36 were destination processors. The total number of array elements (in single precision) was
varied from 564,480 (2.26 Mbytes) to 14,112,000 (56.4 Mbytes). K was set to 14. Figure 9(a)
shows the maximum time (Tmax in Figure 8). It was observed that there was a large variance in the
measured values. Figure 9(b) shows the results of the average time (Tavg in Figure 8). Figure 9(c)
shows the results using the median time (Tmed in Figure 8). There is still a variance in the measured
values; however, it is smaller than the variance found in the average and the maximum time.
Figure 9(d) shows the minimum time for redistribution (Tmin in Figure 8). This plot is a more
accurate observation of the redistribution time since the minimum time has the smallest component
due to OS interference and other effects related to the environment. In the remaining plots in this
section, we show Tmin only.
The redistribution ! 2 (28; 14; 36) is a non all-to-all communication case. In the non all-to-all
communication case, the messages in each communication step are of the same size. The total
number of communication steps is 18 in our algorithm, while it is 36 in the Caterpillar algorithm.
Therefore, the redistribution time of our algorithm is theoretically 50% of that of the Caterpillar
algorithm. In the experimental results shown in Figure 9(d), the redistribution time of our algorithm
is between 51.8% and 55.1% of that of the Caterpillar algorithm.
Figure 10 shows several experimental results for the non all-to-all communication case. Figures
10(a), (b), and (c) show results for three redistributions with Q = 78 destination processors. The
number of communication steps using our algorithm is 26, 39, and 52, respectively, while the number
of communication steps using the Caterpillar algorithm is 78. Therefore, the redistribution time of
our algorithm can be expected to be reduced by 67%, 50%, and 33% when compared with that of
the Caterpillar algorithm. Our experimental results confirm these reductions. Similar reductions in
time were achieved in the other experimental results shown in Figure 10.
Figure 11 compares the overall redistribution time for the all-to-all communication case with
different message sizes. Figure 11(a) reports the experimental results for ! 4 (28; 6; 36). The array
size was varied from 677,376 (2.71 Mbytes) to 16,934,400 (67.7 Mbytes). For this case, both
Figure 10: Total redistribution time for non all-to-all communication cases.
Figure 11: Total redistribution time for all-to-all communication cases with different message sizes.
for (j = 0; j < n1; j++) {
    /* redistribution routine */
    compute schedule and index set;
    for (i = 0; i < n2; i++) {
        if (source processor) {          /* source processor */
            pack message;
            ts1 = MPI_Wtime();
            send message to a destination processor;
            ts2 = MPI_Wtime();
            node_tr[j] += ts2 - ts1;     /* accumulate transfer time */
        } else {                         /* destination processor */
            ts1 = MPI_Wtime();
            receive message from a source processor;
            ts2 = MPI_Wtime();
            node_tr[j] += ts2 - ts1;
            unpack message;
        }
    }
}
compute tavg from node_tr of each node
compute Tmax, Tavg, Tmed, Tmin over the n1 runs

Figure 12: Steps in measuring the data transfer time.
algorithms have the same number of steps (36). Within a superblock, half the messages are two
blocks while the other half are one block. In our algorithm, equal-sized messages are transferred
in each communication step. Therefore, during half the steps two-block messages are sent, while
during the other half one-block messages are sent. The Caterpillar algorithm does not attempt to
schedule the communication operations to send equal-sized messages. Therefore, the redistribution
time in each step is determined by the time to transfer the largest message. Theoretically, the
total redistribution time of our algorithm is reduced by 25% compared with that of the Caterpillar
algorithm. In our experiments, we achieved up to a 17.9% reduction in redistribution time. When the
array size is small, both algorithms have approximately the same performance since the start-up
cost dominates the overall data transfer cost. As the array size is increased, the reduction in the
time to perform the redistribution using our algorithm improves. For other scenarios, we obtained
similar results (see Figure 11(b), (c), and (d)).
4.2 Data transfer time
In this subsection, we report the experimental results of the data transfer time of our algorithm and
the Caterpillar algorithm. The experiments were performed in the same manner as discussed in
Subsection 4.1. The data sets used in these experiments are the same as those used in the previous
subsection. The data transfer time of each communication step is first measured. Then the total
data transfer time is computed by summing up the measured time for all the communication steps.
The methodology for measuring the time is shown in Figure 12.
In Figure 13, the data transfer times of our algorithm and the Caterpillar algorithm are
reported. The experiments were performed on the IBM SP2. Figure 13(a) reports the maximum
data transfer time (Tmax in Figure 12). A large variation in the measured values was observed.
Figures 13(b) and (c) show the average time (Tavg in Figure 12) and the median time (Tmed in
Figure 12) of the data transfer time, respectively. These values are computed using the maximum
time (Tmax). Figure 13(d) shows the minimum data transfer time (Tmin). This plot is a more
accurate observation of the data transfer time since the minimum time has the smallest component
due to OS interference and other effects related to the environment. Therefore, it is a more accurate
comparison of the relative performance of the redistribution algorithms. In the remainder of this
section, we show plots corresponding to Tmin only.
The redistribution ! 2 (28; 14; 36) is a non all-to-all communication case. The messages in each
communication step are of the same size. The total number of communication steps is 18 using our
algorithm, whereas the total number of steps is 36 using the Caterpillar algorithm. Therefore, the
data transfer time of our algorithm is theoretically 50% of that of the Caterpillar algorithm. In the
experimental results (see Figure 13(d)), the data transfer time of our algorithm is between 49.2%
and 53.7% of that of the Caterpillar algorithm. Figure 14 shows several experimental results for the
non all-to-all communication case. Similar reductions in time were achieved in these experiments.
Figure 15 reports the experimental results for the all-to-all communication case with different
message sizes. The data transfer time in the all-to-all communication case is sensitive to network
contention since every source processor communicates with every destination processor. For
! 4 (28; 6; 36), both algorithms have the same number of steps (36). Within a superblock, half the
messages are two blocks while the other half are one block. In our algorithm, equal-sized messages
are transferred in each communication step. Therefore, during half the steps two-block messages
are sent, while one-block messages are sent during the other half. The Caterpillar algorithm does
not attempt to send equal-sized messages in each communication step. Therefore, the data transfer
time in each step is determined by the time to transfer the largest message. Theoretically, the
data transfer time of our algorithm is reduced by 25% when compared with that of the Caterpillar
Figure 13: Maximum, average, median, and minimum data transfer times for ! 2 (28; 14; 36).
Figure 14: Data transfer time for non all-to-all communication cases.
Figure 15: Data transfer time for all-to-all communication cases with different message sizes.
Table 3: Comparison of schedule computation time (µsecs); the procedure in [23] was used for
bipartite matching.

P and Q    K     Our Scheme    Bipartite Matching Scheme [5]
           48    276.587       207762.816
algorithm. In experiments with large message sizes, we achieved up to 15.5% reduction. With
small messages, both algorithms have approximately the same performance since the start-up time
dominates the data transfer time. Other experimental results are reported in Figure 15(b), (c),
and (d).
4.3 Schedule computation time
The time for computing the schedule in the Caterpillar algorithm as well as in our algorithm is
negligible compared with the total redistribution time. Even though the schedule in the Caterpillar
algorithm is simpler than ours, the Caterpillar algorithm needs time for index computation to
identify the blocks to be packed in a communication step. This time is approximately the same as
our schedule computation time.
The schedule computation time of the bipartite matching scheme [5] is much higher than that of
the Caterpillar algorithm and our algorithm. It is in the range of hundreds of msecs which is quite
significant. The schedule computation cost of the bipartite matching scheme increases rapidly as
the number of processors increases. On the other hand, our algorithm computes the communication
schedule efficiently. Each processor computes its entries in the send communication schedule table.
Thus, the schedule is computed in a distributed way. The schedule computation time is in the
range of hundreds of µsecs. The comparison between our scheme and the bipartite matching scheme
with respect to the schedule computation time is shown in Table 3. Here, the time for our scheme
includes the index computation time. For the bipartite matching scheme, the time shown is the
schedule computation time only.
Conclusions
In this paper, we presented an efficient algorithm for performing redistribution from cyclic(x) on P
processors to cyclic(Kx) on Q processors. The proposed algorithm was represented using the generalized
circulant matrix formalism. Our algorithm minimizes the number of communication steps and
avoids destination node contention in each communication step. The network bandwidth is fully
utilized by ensuring that messages of the same size are transferred in each communication step.
Therefore, the total data transfer cost is minimized.
The schedule and index computation costs are also important in performing run-time redistribution.
In our algorithm, the schedule and the index sets are computed in O(max(P; Q)) time.
This computation is extremely fast compared with the bipartite matching scheme in [5], which
takes O((P + Q) 3 ) time. Our schedule and index computation times are small enough to be
negligible compared with the data transfer time, making our algorithms suitable for run-time data
redistribution.
Acknowledgment
We would like to thank the staff at MHPCC for their assistance in evaluating our algorithms on
the IBM SP-2. We also would like to thank Manash Kirtania for his assistance in preparing this
manuscript.
--R
Redistribution of Block-Cyclic Data Distributions Using MPI
Processor Mapping Techniques Toward Efficient Data Redistribution.
Scheduling Block-Cyclic Array Redistribution
Parallel Matrix Transpose Algorithms on Distributed Memory Concurrent Computers.
Parallel Implementation of Synthetic Aperture Radar on High Performance Computing Platforms.
Fast Runtime Block Cyclic Data Redistribution on Multiprocessors
Message Passing Interface Forum.
Runtime Array Redistribution in HPF Programs.
Efficient Algorithms for Array Redistribution.
Automatic Generation of Efficient Array Redistribution Routines for Distributed Memory Multicomputers.
Compilation Techniques for Block-Cyclic Distributions
Multiphase Array Redistribution: Modeling and Evaluation
An Approach to Communication Efficient Data Redistribution.
Communication Issues in Heterogeneous Embedded Systems.
A Basic-Cycle Calculation Technique for Efficient Dynamic Data Redistribution
Scalable Portable Implementations of Space-Time Adaptive Processing
Efficient Algorithms for Block-Cyclic Redistribution of an Array
A Shortest Augmenting Path Algorithm for Dense and Sparse Linear Assignment Problems.
--TR
--CTR
Stavros Souravlas , Manos Roumeliotis, A pipeline technique for dynamic data transfer on a multiprocessor grid, International Journal of Parallel Programming, v.32 n.5, p.361-388, October 2004
Wang , Minyi Guo , Daming Wei, A Divide-and-Conquer Algorithm for Irregular Redistribution in Parallelizing Compilers, The Journal of Supercomputing, v.29 n.2, p.157-170, August 2004
Jih-Woei Huang , Chih-Ping Chu, An Efficient Communication Scheduling Method for the Processor Mapping Technique Applied Data Redistribution, The Journal of Supercomputing, v.37 n.3, p.297-318, September 2006
Ian N. Dunn , Gerard G. L. Meyer, QR factorization for shared memory and message passing, Parallel Computing, v.28 n.11, p.1507-1530, November 2002
Ching-Hsien Hsu , Shih-Chang Chen , Chao-Yang Lan, Scheduling contention-free irregular redistributions in parallelizing compilers, The Journal of Supercomputing, v.40 n.3, p.229-247, June 2007
Ching-Hsien Hsu, Sparse Matrix Block-Cyclic Realignment on Distributed Memory Machines, The Journal of Supercomputing, v.33 n.3, p.175-196, September 2005
Emmanuel Jeannot , Frdric Wagner, Scheduling Messages For Data Redistribution: An Experimental Study, International Journal of High Performance Computing Applications, v.20 n.4, p.443-454, November 2006
Minyi Guo , Yi Pan, Improving communication scheduling for array redistribution, Journal of Parallel and Distributed Computing, v.65 n.5, p.553-563, May 2005
Ching-Hsien Hsu , Kun-Ming Yu, A Compressed Diagonals Remapping Technique for Dynamic Data Redistribution on Banded Sparse Matrix, The Journal of Supercomputing, v.29 n.2, p.125-143, August 2004
Minyi Guo , Ikuo Nakata, A Framework for Efficient Data Redistribution on Distributed Memory Multicomputers, The Journal of Supercomputing, v.20 n.3, p.243-265, November 2001 | block-cyclic distribution;interprocessor communication;redistribution algorithms |
331220 | Tight Bounds for Prefetching and Buffer Management Algorithms for Parallel I/O Systems. | AbstractThe I/O performance of applications in multiple-disk systems can be improved by overlapping disk accesses. This requires the use of appropriate prefetching and buffer management algorithms that ensure the most useful blocks are accessed and retained in the buffer. In this paper, we answer several fundamental questions on prefetching and buffer management for distributed-buffer parallel I/O systems. First, we derive and prove the optimality of an algorithm, P-min, that minimizes the number of parallel I/Os. Second, we analyze P-con, an algorithm that always matches its replacement decisions with those of the well-known demand-paged MIN algorithm. We show that P-con can become fully sequential in the worst case. Third, we investigate the behavior of on-line algorithms for multiple-disk prefetching and buffer management. We define and analyze P-lru, a parallel version of the traditional LRU buffer management algorithm. Unexpectedly, we find that the competitive ratio of P-lru is independent of the number of disks. Finally, we present the practical performance of these algorithms on randomly generated reference strings. These results confirm the conclusions derived from the analysis on worst case inputs. | Introduction
The increasing imbalance between the speeds of processors and I/O devices has
resulted in the I/O subsystem becoming a bottleneck in many applications. The
use of multiple disks to build a parallel I/O subsystem has been advocated to increase
I/O performance and system availability [5], and most high-performance
systems incorporate some form of I/O parallelism. Performance is improved by
overlapping accesses at several disks using judicious prefetching and buffer management
algorithms that ensure that the most useful blocks are accessed and
retained in the buffer.
A parallel I/O system consists of D independent disks, each with its own disk
buffer, that can be accessed in parallel. The data for the computation is spread
out among the disks in units of blocks. A block is the unit of retrieval from a
disk. The computation is characterized by a computation sequence, which is the
ordered sequence of blocks that it references. In our model all accesses are read-
only. Prefetching (reading a data block before it is needed by the computation)
(Footnotes: Research partially supported by a grant from the Schlumberger Foundation; research
partially supported by NSF grant CCR-9303011.)
is a natural mechanism to increase I/O parallelism. When the computation demands
a disk-resident block of data, concurrently a data block can be prefetched
from each of the other disks in parallel, and held in buffer until needed. This
requires discarding a block in the buffer to make space for the prefetched block.
Some natural questions that arise are: under what conditions is it worthwhile to
discard a buffer-resident block to make room for a prefetch block which will be
used only some time later in the future? And, if we do decide to discard a block,
what replacement policy should be used in choosing the block to be replaced.
In this paper we answer several fundamental questions on prefetching and
buffer management for such parallel I/O systems. The questions we address are:
what is an optimal prefetch and buffer management algorithm, and how good
are the algorithms proposed earlier for sequential (single-disk) systems in this
context? We obtain several interesting results, which are informally stated below
and more precisely stated in Section 2. We find and prove the optimality of an
algorithm, P-MIN, that minimizes the number of parallel I/Os. This contrasts
with the recent results on prefetching to obtain CPU-disk overlap [4], where
no efficient algorithm to find the optimal policy is known. Secondly, we show
that P-CON, an algorithm that attempts to optimize the number of I/Os on
each disk, can have very poor parallel performance. Finally we investigate the
behavior of semi-on-line algorithms using parallel I/O. The concept of semi-online
algorithms that we consider in this paper captures the dual requirements
of prefetching (which needs some future knowledge) and on-line behavior (no
future knowledge). We define and analyze P-LRU, a semi-on-line version of the
traditional Least Recently Used (LRU) buffer-management algorithm. We find
the performance of P-LRU is independent of the number of disks, in contrast
to P-CON where the performance can degrade in proportion to the number of
disks.
In contrast to single-disk systems (sequential I/O) for which these issues have
been studied extensively (e.g. [2, 6]), there has been no formal study of these
issues in the parallel I/O context. In the sequential setting the number of block
accesses (or I/Os) is a useful performance metric; scaling by the average block
access time provides an estimate of the I/O time. In contrast, in the multiple-disk
case there is no direct relationship between the number of I/Os and the I/O
time, since this depends on the I/O parallelism that is attained. The goals of
minimizing the number of I/Os done by each disk and minimizing the parallel
I/O time can conflict. Traditional buffer management algorithms for single-disk
systems have generally focused on minimizing the number of I/Os. In the parallel
context it may be useful to perform more I/Os from each disk than the absolute
minimum required if the disk were operated in isolation, if this allows a
large number of them to be overlapped.
The rest of the paper is organized as follows. Section 1.1 summarizes related
work. Section 2 develops the formal model and summarizes the main results. In
Section 3.1 we derive a tight upper bound for P-CON algorithm. In Section 3.2
we prove the optimality of P-MIN. Section 3.3 analyzes the performance of the
semi-on-line algorithm P-LRU.
1.1 Related Work
In single-disk systems, buffer management (or paging problem) algorithms were
studied [2, 6, 11], and several policies (LRU, FIFO, Longest Forward Distance,
etc.) were proposed and analyzed. The Longest Forward Distance [2] policy minimizes
the number of page faults, and is therefore called the MIN algorithm. All
these policies use demand I/O and deterministic replacement, i.e. they fetch
only when the block being referenced is not in the buffer, and the choice of
the replaced block is deterministic. (Randomized replacement algorithms, e.g.
see [8], are beyond the the scope of this paper.) In the sequential case, it is well
known [11] that prefetching does not reduce the number of I/Os required.
Sleator and Tarjan [11] analyzed the competitive ratio of on-line paging algorithms
relative to the off-line optimal algorithm MIN. They showed that LRU's
performance penalty can be proportional to the size of fast memory, but no other
on-line algorithm can, in the worst case, do much better. These fundamental results
have been extended in several ways, most often to include models that allow
different forms of lookahead [3, 1, 9, 7]. All these works deal with the question of
which buffer block to evict. In contrast, in our situation the additional question
that arises is when to fetch a block and evict some other.
Cao et al. [4] examined prefetching from a single disk to overlap CPU and I/O
operations. They defined two off-line policies called aggressive and conservative,
and obtained bounds on the elapsed time relative to the optimal algorithm.
We use prefetching to obtain I/O parallelism with multiple disks, and use the
number of parallel I/Os (elapsed I/O time) as the cost function. The P-MIN
and P-CON algorithms analyzed here generalize the aggressive and conservative
policies respectively. However, while aggressive is suboptimal in the model of [4],
P-MIN is proved to be the optimal algorithm in our model. The prefetching
algorithm for multiple disks analyzed in [10] assumed a global buffer and read-once
random data.
We also investigate semi-on-line algorithms using parallel I/O. Since prefetching
involves reading blocks that are required in the future (relative to where the
computation has progressed), this presents a natural situation where lookahead
is necessary. This inspires us to define a lookahead version of LRU, P-LRU, in
which the minimum possible lookahead of one block beyond those currently in
the buffer is known for each disk. 3 We find the performance of P-LRU is independent
of the number of disks, in contrast to P-CON whose performance can
degrade in proportion to the number of disks.
2 Preliminaries
The computation references the blocks on the disks in an order specified by the
consumption sequence, \Sigma . When a block is referenced the buffer for that disk is
checked; if the block is present in the buffer it is consumed by the computation,
Recently, Breslauer [7] arrived at this lookahead definition independently in a sequential
demand context.
which then proceeds to reference the next block in \Sigma . If the referenced block is
not present in the disk buffer, then an I/O (known as a demand I/O) for the
missing block is initiated from that disk. If only demand I/Os were initiated,
then the other disks in the system would idle while this block was being fetched.
However, every demand I/O at a disk provides a prefetch opportunity at the
other disks, which may be used to read blocks that will be referenced in the near
future. For example, consider a 2-disk system holding blocks (a 1 ; a 2 ) and (b
on disks 1 and 2 respectively. If strictly demand I/O
would require four non-overlapped I/Os to fetch the blocks. A better strategy
is to overlap reads using prefetching. During the demand I/O of block a 1 , the
second disk could concurrently prefetch b 1 ; after a 1 and b 1 have been consumed,
a demand I/O for block b 2 will be made concurrently with a prefetch of block
a 2 . The number of parallel I/Os in this case is now two.
While prefetching can increase the I/O parallelism, the problem is complicated
by the finite buffer sizes. For every block read from a disk some previously
fetched block in the corresponding buffer must be replaced. For prefetch blocks
the replacement decision is being made earlier than is absolutely necessary, since
the computation can continue without the prefetched block. These early replacement
choices can be much poorer than replacement choices made later, since
as the computation proceeds, other, more useful replacement candidates may
become available. Of course, once a block becomes a demand block then the
replacement cannot be deferred. A poor replacement results in a greater number
of I/Os as these prematurely discarded blocks may have to be fetched
repeatedly into the buffer. Thus there is a tradeoff between the I/O parallelism
that can be achieved (by using prefetching), and the increase in the number of
I/Os required (due to poorer replacement choices).
2.1 Definitions
The consumption sequence \Sigma is the order in which blocks are requested by the
computation. The subsequence of \Sigma consisting of blocks from disk i will be
denoted by \Sigma i . Computation occurs in rounds with each round consisting of an
I/O phase followed by a computation phase. In the I/O phase a parallel I/O is
initiated and some number of blocks, at most one from any disk, are selected
to be read. For each selected disk, a block in the corresponding disk buffer is
chosen for replacement. When all new blocks have been read from the disks,
the computation phase begins. The CPU consumes zero or more blocks that are
present in the buffer in the order specified by \Sigma . If at any point the next block of
\Sigma is not present in a buffer, then the round ends and the next round begins. The
block whose absence forced the I/O is known as a demand block; blocks that are
fetched together with the demand block are known as prefetch blocks. An I/O
phase may also be initiated before the computation requires a demand block. In
this case all the blocks fetched in are prefetch blocks. We will often refer to the
I/O phase of a round as an I/O time step.
An I/O schedule with makespan T is a sequence hF 1 ; F 2 ; : : : ; F T i,
where F k is the set of blocks (at most one from each disk) fetched by the parallel
I/O at time step k. The makespan of a schedule is the number of I/O time steps
required to complete the computation.
- A valid schedule is one in which axioms A1 and A2 are satisfied.
- A1: A block must be present in the buffer before it can be consumed.
- A2: There are at most M blocks in any disk buffer at any time, where M is
the buffer size.
- An optimal schedule is a valid schedule with minimal makespan among all valid
schedules.
- A normal schedule is a valid schedule in which each F k , 1 k T , contains
a demand block.
- A sequential schedule is a valid schedule in which the blocks from each disk i
are fetched in the order of \Sigma i .
At the start of a round, let U i denote the next referenced block of \Sigma i that is not
currently in the buffer of disk i. Define the min-block of disk i to be the block in
disk i's buffer with the longest forward distance to the next reference.
P-MIN is a normal, sequential schedule in which at each I/O step k, U i ∈ F k
unless all blocks in disk i's buffer are referenced before U i . If U i ∈ F k , then the
min-block of disk i is replaced with U i .
P-CON is a normal, sequential schedule in which at every I/O step k, U i ∈ F k
provided the min-block of disk i now is the same as the min-block would be if
U i were fetched on demand. If U i ∈ F k , then the min-block of disk i is replaced
with U i .
P-LRU is a normal, sequential schedule in which at every I/O step k, U i ∈ F k
unless all blocks in disk i's buffer are referenced before U i . If U i ∈ F k , then from
among the blocks in the buffer whose next reference is not before that of U i ,
the least recently used block is chosen and replaced with U i .
Notice that all three schedules defined above are normal. That is in every I/O
step, one disk is performing a demand fetch and the rest are either performing
a prefetch or are idle. Of these P-MIN and P-LRU are greedy strategies, and
will almost always attempt to prefetch the next unread block from a disk. The
only situation under which a disk will idle is if every block in the buffer will be
referenced before the block to be fetched. Note that this greedy prefetching may
require making "suboptimal" replacement choices, which can result in an increase
in the number of I/Os done by that disk. We show, however, that the P-MIN policy
has the minimal I/O time, and is therefore optimal.
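All three policies are built around the same primitive: finding U i and the min-block of a disk.
Below is a minimal sketch of the min-block computation; it is our illustration, assuming buf holds
the M buffered block ids of one disk and sigma[pos..n-1] is the unconsumed remainder of \Sigma i .

/* Index in buf[0..m-1] of the min-block: the buffered block whose next
 * reference in sigma[pos..n-1] is furthest away; a block that is never
 * referenced again has the longest forward distance of all. */
int min_block(const int *buf, int m, const int *sigma, int pos, int n) {
    int best = 0, best_dist = -1;
    for (int i = 0; i < m; i++) {
        int dist = n - pos;              /* "never referenced again" */
        for (int t = pos; t < n; t++)
            if (sigma[t] == buf[i]) { dist = t - pos; break; }
        if (dist > best_dist) { best_dist = dist; best = i; }
    }
    return best;
}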
An example with D = 2 is presented below. Let the blocks on disk 1 (2) be a i (b i ).

Fig. 1. Examples of I/O Schedules (the rounds of P-MIN, P-CON, and P-LRU for the
example sequence; each row lists, per disk, the block fetched/replaced, and the blocks
consumed by the CPU).
Figure
1 shows the I/O schedule using different policies for the example se-
quence. The entries in the second and third columns indicate the blocks that are
fetched and replaced from that disk at that round. Bold and italic faced blocks
indicate a demand block, and a prefetch block respectively.
In contrast to P-MIN, the conservative strategy [4] P-CON, is pessimistic.
It does not perform a prefetch unless it can replace the "best" block, so that
the number of I/Os done by any disk is the smallest possible. However, while
minimizing the number of I/Os done by a disk, it may result in serialization of
these accesses, and perform significantly worse than the optimal algorithm. Note
that in Figure 1 at step 4, no block is fetched from disk 2 by P-CON. This is
because the only candidate for replacement at this time (the current min-block)
is block b 1 ; however, if b 4 were a demand block, the min-block would be b 3 .
To take advantage of a prefetch opportunity, any algorithm must know which
is the next unread block in \Sigma i . That is, it requires a lookahead of at
least one block beyond those currently in its buffer. The replacement decision
made by P-LRU is based solely on examining the current blocks in the buffer,
and tracking which of them will be referenced before the next unread block.
From those whose next reference is not before that of the next unread block, the
least recently consumed block is chosen as the replacement candidate. If P-LRU
is applied to the sequence \Sigma used for P-MIN and P-CON, a schedule of 12 I/O
steps will be obtained (see Fig. 1). In the next section, we quantify precisely the
performance of these three algorithms.
2.2 Summary of Results
Let T P-MIN , T P-CON and T P-LRU be the number of I/O time steps required by
P-MIN, P-CON and P-LRU respectively. Let T opt be the number of I/O steps
required by an optimal schedule. Let N denote the length of \Sigma ; also recall that
D is the number of disks and M the size of each disk buffer in blocks. The
technical results of the paper are as follows.
1. The worst-case ratio between the makespan of a P-CON schedule and the
corresponding optimal schedule is bounded by D. That is, at worst P-CON
can serialize all disk accesses without increasing the number of I/Os performed
by any disk (see Theorem 7).
2. The worst-case bound of P-CON stated above is tight. That is, there are
consumption sequences, for which P-CON completely serializes its accesses
(see Theorem 8).
3. P-MIN is an optimal schedule. That is, it minimizes the number of parallel
I/O time steps over all other valid schedules (see Theorem 11).
4. The worst-case ratio between the makespan of a P-LRU schedule and the
corresponding optimal schedule is bounded by M . That is, at worst P-LRU
can inflate the number of I/Os performed by any disk to that done by a
serial LRU algorithm (see Theorem 13).
5. The worst-case bound of P-LRU stated above is tight. That is, there are
consumption sequences, for which P-LRU does inflate the accesses for each
disk by a factor of M (see Theorem 14).
3 Detailed Results
3.1 Bounds for P-CON
We begin with a simple upper bound for T P-CON . Let T MIN denote the maximum
number of I/Os done by the sequential MIN algorithm to a single disk.
Theorem 7. For any consumption sequence, T P-CON ≤ D T opt .
Proof. We show that T P-CON ≤ D T MIN ≤ D T opt . The I/Os made by P-CON to
disk i for consumption sequence \Sigma are exactly the I/Os done by the sequential
MIN algorithm to disk i for sequence \Sigma i . Hence the number of I/Os performed
by any disk in P-CON is bounded by T MIN . At worst, none of the accesses of
any of the disks can be overlapped, whence the first inequality follows. Finally,
the second inequality follows since the optimal parallel time for D disks cannot
be smaller than the minimal number of I/Os for a single disk. □
Theorem 8. The bound of Theorem 7 is tight.
Proof (sketch). Construct the following four length-M sequences, where B i (j) is
the j-th block on disk i. Also define \Sigma from these sequences, where (u) N means
N repetitions of the parenthesized sequence u. It can be argued that for \Sigma the
P-CON schedule is roughly D times as long as the P-MIN schedule, which has
length about 2NM . Thus, T P-CON /T opt is lower bounded by a quantity that
approaches D.
3.2 Optimality of P-MIN
In this section we show that P-MIN requires the minimal number of parallel I/O
steps among all valid schedules. For the proof, we show how to transform an
optimal schedule OPT with makespan L, into a P-MIN schedule with the same
makespan.
Schedules α and β are said to match for time steps [T ] if, for
every t, t ∈ [T ], the blocks fetched and replaced from each disk in the two
schedules are the same.
Lemma 10. Assume that α is a valid schedule of length W . Let γ be another
schedule which matches α for [T − 1]. After the I/O at time step T , the buffers
of α and γ for some disk i differ in one block: specifically α has block V but not
block U , and γ has block U but not block V . Assume that V is referenced after
U in the consumption sequence following the references at time T − 1. Then we
can construct a valid schedule β of length W , such that β and α match for [T − 1],
and β and γ match at time step T .
Proof. Let T + δ, δ > 0, be the first time step after T at which α fetches or discards
either block V or U . It can either discard block V , or fetch block U , or do both,
at T + δ. Construct schedule β as follows: β matches α for time steps [W ] except
at time steps T and T + δ.
At T , β fetches and replaces the same blocks as γ. At T + δ, one of the
following must occur:
- α fetches a block Z ≠ U and discards block V : then in the construction β
will also fetch block Z, but will discard block U .
- α fetches block U and discards block Z, Z ≠ V : then β fetches block V
and will also discard block Z.
- α fetches block U and discards block V : then β does not fetch or discard
any block.
In all three cases above, following the I/O at T + δ, α and β will have the
same blocks in the buffer. Since β fetches and replaces the same blocks as α at
all other time steps, the buffers of α and β will be the same for all time
steps after the I/O at T + δ.
At each time step t, 1 ≤ t ≤ W , β will consume the same blocks as consumed
by α at t. Clearly, β satisfies axiom A2. We show that β is a valid schedule by
showing that axiom A1 is satisfied for all blocks consumed by β.
Since α and β have the same buffer before the I/O at T and after the I/O at
T + δ, the blocks consumed by α at any time step t outside [T; T + δ) can
also be consumed by β at the same time step.
Let T ∗ be the first time step after the I/O at T at which either block U
or V is consumed by α. Since α does not have U in buffer till at least after the
I/O at T + δ, and, by the hypothesis, V is consumed after U , T ∗ ≥ T + δ. Hence
only blocks X ≠ U; V can be consumed by α at time steps t, T ≤ t < T ∗ .
Since the buffers of α and β agree except on {U; V }, X can also be consumed
by β at the same time step. Since α is a valid schedule, all consumptions of β
also satisfy axiom A1. Hence, β is a valid schedule. □
Theorem 11. P-MIN is an optimal schedule.
Proof. Let \Delta and \Omega denote the schedules created by the P-MIN and OPT
algorithms respectively. We successively transform \Omega into another valid schedule
that matches \Delta and has the same length as \Omega . This will show that the P-MIN
schedule is optimal.
The proof is by induction. For the Induction Hypothesis, assume that at time
step t, \Omega has been transformed to a valid schedule \Omega t which matches \Delta at time
steps [t]. We show how to transform \Omega t to \Omega t+1 below.
We discuss the transformation for an arbitrary disk at time step t + 1; the
same construction is applied to each disk independently. If \Delta and \Omega t match at
t + 1, let \Omega t+1 be the same as \Omega t . Suppose \Delta and \Omega t differ at time step
t + 1; then one of the following three cases must occur at t + 1. We consider
each case separately.
- \Delta fetches a block but \Omega t does not fetch any block: Let \Delta fetch
block P and discard block Q at t + 1. Since \Delta always fetches blocks in the
order in which they are referenced, P will be referenced before Q. From the
Induction Hypothesis, \Delta and \Omega t have the same buffer at the start of time
step t + 1. Hence after the I/O at t + 1, \Delta and \Omega t differ in one block: \Delta has
block P but not block Q, while \Omega t has Q but not P .
Using Lemma 10 with α = \Omega t , U = P , V = Q, and T = t + 1, we can construct a valid
schedule \Omega t+1 that matches \Omega t at time steps [t] and
\Delta at time t + 1. Hence, the Induction Hypothesis is satisfied for t + 1.
- \Omega t fetches a block but \Delta does not fetch any block: Since \Delta does
not fetch any block at time step t + 1, every block in the buffer at the start
of time step t + 1 will be consumed before any block not currently in the
buffer is referenced.
\Delta and \Omega t have the same buffer at the start of time step t + 1. If \Omega t
brings in a fresh block (P ) at t + 1, it must discard some block (Q). Since
\Delta chose to retain block Q in preference to fetching block P , either Q
must be referenced before P , or neither P nor Q will be referenced again.
In the first case, using Lemma 10 with α = \Omega t , U = Q, V = P , and T = t + 1, we can
construct \Omega t+1 , a schedule that satisfies the Induction Hypothesis for t + 1.
In the second case, \Omega t+1 is the same as \Omega t , except that at time step t + 1,
\Omega t+1 does not fetch any block. Since the buffers of \Omega t and \Omega t+1 agree on
all blocks except P and Q, and these two blocks are never referenced again,
all blocks consumed by \Omega t at a time step can also be consumed by \Omega t+1 at
that time.
and\Omega t fetch different blocks: Suppose that \Delta fetches block P
and discards block Q at t
and\Omega t fetches block Y and discards block Z
at t + 1. Assume that Q 6= Z, since otherwise the buffers
of\Omega t and \Delta differ
in just the pair of blocks fP; Y g, and we can easily
construct\Omega t+1 as before
by using Lemma 10 with ff
By the Induction Hypothesis, \Delta
and\Omega t have the same buffer at the start of
time step t + 1. Hence after the I/O at t
and\Omega t differ in two blocks;
specifically,
is the set of blocks in the buffer of schedule \Theta.
be the first time after
fetches or replaces
a block W 2 fP; Q; Y; Zg. It can either discard block Q or Y , or fetch
block P or Z, or some appropriate combination of these (see cases below),
at Construct
t+1 as
matches\Omega t at all time
steps
fetches P and discards Q,
following the actions of \Delta at this time step. Hence after the I/O at
Zg.
At t + 1 + δ, one of the following will occur:
* \Omega t fetches a block S ∉ {P; Z} and discards Q: \Omega 0 t+1 also fetches S,
but discards Z. After the I/O at t + 1 + δ, buffer(\Omega t ) − buffer(\Omega 0 t+1 ) = {Y }
and buffer(\Omega 0 t+1 ) − buffer(\Omega t ) = {P }.
* \Omega t fetches P and discards Q: \Omega 0 t+1 fetches Y and discards Z. After
the I/O at t + 1 + δ, buffer(\Omega 0 t+1 ) = buffer(\Omega t ).
* \Omega t fetches Z and discards Q: \Omega 0 t+1 does nothing at this step. After
the I/O at t + 1 + δ, buffer(\Omega t ) − buffer(\Omega 0 t+1 ) = {Y } and
buffer(\Omega 0 t+1 ) − buffer(\Omega t ) = {P }.
* \Omega t fetches a block S ∉ {P; Z} and discards Y : \Omega 0 t+1 also fetches S, but
discards P . After the I/O at t + 1 + δ, buffer(\Omega t ) − buffer(\Omega 0 t+1 ) = {Q} and
buffer(\Omega 0 t+1 ) − buffer(\Omega t ) = {Z}.
* \Omega t fetches P and discards Y : \Omega 0 t+1 does not fetch any block at this
time step. After the I/O at t + 1 + δ, buffer(\Omega t ) − buffer(\Omega 0 t+1 ) = {Q} and
buffer(\Omega 0 t+1 ) − buffer(\Omega t ) = {Z}.
* \Omega t fetches Z and discards Y : \Omega 0 t+1 fetches Q and discards P . After
the I/O at t + 1 + δ, buffer(\Omega 0 t+1 ) = buffer(\Omega t ).
* \Omega t fetches P and discards a block S ∉ {Q; Y }: \Omega 0 t+1 fetches Y and
discards block S. After the I/O at t + 1 + δ, buffer(\Omega t ) − buffer(\Omega 0 t+1 ) = {Q}
and buffer(\Omega 0 t+1 ) − buffer(\Omega t ) = {Z}.
* \Omega t fetches Z and discards a block S ∉ {Q; Y }: \Omega 0 t+1 fetches Q and
discards block S. After the I/O at t + 1 + δ, buffer(\Omega t ) − buffer(\Omega 0 t+1 ) = {Y }
and buffer(\Omega 0 t+1 ) − buffer(\Omega t ) = {P }.
Consider the consumptions made by Ω_t at time steps T, t + 1 ≤ T ≤ t + δ.
Notice that in the consumption sequence P must precede both Q and Y,
and Z must precede Q. The constraints on P follow since Δ fetches P and
discards Q, and fetches P in preference to Y. The constraint on Z follows
since Δ discards Q rather than Z.
We now show how to convert Ω′_{t+1} to Ω_{t+1}. If the buffers of Ω_t and Ω′_{t+1}
are the same after the I/O at t + δ, let Ω_{t+1} = Ω′_{t+1}. Otherwise, the
buffers must differ in either the pair of blocks {Q, Z} or {Y, P}.
We will construct Ω_{t+1} by concatenating the prefix of Ω′_{t+1} for time
steps 1 through t + δ with a schedule β that will be constructed using
Lemma 10, as described below.
Let α and γ be, respectively, the schedules consisting of the suffixes of Ω_t and
Ω′_{t+1} for time steps greater than t + δ. If at the end of the
I/O at t + δ the buffers of Ω_t and Ω′_{t+1} differ in {Y, P}, then Lemma 10
is applied to α and γ with that pair; if they differ in {Q, Z}, it is applied with {Q, Z}.
Applying Lemma 10 in this way, we construct the desired sequence β.
Ω_{t+1} is obtained by concatenating the prefix of Ω′_{t+1}
with β.
The consumptions of blocks in Ω_{t+1} are as follows: for time steps T, 1 ≤ T ≤ t + δ,
the consumptions are those of Ω_t; for later time steps, the
consumptions are determined by β. All consumptions from 1 till t are valid
since Ω_t is a valid schedule, and Ω_{t+1}
and Ω_t match for [t]. By construction of β, the
blocks consumed after t + δ are valid. We need to show that Ω_{t+1}
can consume the same blocks as Ω_t at time steps T, t + 1 ≤ T ≤ t + δ.
Since Ω_t does not have P or Z in its buffer at the end of the I/O at t + 1, it can
consume P or Z only after the I/O at time t + δ or later. Also, since Q and
Y must be consumed after P, none of the blocks P, Q, Y, Z can be consumed
before the I/O at t + δ. Since, after the I/O at t + 1, the buffers of Ω_t
and Ω_{t+1} agree except on {P, Q, Y, Z}, all blocks consumed by Ω_t at a time step
can also be consumed by Ω_{t+1} at that time step.
This concludes the proof. □
3.3 Bounds for P-LRU
We now obtain an upper bound on the worst-case performance of P-LRU, and
show that this bound is tight. We use the following lemma whose proof is omitted
for brevity.
Lemma 12. Let S be a contiguous subsequence of Σ which references M or fewer
distinct blocks from some disk i. Then in consuming S, none of these blocks will
be fetched more than once by P-LRU.
Theorem 13. For all consumption sequences, T_P-LRU ≤ M T_opt.
Proof. Inductively assume that the consumptions made in the first t steps by P-MIN
can be done in Mt or fewer steps by P-LRU. (This holds for t = 0.) Let U_i be the
set of references made by P-MIN at time step t + 1 from disk i; U_i contains at most M
distinct blocks, since at most M distinct blocks can be consumed from any disk at a time step of P-MIN.
Since P-LRU will fetch a block of U_i at most once (Lemma 12), all P-MIN's
consumptions at t + 1 can be done in at most an additional M steps. □
Theorem 14. The worst-case bound of Theorem 13 is tight.
Proof (sketch). We show the construction of Σ for two disks; a_i and b_i are blocks
from disks 1 and 2 respectively. Note that after the first M accesses of Σ (common
to both P-LRU and P-MIN), P-LRU makes M accesses for every access of
P-MIN.
4 Conclusions
In this paper we defined a model for parallel I/O systems, and answered several
fundamental questions on prefetching and buffer management for such systems.
We found and proved the optimality of an algorithm, P-MIN, that minimizes
the number of parallel I/Os (while possibly increasing the number of I/Os done
by a single disk). In contrast, P-CON, an algorithm which always matches its
replacement decisions with those of the well-known single-disk optimal algorithm,
MIN, can become fully serialized in the worst case. The behavior of
an on-line algorithm with lookahead, P-LRU, was analyzed; the performance of
P-LRU is independent of the number of disks. Similar results can be shown to
hold for P-FIFO, a parallel version of FIFO with lookahead.
--R
The influence of lookahead in competitive paging algorithms.
A Study of Replacement Algorithms for Virtual Storage.
A New Measure for the Study of On-Line Algo- rithms
A Study of Integrated Prefetching and Caching Strategies.
Operating Systems Theory.
On competitive on-line paging with lookahead
Competitive Paging Algorithms.
Beyond competitive analysis.
Markov Analysis of Multiple-Disk Prefetching Strategies for External Merging
Amortized Efficiency of List Update and Paging Rules.
--TR
--CTR
Mahesh Kallahalla , Peter J. Varman, Analysis of simple randomized buffer management for parallel I/O, Information Processing Letters, v.90 n.1, p.47-52, 15 April 2004
Mahesh Kallahalla , Peter J. Varman, Optimal prefetching and caching for parallel I/O sytems, Proceedings of the thirteenth annual ACM symposium on Parallel algorithms and architectures, p.219-228, July 2001, Crete Island, Greece
Mahesh Kallahalla , Peter J. Varman, PC-OPT: Optimal Offline Prefetching and Caching for Parallel I/O Systems, IEEE Transactions on Computers, v.51 n.11, p.1333-1344, November 2002
Michael Penner , Viktor K. Prasanna, Cache-Friendly implementations of transitive closure, Journal of Experimental Algorithmics (JEA), 11, 2006
Kai Hwang , Hai Jin , Roy S.C. Ho, Orthogonal Striping and Mirroring in Distributed RAID for I/O-Centric Cluster Computing, IEEE Transactions on Parallel and Distributed Systems, v.13 n.1, p.26-44, January 2002 | external memory;competitive ratio;prefetching;algorithms;multiple-disk systems;buffer management;caching;Parallel I/O |
331224 | Universal Constructions for Large Objects. | Abstract: We present lock-free and wait-free universal constructions for implementing large shared objects. Most previous universal constructions require processes to copy the entire object state, which is impractical for large objects. Previous attempts to address this problem require programmers to explicitly fragment large objects into smaller, more manageable pieces, paying particular attention to how such pieces are copied. In contrast, our constructions are designed to largely shield programmers from this fragmentation. Furthermore, for many objects, our constructions result in lower copying overhead than previous ones. Fragmentation is achieved in our constructions through the use of load-linked, store-conditional, and validate operations on a large multiword shared variable. Before presenting our constructions, we show how these operations can be efficiently implemented from similar one-word primitives. | Introduction
This paper extends recent research on universal lock-free and wait-free constructions of
shared objects [3, 4]. Such constructions can be used to implement any object in a lock-free
or a wait-free manner, and thus can be used as the basis for a general methodology
for constructing highly-concurrent objects. Unfortunately, this generality often comes
at a price, specifically space and time overhead that is excessive for many objects. A
particular source of inefficiency in previous universal constructions is that they require
processes to copy the entire object state, which is impractical for large objects. In this
paper, we address this shortcoming by presenting universal constructions that can be used
to implement large objects with low space overhead.
We take as our starting point the lock-free and wait-free universal constructions presented
by Herlihy in [4]. In these constructions, operations are implemented using "retry
loops". In Herlihy's lock-free universal construction, each process's retry loop consists of
the following steps: first, a shared object pointer is read using a load-linked (LL) opera-
tion, and a private copy of the object is made; then, the desired operation is performed
on the private copy; finally, a store-conditional (SC) operation is executed to attempt to
"swing" the shared object pointer to point to the private copy. The SC operation may
fail, in which case these steps are repeated. This algorithm is not wait-free because the
SC of each loop iteration may fail. To ensure termination, Herlihy's wait-free construction
employs a "helping" mechanism, whereby each process attempts to help other processes
by performing their pending operations together with its own. This mechanism ensures
that if a process is repeatedly unsuccessful in swinging the shared object pointer, then it
is eventually helped by another process (in fact, after at most two loop iterations).
(Work supported, in part, by NSF contract CCR 9216421, and by a Young Investigator Award from
the U.S. Army Research Office, grant number DAAHO4-95-1-0323.)
As Herlihy points out, these constructions perform poorly if used to implement large
objects. To overcome this problem, he presents a lock-free construction in which a large
object is fragmented into blocks linked by pointers. In this construction, operations are
implemented so that only those blocks that must be accessed or modified are copied.
Herlihy's lock-free approach for implementing large objects suffers from three short-
comings. First, the required fragmentation is left to the programmer to determine, based
on the semantics of the implemented object. The programmer must also explicitly determine
how copying is done. Second, Herlihy's approach is difficult to apply in wait-free
implementations. In particular, directly combining it with the helping mechanism of
his wait-free construction for small objects results in excessive space overhead. Third,
Herlihy's large-object techniques reduce copying overhead only if long "chains" of linked
blocks are avoided. Consider, for example, a large shared queue that is fragmented as a
linear sequence of blocks (i.e., in a linked list). Replacing the last block actually requires
the replacement of every block in the sequence. In particular, linking in a new last block
requires that the pointer in the previous block be changed. Thus, the next-to-last block
must be replaced. Repeating this argument, it follows that every block must be replaced.
Our approach for implementing large objects is also based upon the idea of fragmenting
an object into blocks. However, it differs from Herlihy's in that it is array-based rather
than pointer-based, i.e., we view a large object as a long array that is fragmented into
blocks. Unlike Herlihy's approach, the fragmentation in our approach is not visible to
the user. Also, copying overhead in our approach is often much lower than in Herlihy's
approach. For example, we can implement shared queues with constant copying overhead.
Our constructions are similar to Herlihy's in that operations are performed using retry
loops. However, while Herlihy's constructions employ only a single shared object pointer,
we need to manage a collection of such pointers, one for each block of the array. We
deal with this problem by employing LL, SC, and validate (VL) operations that access
a "large" shared variable that contains all block pointers. This large variable is stored
across several memory words. 1 In the first part of the paper, we show how to efficiently
implement them using the usual single-word LL, SC, and VL primitives. We present two
such implementations, one in which LL may return a special value that indicates that a
subsequent SC will fail - we call this a weak-LL - and another in which LL has the
usual semantics. In both implementations, LL and SC on a W-word variable take O(W )
time and VL takes constant time. The first of these implementations is simpler than the
second because weak-LL does not have to return a consistent multi-word value in the case
of interference by a concurrent SC. Also, weak-LL can be used to avoid unnecessary work
in universal algorithms (there is no point performing private updates when a subsequent
SC is certain to fail). For these reasons, we use weak-LL in our universal constructions.
Our wait-free universal construction is the first such construction to incorporate techniques
for implementing large objects. In this construction, we impose an upper bound on
the number of private blocks each process may have. This bound is assumed to be large
enough to accommodate any single operation. The bound affects the manner in which
processes may help one another. Specifically, if a process attempts to help too many other
processes simultaneously, then it runs the risk of using more private space than is avail-
able. We solve this problem by having each process help as many processes as possible
with each operation, and by choosing processes to help in such a way that all processes
1 The multi-word operations considered here access a single variable that spans multiple words. Thus,
they are not the same as the multi-word operations considered in [1, 2, 5, 6], which access multiple
variables, each stored in a separate word. The multi-word operations we consider admit simpler and
more efficient implementations than those considered in [1, 2, 5, 6].
shared variable X: record pid: 0..N − 1; tag: 0..1 end;
    BUF: array[0..N − 1, 0..1] of array[0..W − 1] of wordtype
    initially BUF[X.pid, X.tag] = (initial value of the implemented variable V)
private variable j: 0..1; curr: record pid: 0..N − 1; tag: 0..1 end
    initially j = 0

proc Long Weak LL(var r: array[0..W − 1] of wordtype) returns 0..N
1:  curr := LL(X);
    for i := 0 to W − 1 do
2:      r[i] := BUF[curr.pid, curr.tag][i]
    od;
3:  if VL(X) then return N
4:  else return X.pid fi

proc Long SC(val: array[0..W − 1] of wordtype) returns boolean
    j := 1 − j;                          = Alternate between BUF[p, 0] and BUF[p, 1]
    for i := 0 to W − 1 do
5:      BUF[p, j][i] := val[i]
    od;
    return SC(X, (p, j))

Figure 1: W-word weak-LL and SC using 1-word LL, VL, and SC. W-word VL is implemented
by validating X.
are eventually helped. If enough space is available, all processes can be helped by one
process at the same time - we call this parallel helping. Otherwise, several "rounds" of
helping must be performed, possibly by several processes - we call this serial helping.
The tradeoff between serial and parallel helping is one of time versus space.
The remainder of this paper is organized as follows. In Section 2, we present implementations
of the LL, SC, and VL operations for large variables discussed above. We then
present our lock-free and wait-free universal constructions and preliminary performance
results in Section 3. We end the paper with concluding remarks in Section 4. Due to
space limitations, we defer detailed proofs to the full paper.
2 LL and SC on Large Variables
In this section, we implement LL, VL, and SC operations for a W-word variable V ,
using the standard, one-word LL, VL, and SC operations. 2 We first present
an implementation that supports only the weak-LL operation described in the previous
section. We then present an implementation that supports a LL operation with the usual
semantics. In the latter implementation, LL is guaranteed to return a "correct" value
of V , even if a subsequent SC operation will fail. Unfortunately, this guarantee comes
at the cost of higher space overhead and a more complicated implementation. In many
applications, however, the weak-LL operation suffices. In particular, in most lock-free and
wait-free universal constructions (including ours), LL and SC are used in pairs in such a
way that if a SC fails, then none of the computation since the preceding LL has any effect
on the object. By using weak-LL, we can avoid such unnecessary computation.
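This usage pattern can be made concrete with a small C sketch (the sketch is ours, not the
paper's; Long_Weak_LL and Long_SC correspond to the W-word procedures of Figure 1, and
compute stands for an arbitrary private update; W and N are given illustrative values):

#define W 8                         /* illustrative word count          */
#define N 16                        /* illustrative number of processes */
typedef unsigned long wordtype;

extern int  Long_Weak_LL(wordtype r[W]);   /* returns N, or a witness id */
extern int  Long_SC(wordtype val[W]);      /* returns nonzero on success */
extern void compute(wordtype out[W], const wordtype in[W]);

/* Canonical retry loop: a weak-LL that reports certain failure lets
 * the caller skip the private update and retry immediately. */
void lock_free_update(void)
{
    wordtype cur[W], next[W];
    for (;;) {
        if (Long_Weak_LL(cur) != N)
            continue;               /* interference detected: retry */
        compute(next, cur);         /* update a private copy        */
        if (Long_SC(next))
            return;                 /* new value installed          */
    }
}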
2.1 Weak-LL, VL, and SC Operations for Large Variables
We begin by describing the implementation of weak-LL, VL, and SC shown in Figure 1. 3
The Long Weak LL and Long SC procedures implement weak-LL and SC operations on a
W-word variable V. Values of V are stored in "buffers", and a shared variable X indicates
which buffer contains the "current" value of V. The current value is the value written
We assume that the SC operation does not fail spuriously. As shown in [1], a SC operation that does
not fail spuriously can be efficiently implemented using LL and a SC operation that might fail spuriously.
3 Private variables in all figures are assumed to retain their values between procedure calls.
to V by the most recent successful SC operation, or the initial value of V if there is no
preceding successful SC. The VL operation for V is implemented by simply validating X.
A SC operation on V is achieved by writing the W-word variable to be stored into a
buffer, and by then using a one-word SC operation on X to make that buffer current. To
ensure that a SC operation does not overwrite the contents of the current buffer, the SC
operations of each process p alternate between two buffers, BUF[p, 0] and BUF[p, 1].
A process p performs a weak-LL operation on V in three steps: first, it executes a
one-word LL operation on X to determine which buffer contains the current value of V;
second, it reads the contents of that buffer; third, it performs a VL on X to check whether
that buffer is still current. If the VL succeeds, then the buffer was not modified during
p's read, and the value read by p from that buffer can be safely returned. If the VL fails,
then the weak-LL rereads X in order to determine the ID of the last process to perform a
successful SC; this process ID is then returned. We call the process whose ID is returned
a witness of the failed weak-LL. As we will see in Section 3.2, the witness of a failed
can provide useful state information that held "during" the execution of that
weak-LL. Note that if the VL of line 3 fails, then the buffer read by p is no longer current,
and hence a subsequent SC by p will fail. This implementation yields the following result.
Theorem 1: Weak-LL, VL, and SC operations for a W-word variable can be implemented
using LL, VL, and SC operations for a one-word variable with time complexity O(W ),
O(1), and O(W), respectively, and space complexity O(NW). □
2.2 LL, VL, and SC Operations for Large Variables
We now show how to implement LL and SC with the "usual" semantics. Although the
weak-LL operation implemented above is sufficient for our constructions, other uses of
"large" LL and SC might require the LL operation to always return a correct value from
V . This is complicated by the fact that all W words of V cannot be accessed atomically.
Our implementation of LL, VL, and SC operations for a W-word variable V is shown in
Figure 2. Like the previous implementation, this one employs a shared variable X, along
with a set of buffers. Also, a shared array A of "tags" is used for buffer management.
Buffer management differs from that described in the previous subsection in several
respects. First, each process p now has 4N buffers, BUF[p, 0] to BUF[p, 4N − 1],
instead of just two. Another difference is that each buffer now contains more information,
specifically an old value of V , a new value of V , and two control bits. The control bits are
used to detect concurrent read/write conflicts. These bits, together with the tags in array
A, are employed to ensure that each LL returns a correct value, despite any interference.
Figure
shows two procedures, Long LL and Long SC , which implement LL and SC
operations on V, respectively. As before, a VL on V is performed by simply validating X.
The Long LL procedure is similar to the Long Weak LL procedure, except that, in the
event that the VL of X fails, more work is required in order to determine a correct return
value. The buffer management scheme employed guarantees the following two properties.
(i) A buffer cannot be modified more than once while some process reads that buffer.
(ii) If a process does concurrently read a buffer while it is being written, then that process
obtains a correct value either from the old field or from the new field of that buffer.
In the full paper, we prove both properties formally. We now describe the implementation
shown in Figure 2 in more detail, paying particular attention to (i) and (ii).
In describing the Long LL procedure, we focus on the code that is executed in the event
that the VL of X fails, because it is this code that distinguishes the Long LL from the
Long Weak LL procedure of the previous subsection. If a process p executes the Long LL
shared variable X: record pid: 0..N − 1; tag: 0..4N − 1 end;
    A: array[0..N − 1] of record pid: 0..N − 1; tag: 0..4N − 1 end;
    BUF: array[0..N − 1, 0..4N − 1] of record old, new: array[0..W − 1] of wordtype; b, c: boolean end
    initially BUF[X.pid, X.tag].new = (initial value of L)
private var val1, val2: array[0..W − 1] of wordtype; curr: record pid: 0..N − 1; tag: 0..4N − 1 end;
    bit: boolean; diff: 0..4N − 1; j: 0..N − 1
    initially tag 0 is the "last tag successfully SC'd"

proc Long LL() returns array[0..W − 1] of wordtype
1:  curr := LL(X);
    for i := 0 to W − 1 do
2:      val1[i] := BUF[curr.pid, curr.tag].new[i]
    od;
3:  if VL(X) then return val1
    else
4:      curr := X;
5:      A[p] := curr;
        for i := 0 to W − 1 do
6:          val1[i] := BUF[curr.pid, curr.tag].new[i]
        od;
7:      bit := BUF[curr.pid, curr.tag].b;
        for i := 0 to W − 1 do
8:          val2[i] := BUF[curr.pid, curr.tag].old[i]
        od;
9:      if BUF[curr.pid, curr.tag].c = bit then
            return val2 else return val1 fi
    fi

proc Long SC(newval: array[0..W − 1] of wordtype) returns boolean
10: read A[j]; j := (j + 1) mod N;       = Read tags from the processes in turn
11: select diff such that (p, diff) ∉
        ({last N tags read} ∪
         {last N tags selected} ∪
         {last tag successfully SC'd});
12: if ¬VL(X) then return false fi;
13: bit := ¬BUF[p, diff].c;
14: BUF[p, diff].c := bit;
    for i := 0 to W − 1 do
15:     BUF[p, diff].old[i] := val1[i]
    od;
16: BUF[p, diff].b := bit;
    for i := 0 to W − 1 do
17:     BUF[p, diff].new[i] := newval[i]
    od;
18: return SC(X, (p, diff))

Figure 2: W-word LL and SC using 1-word LL, VL, and SC. W-word VL is trivially implemented
by validating X.
procedure and its VL of X fails, then p might have read a corrupt value from the buffer
due to a concurrent write. In order to obtain a correct return value, p reads X again to
ascertain the current buffer, and then reads the entire contents of that buffer: new, b,
old , and c. The fields within a buffer are written in the reverse of the order in which they
are read in the Long LL procedure. Thus, by property (i), p's read can "cross over" at
most one concurrent write by another process. By comparing the values it reads from the
b and c fields, p can determine whether the crossing point (if any) occurred while p read
the old field or the new field. Based on this comparison, p can choose a correct return
value. This is the essence of the formal proof required to establish property (ii) above.
In describing the Long SC procedure, we focus on the buffer selection mechanism -
once a buffer has been selected, this procedure simply updates the old , new , b, and c fields
of that buffer as explained above. The primary purpose of the buffer selection mechanism
is to ensure that property (i) holds. Each time a process p executes Long SC, it reads the
tag value written to A[r] by some process r (line 10). The tag values are read from the
processes in turn, so after N SC operations on V , p has read a tag from each process.
Process p selects a buffer for its SC by choosing a new tag (line 11). The new tag is
selected to differ from the last N tags read by p from A, to differ from the last N tags
selected by p, and to differ from the last tag used in a successful SC by p. The last of
these three conditions ensures that p does not overwrite the current buffer, and the first
two conditions ensure that property (i) holds. We explain below how tags are selected.
First, however, we explain why the selection mechanism ensures property (i).
Observe that, if process q's VL of X (line 3) fails, then before reading from one of p's
proc Read Tag(v)
    if v ∈ Read Q then
        delete(Read Q, v); enqueue(Read Q, v)
    else
        delete(Select Q, v);
        enqueue(Read Q, v);
        y := dequeue(Read Q);
        if y ∉ Last Q then
            enqueue(Select Q, y) fi
    fi

proc Store Tag(v)
    delete(Select Q, v);
    enqueue(Last Q, v);
    y := dequeue(Last Q);
    if y ∉ Read Q then
        enqueue(Select Q, y) fi

proc Select Tag() returns 0..4N − 1
    y := dequeue(Select Q);
    enqueue(Select Q, y);
    return y

Figure 3: Pseudo-code implementations of operations on tag queues.
buffers BUF[p, v] (lines 6 to 9), q writes (p, v) to A[q] (line 5). If p selects and modifies
BUF[p, v] while process q is reading BUF[p, v], then p does not select BUF[p, v] again
for any of its next N SC operations. Thus, before p selects BUF[p, v] again, p reads A[q]
(line 10). As long as (p, v) remains in A[q], it will be among the last N tags read by p,
and hence p will not select BUF[p, v] to be modified. Therefore, property (i) holds.
We conclude this subsection by describing how the tag selection in line 11 can be
efficiently implemented. To accomplish this, each process maintains three local queues:
Read, Last, and Select. The Read queue records the last N tags read and the Last
queue records the last tag successfully written (using SC) to X. All other tags reside in
the Select queue, from which new tags are selected.
The tag queues are maintained by means of the Read Tag , Store Tag , and Select Tag
procedures shown in Figure 3. In these procedures, enqueue and dequeue denote the
normal queue operations, delete(Q, v) removes tag v from Q (and does not modify Q if v
is not in Q), and x ∈ Q holds iff tag x is in queue Q.
Process p selects a tag (line 11 of Figure 2) by calling Select Tag . Select Tag moves the
front tag in p's Select queue to the back, and returns that tag. If that tag is subsequently
written to X by a successful SC operation (line 18), then p calls Store Tag to move the
tag from the Select queue to the Last queue. The tag that was previously in the Last
queue is removed and, if it is not in the Read queue, is returned to the Select queue.
When process p reads a tag (p, v) (line 10), it calls Read Tag to record that this tag
was read. If (p, v) is already in the Read queue, then Read Tag simply moves (p, v) to the
end of the Read queue. If (p, v) is not already in the Read queue, then it is enqueued into
the Read queue and removed from the Select queue, if necessary. Finally, the tag at the
front of the Read queue is removed because it is no longer one of the last N tags read. If
that tag is also not the last tag written to X, then it is returned to the Select queue.
The Read queue always contains the last N tags read, and the Last queue always
contains the last tag successfully written to X. Thus, the tag selected by Select Tag is
certainly not the last tag successfully written to X, nor is it among the last N tags read.
In the full paper, we show that maintaining a total of 4N tags ensures that the tag
selected is also not one of the last N tags selected, as required.
By maintaining a static index table that allows each tag to be located in constant
time, and by representing the queues as doubly-linked lists, all of the queue operations
described above can be implemented in constant time. Thus, we have the following result.
Theorem 2: LL, VL, and SC operations for a W-word variable can be implemented
using LL, VL, and SC operations for a one-word variable with time complexity O(W ),
O(1), and O(W), respectively, and space complexity O(N²W). □
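The constant-time queue operations just described can be sketched in C as follows (the sketch
and all names in it are ours, invented for illustration): each tag doubles as the index of its
list node in a static table, so deletion and membership tests involve no searching.

#define NPROC 16                          /* illustrative process count       */
#define NTAGS (4 * NPROC)                 /* 4N tags per process, as above    */

struct node { int prev, next, owner; };   /* owner: id of holding queue or -1 */
struct tagq { int head, tail, id; };      /* doubly-linked list of tags       */
static struct node tab[NTAGS];            /* static index table: tag -> node  */

/* Constant-time delete(Q, v): a no-op if v is not in Q. */
static void q_delete(struct tagq *q, int v)
{
    if (tab[v].owner != q->id) return;    /* membership test is O(1)          */
    if (tab[v].prev != -1) tab[tab[v].prev].next = tab[v].next;
    else q->head = tab[v].next;
    if (tab[v].next != -1) tab[tab[v].next].prev = tab[v].prev;
    else q->tail = tab[v].prev;
    tab[v].owner = -1;
}

/* Constant-time enqueue(Q, v) at the tail. */
static void q_enqueue(struct tagq *q, int v)
{
    tab[v].owner = q->id; tab[v].prev = q->tail; tab[v].next = -1;
    if (q->tail != -1) tab[q->tail].next = v; else q->head = v;
    q->tail = v;
}

Dequeuing from the head is symmetric, so each of Read Tag, Store Tag, and Select Tag runs
in O(1) time.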
Figure 4: Implementation of the MEM array for large object constructions. [Figure labels:
MEM array made up of S-word blocks; bank of pointers to current blocks; process p's
replacement pointers; process p's replacement for last object block.]
3 Large Object Constructions
In this section, we present our lock-free and wait-free universal constructions for large
objects. We begin with a brief overview of previous constructions due to Herlihy [4].
Herlihy presented lock-free and wait-free universal constructions for "small" objects
as well as a lock-free construction for "large" objects [4]. As described in Section 1,
an operation in Herlihy's small-object constructions copies the entire object, which can
be a severe disadvantage for large objects. In Herlihy's large-object construction, the
implemented object is fragmented into blocks, which are linked by pointers. With this
modification, the amount of copying performed by an operation can often be reduced by
copying only those blocks that are affected by the operation. However, because of this
fragmentation, a significant amount of creative work on the part of the sequential object
designer is often required before the advantages of Herlihy's large-object construction can
be realized. Also, this approach provides no advantage for common objects such as the
queue described in Section 1. Finally, Herlihy did not present a wait-free construction
for large objects. Our lock-free and wait-free universal constructions for large objects
are designed to overcome all of these problems. These constructions are described next
in Sections 3.1 and 3.2, respectively. In Section 3.3, we present performance results
comparing our constructions to Herlihy's.
3.1 Lock-Free Universal Construction for Large Objects
Our lock-free construction is shown in Figure 5. In this construction, the implemented
object is stored in an array. Unlike Herlihy's small-object constructions, the array is not
actually stored in contiguous locations of shared memory. Instead, we provide the illusion
of a contiguous array, which is in fact partitioned into blocks. An operation replaces only
the blocks it modifies, and thus avoids copying the whole object. Before describing the
code in Figure 5, we first explain how the illusion of a contiguous array is provided.
Figure 4 shows an array MEM, which is divided into B blocks of S words each.
Memory words MEM[0] to MEM[S − 1] are stored in the first block, words MEM[S] to
MEM[2S − 1] are stored in the second block, and so on. A bank of pointers, one to each
block of the array, is maintained in order to record which blocks are currently part of the
array. In order to change the contents of the array, an operation makes a copy of each
block to be changed, and then attempts to update the bank of pointers by installing new
shared var BANK: array[0..B − 1] of 0..NT + B − 1;            = Bank of pointers to array blocks
    BLK: array[0..NT + B − 1] of array[0..S − 1] of wordtype  = Array and copy blocks
    initially (∀k : 0 ≤ k < B :: BANK[k] = NT + k ∧ BLK[NT + k] = (kth block of initial value))
private var oldlst, copy: array[0..T − 1] of 0..NT + B − 1;
    ptrs: array[0..B − 1] of 0..NT + B − 1; dirty: array[0..B − 1] of boolean;
    dirtycnt: 0..T; blkidx: 0..B − 1
    initially (∀i : 0 ≤ i < T :: copy[i] = pT + i)

proc Read(addr: 0..BS − 1) returns wordtype
    return BLK[ptrs[addr div S]][addr mod S]

proc Write(addr: 0..BS − 1; val: wordtype)
    blkidx := addr div S;                                     = Get block index from address
    if ¬dirty[blkidx] then                                    = Haven't changed this block before
        dirty[blkidx] := true;                                = Record that block is changed
        memcpy(BLK[copy[dirtycnt]], BLK[ptrs[blkidx]], sizeof(BLK[0]));   = Copy old block to new
        oldlst[dirtycnt], ptrs[blkidx], dirtycnt := ptrs[blkidx], copy[dirtycnt], dirtycnt + 1
                                = Use new block, record old block, prepare for next one
    fi;
    BLK[ptrs[blkidx]][addr mod S] := val                      = Write new value

proc LF Op(op: optype; pars: paramtype)
    while true do                                             = Loop until operation succeeds
1:      if Long Weak LL(BANK, ptrs) = N then                  = Read current object pointers
            for i := 0 to B − 1 do dirty[i] := false od; dirtycnt := 0;   = No blocks copied yet
2:          ret := op(pars);                                  = Perform operation on object
3:          if dirtycnt = 0 then return ret fi;               = Avoid an unnecessary SC
4:          if Long SC(BANK, ptrs) then                       = Operation is successful, reclaim old blocks
                for i := 0 to dirtycnt − 1 do copy[i] := oldlst[i] od;
                return ret
            fi
        fi
    od

Figure 5: Lock-free implementation for a large object.
pointers for the changed blocks; the other pointers are left unchanged. This is achieved
by using the weak-LL and SC operations for large variables presented in Section 2.1. 4 In
Figure 4, process p is preparing to modify a word in the last block, but no others. Thus,
the bank of pointers to be written by p is the same as the current bank, except that the
last pointer points to p's new last block.
When an operation by process p accesses a word in the array, say MEM [x], the block
that currently contains MEM [x] must be identified. If p's operation modifies MEM [x],
then p must replace that block. In order to hide the details of identifying blocks and of
replacing modified blocks, some address translation and record-keeping is necessary. This
work is performed by special Read and Write procedures, which are called by the sequential
operation in order to read or write the MEM array. As a result, our constructions
are not completely transparent to the sequential object designer. For example, instead of
writing "MEM [1] := MEM [10]", the designer would write "Write(1; Read(10))". However,
as discussed in Section 4, a preprocessor could be used to provide complete transparency.
We now turn our attention to the code of Figure 5. In this figure, BANK is a B-
word shared variable, which is treated as an array of B pointers (actually indices into
the BLK array), each of which points to a block of S words. Together, the B blocks
pointed to by BANK make up the implemented array MEM . We assume an upper bound
T on the number of blocks modified by any operation. Therefore, in addition to the B
4 An extra parameter has been added to the procedures of Section 2.1 to explicitly indicate which
shared variable is updated.
blocks required for the object, T "copy blocks" are needed per process, giving a total of
NT + B blocks. These blocks are stored in the BLK array. Although blocks BLK[NT]
to BLK[NT + B − 1] are the initial array blocks, and BLK[pT] to BLK[(p + 1)T − 1] are
process p's initial copy blocks, the roles of these blocks are not fixed. In particular, if p
replaces a set of array blocks with some of its copy blocks as the result of a successful SC,
then p reclaims the replaced array blocks as copy blocks. Thus, the copy blocks of one
process may become blocks of the array, and later become copy blocks of another process.
Process p performs a lock-free operation by calling the LF Op procedure. The loop in
the LF Op procedure repeats until the SC at line 3 succeeds. In each iteration, process p
first reads BANK into a local variable ptrs using a B-word weak-LL. Recall from Section
2.1 that the weak-LL can return a process identifier from {0, ..., N − 1} if the following SC
is guaranteed to fail. In this case, there is no point in attempting to apply p's operation,
so the loop is restarted. Otherwise, p records in its dirty array that no block has yet been
modified by its operation, and initializes the dirtycnt counter to zero.
Next, p calls the op procedure provided as a parameter to LF Op. The op procedure
performs the sequential operation by reading and writing the elements of the MEM array.
This reading and writing is performed by invoking the Read and Write procedures shown
in Figure 5. The Read procedure simply computes which block currently contains the
word to be accessed, and returns the value from the appropriate offset within that block.
The Write procedure performs a write to a word of MEM by computing the index blkidx
of the block containing the word to be written. If it has not already done so, the Write
procedure then records that the block is "dirty" (i.e., has been modified) and copies the
contents of the old block to one of p's copy blocks. Then, the copy block is linked into
p's ptrs array, making that block part of p's version of the MEM array, and the displaced
old block is recorded in oldlst for possible reclaiming later. Finally, the appropriate word
of the new block is modified to contain the value passed to the Write procedure.
If BANK is not modified by another process after p's weak-LL, then the object contained
in p's version of the MEM array (pointed to by p's ptrs array) is the correct result
of applying p's operation. Therefore, p's SC successfully installs a copy of the object with
p's operation applied to it. After the SC, p reclaims the displaced blocks (recorded in
oldlst) to replace the copy blocks it used in performing its operation. On the other hand,
if another process does modify BANK between p's weak-LL and SC, then p's SC fails. In
this case, some other process completes an operation, so the implementation is lock-free.
Before concluding this subsection, one further complication bears mentioning. If the
BANK variable is modified by another process while p's sequential operation is being
executed, then it is possible for p to read inconsistent values from the MEM array. Observe
that this does not result in p installing a corrupt version of the object, because p's
subsequent SC fails. However, there is a risk that p's sequential operation might cause an
error, such as a division by zero or a range error, because it reads an inconsistent state of
the object. This problem can be solved by ensuring that, if BANK is invalidated, control
returns directly from the Read procedure to the LF Op procedure, without returning to
the sequential operation. The Unix longjmp command can be used for this purpose. The
details are omitted from Figure 5.
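One possible form of this guard, in C, is sketched below (the sketch is ours, not the paper's;
Validate_BANK stands for a one-word validate of the variable underlying BANK, and the jump
buffer is assumed to be initialized by setjmp at the top of each iteration of LF Op's retry loop):

#include <setjmp.h>

#define S 1024                          /* illustrative block size               */
typedef unsigned long wordtype;
extern wordtype *BLK[];                 /* blocks, as in Figure 5                */
extern int ptrs[];                      /* private block pointers, as in Figure 5 */
extern int Validate_BANK(void);         /* hypothetical: VL on BANK              */

static jmp_buf retry;                   /* at the top of LF_Op's loop:
                                           if (setjmp(retry)) continue;          */
wordtype Read(long addr)
{
    wordtype v = BLK[ptrs[addr / S]][addr % S];
    if (!Validate_BANK())               /* BANK changed: the eventual SC must fail, */
        longjmp(retry, 1);              /* so abandon the sequential operation      */
    return v;
}

In the full paper, we prove the following.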
Theorem 3: Suppose a sequential object OBJ can be implemented in an array of B
S-word blocks such that any operation modifies at most T blocks and has worst-case
time complexity C. Then, OBJ can be implemented in a lock-free manner with space
overhead 5 O(NB + NTS) and contention-free time complexity O(B + TS + C). □
It is interesting to compare these complexity figures to those of Herlihy's lock-free
5 By space overhead, we mean space complexity beyond that required for the sequential object.
construction. Consider the implementation of a queue. By storing head and tail "pointers"
(actually, array indices, not pointers) in a designated block, an enqueue or dequeue can be
performed in our construction by copying only two blocks: the block containing the head
or tail pointer to update, and the block containing the array slot pointed to by that pointer.
Space overhead in this case is O(NB + NS), which should be small when compared to
O(BS), the size of the queue. Contention-free time complexity is O(B + S), which
is only O(B + S) greater than the time for a sequential enqueue or dequeue. In contrast,
as mentioned in Section 1, each process in Herlihy's construction must actually copy the
entire queue, even when using his large-object techniques. Thus, space overhead is at least
N times the worst-case queue length, i.e., Ω(NBS). Also, contention-free time complexity
is Ω(BS), since Ω(BS) time is required to copy the entire queue in the worst case.
When implementing a balanced tree, both constructions require space overhead of
O(N log(BS)) for local blocks. However, we pay a logarithmic time cost only when
performing an operation whose sequential counterpart modifies a logarithmic number of
array slots. In contrast, Herlihy's construction entails a logarithmic time cost for copying
for almost every operation - whenever some block is modified, a chain of block pointers
must be updated from that block to the block containing the root of the tree.
3.2 Wait-Free Construction for Large Objects
Our wait-free construction for large objects is shown in Figure 6. As in the lock-free
construction presented in the previous subsection, this construction uses the Read and
Write procedures in Figure 5 to provide the illusion of a contiguous array. The principal
difference between our lock-free and wait-free constructions is that processes in the wait-free
construction "help" each other in order to ensure that each operation by each process
is eventually completed. To enable each process to perform the operation of at least one
other process together with its own, each process p now has M ≥ 2T private copy blocks.
(Recall that T is the maximum number of blocks modified by a single operation.)
The helping mechanism used in our wait-free, large-object construction is similar to
that used in Herlihy's wait-free, small-object construction in several respects. To enable
processes to perform each other's operations, each process q begins by "announcing"
its operation and parameters in ANC [q] (line 11 in Figure 6). Also, each process stores
sufficient information with the object to allow a helped process to detect that its operation
was completed and to determine the return value of that operation. This information also
ensures that the operation helped is not subsequently reapplied.
There are also several differences between our helping mechanism and Herlihy's. First,
in Herlihy's construction, each time a process performs an operation, it also performs the
pending operations of all other processes. However, in our construction, the restricted
amount of private copy space might prevent a process from simultaneously performing
the pending operations of all other processes. Therefore, in our construction, each process
helps only as many other processes as it can with each operation. In order to ensure that
each process is eventually helped, a help counter is added to the shared variable BANK
used in our lock-free construction. The help field indicates which process should be helped
next. Each time process p performs an operation, p helps as many processes as possible
starting from the process stored in the help field. This is achieved by helping processes
until too few private copy blocks remain to accommodate another operation (lines 22 to
24). (Recall that the Write procedure in Figure 5 increments dirtycnt whenever a new
block is modified.) Process p updates the help field so that the next process to successfully
perform an operation starts helping where p stops.
Our helping mechanism also differs from Herlihy's in the way a process detects the
completion of its operation. In Herlihy's construction, completion is detected by means
type rettype = record val: valtype; applied, copied: boolean end

shared var ANC: array[0..N − 1] of record op: optype; pars: paramtype; bit: boolean end;   = Announce array
    RET: array[0..N] of array[0..N − 1] of rettype;           = Blocks for operation return values
    LAST: array[0..N − 1] of 0..N;                            = Last RET block updated by each process
    BANK: record ptrs: array[0..B − 1] of 0..NM + B − 1; ret: 0..N; help: 0..N − 1 end
    initially BANK.ret = N                                    = RET[N] is initially current; process p owns RET[p]
private var oldlst, copy: array[0..M − 1] of 0..NM + B − 1; ptrs: (same type as BANK);
    dirty: array[0..B − 1] of boolean; dirtycnt: 0..M;
    rb, oldrb, b, tmp: 0..N; try, j, q: 0..N − 1;
    match, done, bit, a, loop: boolean; applyop: optype; applypars: paramtype
    initially rb = p

proc Apply(q: 0..N − 1)                                       = Apply q's announced operation, if not yet applied
1:  match := ANC[q].bit;
2:  if RET[rb][q].applied ≠ match then
3:      applyop := ANC[q].op;
4:      applypars := ANC[q].pars;
5:      RET[rb][q].val := applyop(applypars);
6:      RET[rb][q].applied := match
    fi

proc Return Block() returns 0..N
7:  tmp := Long Weak LL(BANK, ptrs);
8:  if tmp ≠ N then
9:      return LAST[tmp]
    else
10:     return ptrs.ret
    fi

proc WF Op(op: optype; pars: paramtype)
11: bit := ¬bit; ANC[p] := (op, pars, bit);                   = Announce operation
12: b, done := Return Block(), false;
13: while ¬done ∧ RET[b][p].copied ≠ bit do                   = Loop until update succeeds or operation is helped
14:     if Long Weak LL(BANK, ptrs) = N then                  = Read current object pointers
15:         for i := 0 to B − 1 do dirty[i] := false od; dirtycnt := 0;   = No blocks modified yet
16:         oldrb, ptrs.ret := ptrs.ret, rb;                  = Remember old return block and install new one
17:         memcpy(RET[rb], RET[oldrb], sizeof(RET[oldrb])); = Make private copy of return block
18:         for j := 0 to N − 1 do                            = Update copied bits for applied operations
19:             a := RET[rb][j].applied;
20:             RET[rb][j].copied := a
            od;
21:         Apply(p);                                         = Apply own operation
            try, loop := ptrs.help, false;
22:         while dirtycnt ≤ M − T ∧ ¬loop do                 = While sufficient space remains, help others
23:             Apply(try);
24:             try := (try + 1) mod N; if try = ptrs.help then loop := true fi
            od;
25:         LAST[p], ptrs.help := rb, try;                    = Record which return block was modified
26:         if Long SC(BANK, ptrs) then                       = Operation is successful, reclaim old blocks
27:             for m := 0 to dirtycnt − 1 do copy[m] := oldlst[m] od;
28:             RET[rb][p].copied, rb, done := bit, oldrb, true   = Set own copied bit for next time
            fi
        fi
    od;
29: b := Return Block();                                      = Find current or recent return block
30: return RET[b][p].val                                      = Get return value of operation

Figure 6: Wait-free implementation for a large object.
of a collection of toggle bits, one for each process, that are stored with the current version
of the object. Before attempting to apply its operation, each process p first "announces"
a new toggle bit value. When another process helps p, it copies this bit value into the
current version of the object. To detect the completion of its operation, p tests whether
the bit value stored for it in the current version of the object matches the bit value it
previously announced; to access the current version of the object, p first reads the shared
object pointer, and then reads the buffer pointed to by that pointer. In order to avoid
a race condition that can result in an operation returning an incorrect value, Herlihy's
construction requires this sequence of reads to be performed twice. This race condition
arises when p attempts to access the current buffer, and during p's access, another process
subsequently reclaims that buffer and privately updates it. By dereferencing the object
pointer and checking its toggle bit a second time, p can ensure that if the first buffer it
accessed has been reclaimed, then p's operation has already been applied. This is because
the process that reclaimed the buffer helped all other processes with its operation, and
therefore ensured that p's operation was applied. Because our construction does not
guarantee that each process helps all other processes at once, p might have to reread the
shared object pointer and read its toggle bit many times to ensure that its operation has
been applied. We therefore use a different mechanism, explained below, for determining
whether an operation has been applied.
To enable a process to detect that its operation has been applied, and to determine
the return value of the operation, we use a set of "return" blocks. There are N + 1 such
blocks, RET[0] to RET[N]; at any time, one of these blocks is "current" (as indicated by
a new ret field in the BANK variable) and each process "owns" one of the other return
blocks. The current return block contains, for each process q, the return value of q's
most recent operation, along with two bits: applied and copied . These bits are used by
q to detect when its operation has been completed. Roughly speaking, the applied bit
indicates that q's operation has been applied to the object and the copied bit indicates that
another operation has been completed since q's operation was applied. The interpretation
of these bits is determined by ANC [q]:bit . For example, q's operation has been applied
iff q's applied bit in the current return block equals ANC [q]:bit .
To see why two bits are needed to detect whether q's operation is complete, consider
the scenario in Figure 7. In this figure, process p performs two operations. In the first, p's
SC is successful, and p replaces RET [5] with RET [3] as the current return block at line 26.
During p's first operation, q starts an operation. However, q starts this operation too late
to be helped by p. Before p's execution of line 26, q reads BANK in line 7 and determines
that RET [5] is the current return block. Now, p starts a second operation. Because p
previously replaced RET [5] as the current return block, RET [5] is now p's private copy,
so p's second operation uses RET [5] to record the operations it helps. When p executes
line 6, it changes q's applied bit to indicate that it has applied q's operation. Note that,
at this stage, q's operation has only been applied to p's private object copy, and p has not
yet performed its SC. However, if q reads the applied bit of RET [5] (which it previously
determined to be the current RET block) at line 13, then q incorrectly concludes that its
operation has been applied to the object, and terminates prematurely.
It is similarly possible for q to detect that its copied bit in some return block RET [b]
equals ANC [q]:bit before the SC (if any) that makes RET [b] current. However, because
q's copied bit is updated only after its applied bit has been successfully installed as part
of the current return block, it follows that some process must have previously applied q's
operation. Thus, q terminates correctly in this case (see line 13).
It remains to describe how process q determines which return block contains the current
state of q's operation. It is not sufficient for q to perform a weak-LL on BANK and read the
ret field, because the weak-LL is not guaranteed to return a value of BANK if a successful
Figure 7: Process q prematurely detects that its applied bit equals ANC[q].bit. [Timeline
figure showing the values of ANC[q].bit and RET[5][q].applied as p and q execute lines
14, 26, 14, and 6.]
SC operation interferes. In this case, the weak-LL returns the ID of a "witness" process
that performs a successful SC on BANK during the weak-LL operation. In preparation
for this possibility, process p records the return block it is using in LAST [p] (line 25)
before attempting to make that block current (line 26). When q detects interference from
a successful SC, q uses the LAST entry of the witness process to determine which return
block to read. The LAST entry contains the index of a return block that was current
during q's weak-LL operation. If that block is subsequently written after being current,
then it is a copy of a more recent current return block, so its contents are still valid. Our
wait-free construction gives rise to the following result.
Theorem 4: Suppose a sequential object OBJ whose return values are at most R words
can be implemented in an array of B S-word blocks such that any operation modifies at
most T blocks and has worst-case time complexity C. Then, for any M ≥ 2T, OBJ can
be implemented in a wait-free manner with space overhead O(N(NR + B + MS)) and
worst-case time complexity O(⌈N/min(N, ⌊M/T⌋)⌉(B + N(R + C) + MS)). □
3.3 Performance Comparison
In this subsection, we describe the results of preliminary experiments that compare the
performance of Herlihy's lock-free construction for large objects to our two constructions
on a 32-processor KSR-1 multiprocessor.
The results of one set of experiments are shown in Figure 8. In these experiments,
LL and SC primitives were implemented using native KSR locks. Each of the processors
performed 1000 enqueues and 1000 dequeues on a shared queue. For testing our
constructions, we chose B (the number of blocks) and S (the size of each block) to be
approximately the square root of the total object size. Also, we chose T = 2 because each
queue operation accesses only two words. For the wait-free construction, we chose M = 2T = 4.
This is sufficient to guarantee that each process can help at least one other operation. In
fact, because two consecutive enqueue (or dequeue) operations usually access the same
blocks, choosing M = 4 is sufficient to ensure that a process often helps all other processes
each time it performs an operation. These choices for M and T result in very low space
overhead compared to that required by Herlihy's construction.
As expected, both our lock-free and wait-free constructions significantly outperform
Herlihy's construction as the queue size grows. This is because an operation in Herlihy's
construction copies the entire object, while ours copy only small parts of the object.
It is interesting to note that our wait-free construction outperforms our lock-free one.
6 It can be shown that each successful operation is guaranteed to advance the help pointer by
min(N, ⌊M/T⌋). Thus, if process p's SC fails ⌈N/min(N, ⌊M/T⌋)⌉ times, then p's operation is helped.
When considering these bounds, note that for many objects, R is a small constant. Also, for queues, C
and T are constant, and for balanced trees, C and T are logarithmic in the size of the object.
Figure 8: Performance experiments on KSR. [Plot: time for enqueues and dequeues versus
queue size, comparing "Lock_Free", "Wait_Free", "Herlihy_Lock_Free", and
"Herlihy_Lock_Free_Backoff"; plot title: Comparison of Large Object Constructions for a
Shared Queue.]
We believe that this is because the cost of recopying blocks in the event that a SC fails
dominates the cost of helping. It is also interesting to note that exponential backoff does
not significantly improve the performance of Herlihy's lock-free construction. This stands
in contrast to Herlihy's experiments on small objects, where exponential backoff played an
important role in improving performance. We believe that this is because the performance
of Herlihy's large object construction is dominated by copying and not by contention.
We should point out that we have deliberately chosen the queue to show the advantages
of our constructions over Herlihy's. In the full paper, we will also present an
implementation of a skew heap - the object considered by Herlihy. We expect that our
constructions will still outperform Herlihy's, albeit less dramatically, because ours will
copy a logarithmic number of blocks only when the sequential operation does; Herlihy's
will do so whenever a block near the bottom of the tree is modified.
4 Concluding Remarks
Our constructions improve the space and time efficiency of lock-free and wait-free implementations
of large objects. Also, in contrast to similar previous constructions, ours
do not require programmers to determine how an object should be fragmented, or how
the object should be copied. However, they do require the programmer to use special
Read and Write functions, instead of the assignment statements used in conventional
programming. Nonetheless, as demonstrated by Figure 9, the resulting code is very close
to that of an ordinary sequential implementation. Our construction could be made completely
seamless by providing a compiler or preprocessor that automatically translates
assignments to and from MEM into calls to the Read and Write functions.
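For example, such a preprocessor could perform rewrites of the following kind (our
illustration):

MEM[i] = MEM[j] + 1;      /* as written by the sequential programmer */
Write(i, Read(j) + 1);    /* as emitted by the preprocessor          */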
The applicability of our construction could be further improved by the addition of
a dynamic memory allocation mechanism. This would provide a more convenient interface
for objects such as balanced trees, which are naturally represented as nodes that are
dynamically allocated and released. There are well-known techniques for implementing
dynamic memory management in an array. These techniques could be applied directly by
the sequential object programmer, or could be provided as a subroutine library. Several
issues arise from the design of such a library. First, the dynamic memory allocation
int dequeue()
{
    int item;                               /* head, tail, elt: fixed MEM addresses */
    if (Read(head) == Read(tail))
        return EMPTY;
    item = Read(elt + Read(head));
    Write(head, (Read(head) + 1) % QSIZE);
    return item;
}

int enqueue(item)
int item;
{
    int newtail;                            /* int newtail;                    */
    newtail = (Read(tail) + 1) % QSIZE;     /* newtail = (tail + 1) % QSIZE;   */
    if (newtail == Read(head))              /* if (newtail == head)            */
        return FULL;                        /*     return FULL;                */
    Write(elt + Read(tail), item);          /* elt[tail] = item;               */
    Write(tail, newtail);                   /* tail = newtail;                 */
    return SUCCESS;                         /* return SUCCESS;                 */
}

Figure 9: C code used for the queue operations. Comments show "usual" enqueue code.
procedures must modify only a small number of array blocks, so that the advantages of our
constructions can be preserved. Second, fragmentation complicates the implementation
of allocate and release procedures. These complications can make the procedures quite
inefficient, and can even cause the allocate procedure to incorrectly report that insufficient
memory is available. Both of these problems are significantly reduced if the size of allocation
requests is fixed in advance. For many objects, this restriction is of no consequence.
For example, the nodes in a tree are typically all of the same size.
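As an indication of how simple this can be for fixed-size requests, the following C sketch
(ours; the constant FREE_HEAD and the free-list layout are invented for the example) threads
free nodes through the MEM array using the Read and Write procedures of Figure 5, storing
in each free node the address of the next one:

#define FREE_HEAD 0L              /* MEM address holding the free-list head */

extern long Read(long addr);      /* as in Figure 5; word values are        */
extern void Write(long addr, long val);   /* treated as addresses here      */

/* Allocate one fixed-size node; returns its MEM address, or -1 if none. */
long alloc_node(void)
{
    long n = Read(FREE_HEAD);
    if (n == -1) return -1;       /* free list is empty                     */
    Write(FREE_HEAD, Read(n));    /* unlink: head = head->next              */
    return n;
}

/* Return node n to the free list. */
void free_node(long n)
{
    Write(n, Read(FREE_HEAD));    /* n->next = old head                     */
    Write(FREE_HEAD, n);          /* head = n                               */
}

Each call reads and writes only a few words, and hence touches only a few blocks, so the low
copying overhead of the constructions is preserved.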
Finally, our constructions do not allow parallel execution of operations, even if the
operations access disjoint sets of blocks. We would like to extend our constructions to allow
such parallel execution where possible. For example, in our shared queue implementations,
an enqueue operation might unnecessarily interfere with a dequeue operation. In [1], we
addressed similar concerns when implementing wait-free operations on multiple objects.
Acknowledgement
We would like to thank Lars Nyland for his help with the performance
studies in Section 3.3.
--R
"Universal Constructions for Multi-Object Operations"
"A Method for Implementing Lock-Free Shared Data Structures"
"Wait-Free Synchronization"
"A Methodology for Implementing Highly Concurrent Data Objects"
"Disjoint-Access-Parallel Implementations of Strong Shared Memory Primitives"
"Software Transactional Memory"
--TR
--CTR
Maged M. Michael, High performance dynamic lock-free hash tables and list-based sets, Proceedings of the fourteenth annual ACM symposium on Parallel algorithms and architectures, August 10-13, 2002, Winnipeg, Manitoba, Canada
Mark Moir, Laziness pays! using lazy synchronization mechanisms to improve non-blocking constructions, Proceedings of the nineteenth annual ACM symposium on Principles of distributed computing, p.61-70, July 16-19, 2000, Portland, Oregon, United States
Maged M. Michael, Safe memory reclamation for dynamic lock-free objects using atomic reads and writes, Proceedings of the twenty-first annual symposium on Principles of distributed computing, July 21-24, 2002, Monterey, California
Simon Doherty , Maurice Herlihy , Victor Luchangco , Mark Moir, Bringing practical lock-free synchronization to 64-bit applications, Proceedings of the twenty-third annual ACM symposium on Principles of distributed computing, July 25-28, 2004, St. John's, Newfoundland, Canada
Yuh-Jzer Joung, Asynchronous group mutual exclusion, Distributed Computing, v.13 n.4, p.189-206, November 2000
Maurice Herlihy , Victor Luchangco , Paul Martin , Mark Moir, Nonblocking memory management support for dynamic-sized data structures, ACM Transactions on Computer Systems (TOCS), v.23 n.2, p.146-196, May 2005
Hagit Attiya , Eyal Dagan, Improved implementations of binary universal operations, Journal of the ACM (JACM), v.48 n.5, p.1013-1037, September 2001 | shared objects;concurrency;wait-free;lock-free;nonblocking synchronization |
331273 | Stereo Matching with Transparency and Matting. | This paper formulates and solves a new variant of the stereo correspondence problem: simultaneously recovering the disparities, true colors, and opacities of visible surface elements. This problem arises in newer applications of stereo reconstruction, such as view interpolation and the layering of real imagery with synthetic graphics for special effects and virtual studio applications. While this problem is intrinsically more difficult than traditional stereo correspondence, where only the disparities are being recovered, it provides a principled way of dealing with commonly occurring problems such as occlusions and the handling of mixed (foreground/background) pixels near depth discontinuities. It also provides a novel means for separating foreground and background objects (matting), without the use of a special blue screen. We formulate the problem as the recovery of colors and opacities in a generalized 3D (x, y, d) disparity space, and solve the problem using a combination of initial evidence aggregation followed by iterative energy minimization. | Introduction
Stereo matching has long been one of the central research problems in computer vision. Early
work was motivated by the desire to recover depth maps and shape models for robotics and
object recognition applications. More recently, depth maps obtained from stereo have been
painted with texture maps extracted from input images in order to create realistic 3-D scenes
and environments for virtual reality and virtual studio applications [MB95, SK95, K+96].
Unfortunately, the quality and resolution of most stereo algorithms falls quite short
of that demanded by these new applications, where even isolated errors in the depth map
become readily visible when composited with synthetic graphical elements.
One of the most common errors made by most stereo algorithms is a systematic "fatten-
ing" of depth layers near occlusion boundaries. Algorithms based on variable window sizes
[KO94] or iterative evidence aggregation [SS96] can in many instances mitigate such errors.
Another common problem is that disparities are only estimated to the nearest pixel, which is
typically not sufficiently accurate for tasks such as view interpolation. Different techniques
have been developed for computing sub-pixel estimates, such as using a finer set of disparity
hypotheses or finding the analytic minimum of the local error surface [TH86, MSK89].
Unfortunately, for challenging applications such as z-keying (the insertion of graphics
between different depth layers in video) [PW94, K + 96], even this is not good enough.
Pixels lying near or on occlusion boundaries will typically be mixed, i.e., they will contain
blends of both foreground and background colors. When such pixels are composited with
other images or graphical elements, objectionable "halos" or "color bleeding" may be visible.
The computer graphics and special effects industries faced a similar problem when extracting
foreground objects using blue screen techniques [SB96]. A variety of techniques
were developed for this matting problem, all of which model mixed pixels as combinations
of foreground and background colors (the latter of which is usually assumed to be known).
Practitioners in these fields quickly discovered that it is insufficient to merely label pixels as
foreground and background: it is necessary to simultaneously recover both the true color of
each pixel and its transparency or opacity [PD84, Bli94a].
In this paper, we develop a new, multiframe stereo algorithm which simultaneously recovers
depth, color, and transparency estimates at each pixel. Unlike traditional blue-screen
matting, we cannot use a known background color to perform the color and matte recovery.
Instead, we explicitly model a 3-D (x, y, d) disparity space, where each cell has an associated
color and opacity value. Our task is to estimate the color and opacity values which best
predict the appearance of each input image, using prior assumptions about the (piecewise-)
continuity of depths, colors, and opacities to make the problem well posed. To our knowledge,
this is the first time that the simultaneous recovery of depth, color, and opacity from
stereo images has been attempted.
We begin this paper with a review of previous work in stereo matching. In Section 3, we
discuss our novel representation for accumulating color samples in a generalized disparity
space. We then describe how to compute an initial estimate of the disparities (Section 4),
and how to refine this estimate by taking into account occlusions (Section 5). In Section
6, we develop a novel energy minimization algorithm for estimating disparities, colors and
opacities. We present some experiments on both synthetic and real images in Section 7. We
conclude the paper with a discussion of our results, and a list of topics for future research.
Previous Work
Stereo matching (and the more general problem of stereo-based 3-D reconstruction) are
fields with very rich histories [BF82, DA89]. In this section, we focus only on previous work
related to our central topics of interest: pixel-accurate matching with sub-pixel precision,
the handling of occlusion boundaries, and the use of more than two images. We also mention
techniques used in computer graphics to composite images with transparencies and to recover
matte (transparency) values using traditional blue-screen techniques.
In thinking about stereo algorithms, we have found it useful to subdivide the stereo
matching process into three tasks: the initial computation of matching costs, the aggregation
of local evidence, and the selection or computation of a disparity value for each pixel [SS96].
The most fundamental element of any correspondence algorithm is a matching cost that
measures the similarity of two or more corresponding pixels in di#erent images. Matching
costs can be defined locally (at the pixel level), e.g., as absolute [K + 96] or squared intensity
differences [MSK89], using edges [Bak80] or filtered images [JJT91, JM92]. Alternatively,
matching costs may be defined over an area, e.g., using correlation [RGH80] (this can be
viewed as a combination of the matching and aggregation stages). In this paper, we use
squared intensity differences.
Aggregating support is necessary to disambiguate potential matches. A support region
can either be two-dimensional at a fixed disparity (favoring fronto-parallel surfaces), or
three-dimensional in (x, y, d) space (allowing slanted surfaces). Two-dimensional evidence
aggregation has been done using both fixed square windows (traditional) and windows with
adaptive sizes [Arn83, KO94]. Three-dimensional support functions include limited disparity
gradient [PMF85], Prazdny's coherence principle [Pra85] (which can be implemented using
two diffusion processes [SH85]), local winner-take-all [YYL93], and iterative (non-linear)
evidence aggregation [SS96]. In this paper, our initial evidence aggregation uses an iterative
technique, with estimates being refined later through a prediction/adjustment mechanism
which explicitly models occlusions.
The easiest way of choosing the best disparity is to select at each pixel the minimum
aggregated cost across all disparities under consideration ("winner-take-all"). A problem
with this is that uniqueness of matches is only enforced for one image (the reference image),
while points in the other image might get matched to multiple points. Cooperative algorithms
employing symmetric uniqueness constraints are one attempt to solve this problem [MP76].
In this paper, we will introduce the concept of a virtual camera which is used for the initial
winner-take-all stage.
Occlusion is another very important issue in generating high-quality stereo maps. Many
approaches ignore the effects of occlusion; others try to minimize them by using a cyclopean
disparity representation [Bar89], or try to recover occluded regions after the matching by
cross-checking [Fua93]. Several authors have addressed occlusions explicitly, using Bayesian
models and dynamic programming [Arn83, OK85, BM92, Cox94, GLY92, IB94]. However,
such techniques require the strict enforcement of ordering constraints [YP84]. In this paper,
we handle occlusion by re-projecting the disparity space into each input image using
traditional back-to-front compositing operations [PD84], and eliminating from consideration
pixels which are known to be occluded. (A related technique, developed concurrently with
ours, traverses the disparity space from front to back [SD97].)
Sub-pixel (fractional) disparity estimates, which are essential for applications such as
view interpolation, can be computed by fitting a curve to the matching costs at the discrete
disparity levels [LK81, TH86, MSK89, KO94]. This provides an easy way to increase the
resolution of a stereo algorithm with little additional computation. However, to work well,
the intensities being matched must vary smoothly. In this paper, we present two different
representations for fractional disparity estimates.
Multiframe stereo algorithms use more than two images to increase the stability of the
algorithm [BBM87, MSK89, KWZK95, Col96]. In this paper, we present a new framework
for formulating the multiframe stereo problem based on the concept of a virtual camera and
a projective generalized disparity space, which includes as special cases the multiple baseline
stereo models of [OK93, KWZK95, Col96].
Finally, the topic of transparent surfaces has not received much study in the context
of computational stereo [Pra85, SH85, Wei89]. Relatively more work has been done in
the context of transparent motion estimation [BBHP92, SM91b, SM91a, JBJ96, DP91].
However, these techniques are limited to extracting a small number of dominant motions or
planar surfaces. None of these techniques explicitly recover a per-pixel transparency value
along with a corrected color value, as we do in this paper.
Our stereo algorithm has also been inspired by work in computer graphics, especially in
image compositing [PD84, Bli94a] and blue screen techniques [VT93, SB96]. While traditional
blue-screen techniques assume that the background is of a known color, we solve for
the more difficult case of each partially transparent surface pixel being the combination of
two (or more) unknown colors.
3 Disparity space representation
To formulate our (potentially multiframe) stereo problem, we use a generalized disparity
space which can be any projective sampling (collineation) of 3-D space. More concretely, we
first choose a virtual camera position and orientation. This virtual camera may be coincident
with one of the input images, or it can be chosen based on the application demands and
the desired accuracy of the results. For instance, if we wish to regularly sample a volume
of 3-D space, we can make the camera orthographic, with the camera's (x, y, d) axes being
orthogonal and evenly sampled (as in [SD97]). As another example, we may wish to use a
skewed camera model for constructing a Lumigraph [GGSC96].
Having chosen a virtual camera position, we can also choose the orientation and spacing
of the disparity planes, i.e., the constant d planes. The relationship between d and 3-D
space can be projective. For example, we can choose d to be inversely proportional to
depth, which is the usual meaning of disparity [OK93]. The information about the virtual
camera's position and disparity plane orientation and spacing can be captured in a single
4 × 4 matrix M0, which represents a collineation of 3-D space. The matrix M0 can also
capture the sampling information inherent in our disparity space, e.g., if we define disparity
space (x, y, d) to be an integer-valued sampling of the mapping X = M0 (x, y, d, 1)^T, where
X is a point in 3-D (Euclidean) space.
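For concreteness, the following C++ sketch shows how such a matrix could be applied to map a disparity-space cell to a Euclidean 3-D point; the row-major Mat4 layout and the name toWorld are illustrative assumptions, not part of the system described here.

    #include <array>

    // Hypothetical 4x4 collineation M0 mapping homogeneous disparity-space
    // coordinates (x, y, d, 1) to homogeneous 3-D world coordinates.
    using Mat4 = std::array<std::array<double, 4>, 4>;

    // Apply M0 and divide through by the homogeneous coordinate
    // (assumed nonzero) to obtain a Euclidean 3-D point.
    std::array<double, 3> toWorld(const Mat4& M0, double x, double y, double d) {
        const double in[4] = {x, y, d, 1.0};
        double out[4] = {0.0, 0.0, 0.0, 0.0};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                out[i] += M0[i][j] * in[j];
        return {out[0] / out[3], out[1] / out[3], out[2] / out[3]};
    }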
An example of a possible disparity space representation is the standard epipolar geometry
for two or more cameras placed in a plane perpendicular to their optical axes, in which case
a natural choice for disparity is inverse depth (since this corresponds to uniform steps in
displacements, i.e., the quantity which can be measured accurately) [OK93].
Other choices include the traditional cyclopean camera placed symmetrically between two
verged cameras, or a uniform sampling of 3-D which is useful in a true verged multi-camera
environment [SD97] or for motion stereo. Note that in all of these situations, integral steps
in disparity may correspond to fractional shifts in displacement, which may be desirable for
optimal accuracy.
Regardless of the disparity space selected, it is always possible to project each of the input
images onto the d = 0 plane through a simple homography (2-D perspective transform),
and to work with such re-projected (rectified) images as the inputs to the stereo algorithm.
What are the possible advantages of such a rectification step? For two or more cameras whose
optical centers are collinear, it is always possible to find a rectification in which corresponding
epipolar lines are horizontal, greatly simplifying the stereo algorithm's implementation. For
three or more cameras which are coplanar, after rectification, displacements away from the
d = 0 plane (i.e., changes in disparity) will correspond to uniform steps along fixed directions
for each camera (e.g., horizontal and vertical under a suitable camera geometry). Finally,
for cameras in general position, steps in disparity will correspond to zooms (scalings) and
sub-pixel shifts of the rectified images, which is quicker (and potentially more accurate)
than general perspective resampling [Col96]. A potential disadvantage of pre-rectification is
a slight loss in input image quality due to multiple re-samplings, but this can be mitigated
using higher-order (e.g., bicubic) sampling filters, and potentially re-sampling the rectified
images at higher resolution. Appendix A derives the equations for mapping between input
image (both rectified and not) and disparity space.
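As an illustration of the 2-D perspective transforms involved, here is a hedged C++ sketch of applying a 3 × 3 planar homography to a point; the struct layout is an assumption, and real resampling code would iterate this over all output pixels with interpolation.

    // Hypothetical 3x3 planar homography H (row-major) taking disparity-plane
    // coordinates (x, y) to input-image coordinates (u, v).
    struct Homography { double h[3][3]; };

    // Apply the homography; assumes the homogeneous divisor w is nonzero.
    void applyHomography(const Homography& H, double x, double y,
                         double& u, double& v) {
        const double w = H.h[2][0] * x + H.h[2][1] * y + H.h[2][2];
        u = (H.h[0][0] * x + H.h[0][1] * y + H.h[0][2]) / w;
        v = (H.h[1][0] * x + H.h[1][1] * y + H.h[1][2]) / w;
    }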
In this paper, we introduce a generalization of the (x, y, d) space. If we consider each
of the K images as being samples along a fictitious "camera" dimension k, we end
up with a 4-D (x, y, d, k) space. In this space, the values in a given (x, y, d) cell as k varies
can be thought of as the color distributions at a given location in space, assuming that this
location is actually on the surface of the object. We will use these distributions as the inputs
to our first stage of processing, i.e., by computing mean and variance statistics. A different
slice through (x, y, d, k) space, this time by fixing k, gives the series of shifted images seen
by one camera. In particular, compositing these images in a back-to-front order, taking into
account each voxel's opacity, should reconstruct what is seen by a given (rectified) input
image (see Section 5). 1
Figure 1 shows a set of sample images, together with an (x, d, k) slice through the 4-D
1 Note that this 4-D space is not the same as that used in the Lumigraph [GGSC96], where the description
is one of rays in 3-D, as opposed to color distributions across multiple cameras in 3-D. It is also not the same
as an epipolar-plane image (EPI) volume [BBM87], which is a simple concatenation of warped input images.
space (y is fixed at a given scanline), where color samples varying in k are grouped together.
4 Estimating an initial disparity surface
The first step in stereo matching is to compute some initial evidence for a surface existing
at (or near) a location (x, y, d) in disparity space. We do this by conceptually populating
the entire 4-D (x, y, d, k) space with colors obtained by resampling the K input images,

    c(x, y, d, k) = W_f(c_k(u, v); H_k + t_k [0 0 d]),    (1)

where c_k(u, v) is the kth input image, 2 H_k + t_k [0 0 d] is the homography mapping this image
to disparity plane d (see Appendix A), W_f is the forward warping operator, 3 and c(x, y, d, k)
is the pixel mapped into the 4-D generalized disparity space. Algorithmically, this can be
achieved either by first rectifying each image onto the d = 0 plane, or by directly using a
homography (planar perspective transform) to compute each (d, k) slice. Note that at this
stage, not all (x, y, d, k) cells will be populated, as some of these may map to pixels which
are outside some of the input images.
Once we have a collection of color (or luminance) values at a given (x, y, d) cell, we can
compute some initial statistics over the K (or fewer) colors, e.g., the sample mean μ and
variance σ. 5 Robust estimates of sample mean and variance are also possible (e.g., [SS96]).
Examples of the mean and variance values for our sample image are shown in Figures 1d
and 1e, where darker values indicate smaller variances.
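A minimal sketch of these per-cell statistics, assuming monochrome samples gathered into a vector (the real system operates on colors and may use robust estimators):

    #include <vector>

    // Compute the sample mean and variance of the color samples collected at
    // one (x, y, d) cell across the K cameras.
    void cellStatistics(const std::vector<double>& samples,
                        double& mean, double& variance) {
        if (samples.empty()) { mean = variance = 0.0; return; }
        mean = 0.0;
        for (double s : samples) mean += s;
        mean /= samples.size();
        variance = 0.0;
        for (double s : samples) variance += (s - mean) * (s - mean);
        variance /= samples.size();
    }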
After accumulating the local evidence, we usually do not have enough information to
determine the correct disparities in the scene (unless each pixel has a unique color). While
2 The color values c can be replaced with gray-level intensity values without affecting the validity of our
analysis.
3 In our current implementation, the warping (resampling) algorithm uses bi-linear interpolation of the
pixel colors and opacities.
4 For certain epipolar geometries, even more efficient algorithms are possible, e.g., by simply shifting along
epipolar lines [K + 96].
5 In many traditional stereo algorithms, it is common to effectively set the mean to be just the value in one
image, which makes these algorithms not truly multiframe [Col96]. The sample variance then corresponds
to the squared di#erence or sum of squared di#erences [OK93].
Figure 1: Sample slices through a 4-D disparity space: (a-b) sample input images, (c) (x, d, k)
slice for scanline 17, (d) means and (e) variances as a function of (x, d) (smaller variances
are darker), (f) variances after evidence accumulation, (g) results of winner-takes-all for
whole image (undecided columns in white), (h-i) colors and opacities at disparities 1 and 5.
For easier interpretation, all images have been composited over an opaque white background.
pixels at the correct disparity should in theory have zero variance, this is not true in the
presence of image noise, fractional disparity shifts, and photometric variations (e.g., specularities).
The variance may also be arbitrarily high in occluded regions, where pixels which
actually belong to a different disparity level will nevertheless vote, often leading to gross
errors. For example, in Figure 1c, the middle (red) group of pixels should all have
the same color in any given column, but they do not because of resampling errors. This
effect is especially pronounced near the edge of the red square, where the red color has been
severely contaminated by the background blue. This contamination is one of the reasons
why most stereo algorithms make systematic errors in the vicinity of depth discontinuities.
To help disambiguate matches, we can use local evidence aggregation. The most common
form is averaging using square windows, which results in the traditional sum of squared
difference (SSD and SSSD) algorithms [OK93]. To obtain results with better quality near
discontinuities, it is preferable to use adaptive windows [KO94] or iterative evidence accumulation
[SS96]. In the latter case, we may wish to accumulate an evidence measure which
is not simply summed error (e.g., the probability of a correct match [SS96]). Continuing our
simple example, Figure 1f shows the results of an evidence accumulation stage, where more
certain depths are darker. To generate these results, we aggregate evidence using a variant
of the algorithm described in [SS96]. In this update, σ_i^t is the variance of pixel i at
iteration t, a robustified (limited) version of each variance is computed first, and N_4 are the
usual four nearest neighbors; each iteration blends a pixel's robustified variance with those
of its N_4 neighbors. For the results in Figure 1, we use this iterative scheme.
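Since the exact update did not survive extraction, the following C++ sketch shows only one plausible diffusion-style iteration of this kind; the mixing weight lambda and the clamp sigmaMax are assumptions, not the authors' published constants.

    #include <algorithm>
    #include <vector>

    // One diffusion-style aggregation step over a 2-D variance image.
    void diffuseVariance(std::vector<std::vector<double> >& sigma,
                         double lambda, double sigmaMax) {
        const int H = static_cast<int>(sigma.size());
        const int W = static_cast<int>(sigma[0].size());
        std::vector<std::vector<double> > next = sigma;
        auto clamp = [sigmaMax](double v) { return std::min(v, sigmaMax); };
        for (int y = 1; y + 1 < H; ++y)
            for (int x = 1; x + 1 < W; ++x) {
                // Blend a pixel's robustified variance with its four neighbors'.
                double nbrs = clamp(sigma[y - 1][x]) + clamp(sigma[y + 1][x]) +
                              clamp(sigma[y][x - 1]) + clamp(sigma[y][x + 1]);
                next[y][x] = (1.0 - 4.0 * lambda) * clamp(sigma[y][x]) + lambda * nbrs;
            }
        sigma.swap(next);
    }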
At this stage, most stereo matching algorithms pick a winning disparity in each (x, y)
column, and call this the final correspondence map. Optionally, they may also compute a
fractional disparity value by fitting an analytic curve to the error surface around the winning
disparity and then finding its minimum [MSK89, OK93]. Unfortunately, this does nothing
to resolve several problems: occluded pixels may not be handled correctly (since they have
"inconsistent" color values at the correct disparity), and it is di#cult to recover the true
(unmixed) color values of surface elements (or their opacities, in the case of pixels near
discontinuities).
Our solution to this problem is to use the initial disparity map as the input to a refinement
stage which simultaneously estimates the disparities, colors, and opacities which best match
the input images while conforming to some prior expectations on smoothness. To start this
procedure, we initially pick only winners in each column where the answer is fairly certain,
i.e., where the variance ("scatter" in color values) is below a threshold and is a clear winner
with respect to the other candidate disparities. 6 A new (x, y, d) volume is created, where
each cell now contains a color value, initially set to the mean color computed in the first
stage, and the opacity is set to 1 for cells which are winners, and 0 otherwise. 7
5 Computing visibilities through re-projection
Once we have an initial (x, y, d) volume containing estimated RGBA (color and 0/1 opacity)
values, we can re-project this volume into each of the input cameras using the known
transformation

    x_k ≅ M_k M0^{-1} x0

(see (14) in Appendix A), where x0 is a (homogeneous) coordinate in (x, y, d) space, M0 is the
complete camera matrix corresponding to the virtual camera, M_k is the kth camera matrix,
and x_k are the image coordinates in the kth image.
performing this projection, including classical volume rendering techniques [Lev90, LL94]. In
our approach, we interpret the (x, y, d) volume as a set of (potentially) transparent acetates
stacked at di#erent d levels. Each acetate is first warped into a given input camera's frame
6 To account for resampling errors which occur near rapid color or luminance changes, we set the threshold
proportional to the local image variation within a 3 × 3 window. In our experiments, the threshold is set to
7 We may, for computational reasons, choose to represent this volume using colors premultiplied by their
opacities (associated colors [PD84, Bli94a]), in which case voxels for which alpha (opacity) is 0 should have
their color or intensity values set to 0. See [Bli94a, Bli94b] for a discussion of the advantages of using
premultiplied colors.
using the known homography

    H_k + t_k [0 0 d]    (4)
and the layers are then composited back-to-front (this is called a shear-warp algorithm
[LL94]). 8
The resampling procedure for a given layer d into the coordinate system of camera k can
be written as

    ĉ_k(u, v, d) = W_b(c(x, y, d)),

where c(x, y, d) is the current color and opacity estimate at a given location (x, y, d), ĉ_k(u, v, d)
is the resampled layer d in camera k's coordinate system, and W_b is the resampling operation
induced by the homography (4). 9 Note that the warping function is linear in the colors and
opacities being resampled, i.e., the ĉ_k(u, v, d) can be expressed as a linear function of the
c(x, y, d), e.g., through a sparse matrix multiplication.
Once the layers have been resampled, they are then composited using the standard over
operator [PD84],

    f over b = f + (1 - α_f) b,

where f and b are the premultiplied foreground and background colors, and α_f is the opacity
of the foreground [PD84, Bli94a]. Using the over operator, we can form a composite image

    ĉ_k(u, v) = ĉ_k(u, v, d_max) over ... over ĉ_k(u, v, d_min)

(note that the over operator is associative but not commutative, and that d_max is the layer
closest to the camera).
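A minimal C++ sketch of the premultiplied over operator and the back-to-front compositing loop; the RGBA struct and layer ordering are illustrative assumptions.

    #include <vector>

    struct RGBA { double r, g, b, a; };  // premultiplied colors

    // Standard over operator on premultiplied RGBA values [PD84].
    RGBA over(const RGBA& f, const RGBA& b) {
        const double t = 1.0 - f.a;  // transmittance of the foreground
        return {f.r + t * b.r, f.g + t * b.g, f.b + t * b.b, f.a + t * b.a};
    }

    // layers[0] is d_min (farthest); the result is d_max over ... over d_min.
    RGBA composite(const std::vector<RGBA>& layers) {
        RGBA acc{0.0, 0.0, 0.0, 0.0};
        for (const RGBA& layer : layers)  // far to near
            acc = over(layer, acc);       // nearer layer goes over the accumulation
        return acc;
    }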
After the re-projection step, we refine the disparity estimates by preventing visible surface
pixels from voting for potential disparities in the regions they occlude. More precisely, we
build an (x, y, d, visibility map, which indicates whether a given camera k can see a voxel
at location (x, y, d). A simple way to construct such a visibility map is to record the disparity
8 If the input images have been rectified, or under certain imaging geometries, this homography will be a
simple scale and/or shift (Appendix A).
9 This is the inverse of the warp specified in (1).
value d_top for each (u, v) pixel which corresponds to the topmost opaque pixel seen during
the compositing step. 10 The visibility value can then be defined as

    V_k(u, v, d) = (d >= d_top(u, v)).

The visibility and opacity (alpha) values taken together can be interpreted as follows:

    V_k(u, v, d) = 1 : voxel visible in image k
    V_k(u, v, d) = 0 : voxel not visible in image k
A more principled way of defining visibility, which takes into account partially opaque
voxels, uses a recursive front-to-back algorithm

    V_k(u, v, d - 1) = V_k(u, v, d) (1 - α_k(u, v, d))

(where α_k(u, v, d) is the opacity channel of the resampled layer ĉ_k(u, v, d)), with the initial
visibilities all being set to 1, V_k(u, v, d_max) = 1. We now have a very simple
(linear) expression for the compositing operation,

    ĉ_k(u, v) = Σ_{d = d_min}^{d_max} V_k(u, v, d) ĉ_k(u, v, d).
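In code, the reconstructed recurrence might look like the following sketch for a single (u, v) pixel; the layer indexing convention is an assumption.

    #include <vector>

    // alpha is indexed d_min..d_max, so alpha.back() is the layer nearest the
    // camera; visibility sweeps front to back with V(d_max) = 1.
    std::vector<double> visibility(const std::vector<double>& alpha) {
        const int n = static_cast<int>(alpha.size());
        std::vector<double> V(n, 1.0);
        for (int d = n - 1; d > 0; --d)
            V[d - 1] = V[d] * (1.0 - alpha[d]);
        return V;
    }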
Once we have computed the visibility volumes for each input camera, we can update the
list of color samples we originally used to get our initial disparity estimates. Let

    c_k(u, v, d) = V_k(u, v, d) c_k(u, v)

be the input color image multiplied by its visibility at disparity d. If we substitute c_k(u, v, d)
for c_k(u, v) in (1), we obtain a distribution of colors in (x, y, d, k) space, where each color has an
associated visibility value (Figure 2c). Voxels which are occluded by surfaces lying in front in
a given view k will now have fewer (or potentially no) votes in their local color distributions.
We can therefore recompute the local mean and variance estimates using weighted statistics,
where the visibilities V(x, y, d, k) provide the weights (Figures 2d and 2e).
10 Note that it is not possible to compute visibility in (x, y, d) disparity space, as several opaque pixels in
disparity space may potentially project to the same input camera pixel.
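A sketch of the weighted statistics just described, with the visibilities serving as weights so that occluded cameras contribute little or nothing (monochrome samples assumed for brevity):

    #include <vector>

    // Visibility-weighted mean and variance over per-camera color samples.
    void weightedStatistics(const std::vector<double>& colors,
                            const std::vector<double>& weights,
                            double& mean, double& variance) {
        double wSum = 0.0, cSum = 0.0;
        for (size_t k = 0; k < colors.size(); ++k) {
            wSum += weights[k];
            cSum += weights[k] * colors[k];
        }
        if (wSum == 0.0) { mean = variance = 0.0; return; }
        mean = cSum / wSum;
        variance = 0.0;
        for (size_t k = 0; k < colors.size(); ++k)
            variance += weights[k] * (colors[k] - mean) * (colors[k] - mean);
        variance /= wSum;
    }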
Figure 2: After modifying input images by visibility V_k(u, v, d): (a-b) re-synthesized views
of sample images, (c) (x, d, k) slice for scanline 17, (d) means and (e) variances as a function
of (x, d), (f) variances after evidence accumulation, (g) results of winner-takes-all for
whole image, and (h-i) colors and opacities at disparities 1 and 5 after one iteration of the
reprojection algorithm.
With these new statistics, we are now in a position to refine the disparity map. In particular,
voxels in disparity space which previously had an inconsistent set of color votes (large σ)
may now have a consistent set of votes, because voxels in (partially occluded) regions
will now only receive votes from input pixels which are not already assigned to nearer
surfaces (Figure 2c-f). Figure 2g-i show the results after one iteration of this algorithm.
6 Refining color and transparency estimates
While the above process of computing visibilities and refining disparity estimates will in
general lead to a higher quality disparity map (and better quality mean colors, i.e., texture
maps), we have not yet addressed the issue of recovering true colors and transparencies in
mixed pixels, e.g., near depth discontinuities, which is one of the main goals of this research.
A simple way to approach this problem is to take the binary opacity maps produced by
our stereo matching algorithm, and to make them real-valued using a low-pass filter. Another
possibility might be to recover the transparency information by looking at the magnitude
of the intensity gradient [MYT95], assuming that we can isolate regions which belong to
different disparity levels.
In our work, we have chosen instead to adjust the opacity and color values c(x, y, d)
to match the input images (after re-projection), while favoring continuity in the color and
opacity values. This can be formulated as a non-linear minimization problem, where the
cost function has three parts:
1. a weighted error norm on the difference between the re-projected images ĉ_k(u, v) and
the original (or rectified) input images c_k(u, v),

    C_1 = Σ_k Σ_{(u,v)} w_k(u, v) ||ĉ_k(u, v) - c_k(u, v)||²,

where the weights w_k(u, v) may depend on the position of camera k relative to the
virtual camera; 11
More precisely, we may wish to measure the angle between the viewing ray corresponding to (u, v) in
the two cameras. However, the ray corresponding to (u, v) in the virtual camera depends on the disparity d.
2. a (weak) smoothness constraint on the colors and opacities,

    C_2 = Σ_{(x,y,d)} Σ_{(x',y',d') ∈ N(x,y,d)} [ρ_1(c(x', y', d') - c(x, y, d)) + ρ_2(α(x', y', d') - α(x, y, d))];    (10)

3. a prior distribution on the opacities,

    C_3 = Σ_{(x,y,d)} φ(α(x, y, d)).    (11)
In the above equations, ρ_1 and ρ_2 are either quadratic functions or robust penalty functions
[Hub81], and φ is a function which encourages opacities to be 0 or 1.
The smoothness constraint on colors makes more sense with non-premultiplied colors.
For example, a voxel lying on a depth discontinuity will be partially transparent, and yet
should have the same non-premultiplied color as its neighbors. An alternative, which allows
us to work with premultiplied colors, is to use a smoothness constraint of the form

    C_2' = Σ_{(x,y,d)} Σ_{(x',y',d') ∈ N(x,y,d)} ρ_1(α(x, y, d) c(x', y', d') - α(x', y', d') c(x, y, d)).    (12)
To minimize the total cost function

    C = C_1 + C_2 + C_3,

we use a preconditioned gradient descent algorithm. Appendix B contains details on how to
compute the required gradients and Hessians.
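The following fragment sketches the shape of one such descent step (without preconditioning) on a single opacity value; clamping enforces the [0, 1] range noted in the text, and the assumption is that the caller supplies dC/dα for this cell.

    #include <algorithm>

    // One plain gradient-descent step on a single opacity value; `gradient`
    // is assumed to hold dC/d(alpha) for this cell, computed from C_1 + C_2 + C_3.
    double descendOpacity(double alpha, double gradient, double stepSize) {
        return std::clamp(alpha - stepSize * gradient, 0.0, 1.0);
    }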
7 Experiments
To study the properties of our new stereo correspondence algorithm, we ran a small set of
experiments on some synthetic stereo datasets, both to evaluate the basic behavior of the
algorithm (aggregation, visibility-based refinement, and energy minimization), and to study
its performance on mixed (boundary) pixels. Being able to visualize opacities/transparencies
is very important for understanding and validating our algorithm. For this reason, we chose
All color and opacity values are, of course, constrained to lie in the range [0, 1], making this a constrained
optimization problem.
color stimuli (the background is blue-green, and the foreground is red). Pixels which are
partially transparent will show up as "pale" colors, while fully transparent pixels will be
white. We should emphasize that our algorithm does not require colored images as inputs
(see Figure 5), nor does it require the use of standard epipolar geometries.
The first stimulus we generated was a traditional random-dot stereogram, where the
choice of camera geometry and filled disparity planes results in integral pixel shifts. This
example also contains no partially transparent pixels. Figure 3 shows the results on this
stimulus. The first eight columns are the eight disparity planes in (x, y, d) space, showing
the estimated colors and opacities (smaller opacities are shown as lighter colors, since the
RGBA colors are composited over a white background). The ninth and tenth columns are two
re-synthesized views (leftmost and middle). The last column is the re-synthesized middle
view with a synthetic light-gray square inserted at disparity d = 3.
As we can see in Figure 3, the basic iterative aggregation algorithm results in a "perfect"
reconstruction, although only one pixel is chosen in each column. For this reason, the re-synthesized
leftmost view (ninth column) contains a large "gap".
Figure 3b shows the results of using only the first C 1 term in our cost function, i.e., only
matching re-synthesized views with input images. The re-synthesized view in column nine is
now much better, although we see that a bit of the background has bled into the foreground
layers, and that the pixels near the depth discontinuity are spread over several disparities.
Adding the smoothness constraint C 2 (Figure 3c) ameliorates both of these problems.
Adding the (weak) 0/1 opacity constraint C_3 (Figure 3d-e) further removes stray pixels
at wrong disparity levels. Figure 3d shows a "softer" variant of the opacity constraint:
more pixels are being filled in, but the re-synthesized views are
very good. Figure 3e shows a "harder" constraint: only pixels
adjacent to initial estimates are filled in, at the cost of a gap in some re-synthesized views.
For comparison, Figure 3f shows the results of a traditional winner-take-all algorithm (the
same as Figure 3a with a very large σ_min and no occluded pixel removal). We can clearly see
the effects of background colors being pulled into the foreground layer, as well as increased
errors in the occluded regions.
Figure 3: Traditional synthetic RDS results: (a) after iterative aggregation but before
gradient descent, (b) without smoothness or opacity constraint,
(c) without opacity constraint, (d-e) with all three constraints ("softer" and "harder"
opacity priors), (f) simple winner-take-all (shown for comparison). The first eight columns
are the disparity layers, d = 0, ..., 7. The ninth and tenth columns are re-synthesized sample
views. The last column is a re-synthesized view with a synthetic gray square inserted at
disparity d = 3.
Figure 4: More challenging synthetic RDS results: (a) after iterative aggregation but before
gradient descent, (b) without smoothness or opacity constraint, (c) without opacity
constraint, (d) with all three constraints, (e) simple winner-take-all (shown for comparison).
The first eight columns are the disparity layers, d = 0, ..., 7. The ninth and tenth columns
are re-synthesized sample views. The last column is the re-synthesized view with a synthetic
gray square inserted.
Our second set of experiments uses the same synthetic stereo dataset as shown in Figures
1 and 2. Here, because the background layer is at an odd disparity, we get significant re-sampling
errors (because we currently use bilinear interpolation) and mixed pixels. The
stimulus also has partially transparent pixels along the edge of the top half-circle in the
foreground shape. This stereo dataset is significantly more difficult to match than previous
random-dot stereograms.
Figure 4a shows the results of applying only our iterative aggregation algorithm, without
any energy minimization. The set of estimated disparities is insufficient to completely
reconstruct the input images (this could be changed by adjusting σ_min and the clear-winner
margin used in the initial winner-take-all stage); also, several pixels are incorrectly assigned
to the wrong layer (due to difficulties in disambiguating depths in partially occluded regions).
Figure 4b shows the results of using only the first C 1 term in our cost function, i.e., only
matching re-synthesized views with input images. The re-synthesized view in column nine is
now much better, although we see that a bit of the background has bled into the foreground
layers, and that the pixels near the depth discontinuity are spread over several disparities.
Adding the smoothness constraint C 2 (Figure 4c) ameliorates both of these problems.
Adding the (weak) 0/1 opacity constraint C 3 (Figure 4d) further removes stray pixels at
wrong disparity levels, but at the cost of an incompletely reconstructed image (this is less
of a problem if the foreground is being layered on a synthetic background, as in the last
column). As before, Figure 4e shows the results of a traditional winner-take-all algorithm.
Figure 5 shows the results on a cropped portion of the SRI Trees multibaseline stereo
dataset. A small region (64 × 64 pixels) was selected in order to better visualize pixel-level
errors. While the overall reconstruction is somewhat noisy, the final reconstruction with a
synthetic blue layer inserted shows that the algorithm has done a reasonable job of assigning
pixel depths and computing partial transparencies near the tree boundaries.
From these examples, it is apparent that the algorithm is currently sensitive to the choice
of parameters used to control both the initial aggregation stage and the energy minimization
phase. Setting these parameters automatically will be an important area for further research.
Figure 5: Real image example: (a) cropped subimage from SRI Trees data set, (b) depth
map after initial aggregation stage, (c-l) disparity layers, (m) re-synthesized input image,
(n) re-synthesized image with a synthetic blue layer inserted.
8 Discussion

While our preliminary experimental results are encouraging, the simultaneous recovery of
accurate depth, color, and opacity estimates remains a challenging problem. Traditional
stereo algorithms search for a unique disparity value at each pixel in a given reference image.
Our approach, on the other hand, is to recover a sparsely populated volume of colors and
opacities. This has the advantage of correctly modeling mixed pixels and occlusion e#ects,
and allows us to merge images from very disparate points of view. Unfortunately, it also
makes the estimation problem much more di#cult, since the number of free parameters often
exceeds the number of measurements, hence necessitating smoothness constraints and other
prior models.
Partially occluded areas are problematic because very few samples may be available to
disambiguate depth. A more careful analysis of the interaction between the measurement,
smoothness, and opacity constraints will be required to solve this problem. Other problems
occur near depth discontinuities, and in general near rapid intensity (albedo) changes, where
the scatter in color samples may be large because of re-sampling errors. Better imaging and
sensor models, or perhaps working on a higher resolution image grid, might be required to
solve these problems.
8.1 Future work
There are many additional topics related to transparent stereo and matting which we plan to
investigate. For example, we plan to try our algorithm on data sets with true transparency
(not just mixed pixels), such as traditional transparent random dot stereograms [Pra85,
Wei89] and reflections in windows [BBHP92].
Estimating disparities to sub-integer precision should improve the quality of our reconstructions.
Such fractional disparity estimates can be obtained by interpolating a variance
vs. disparity curve σ(d), e.g., by fitting a parabola to the lowest variance and its two
neighbors [TH86, MSK89]. Alternatively, we can linearly interpolate individual color errors
c(x, y, d, k) - μ(x, y, d) between disparity levels, and find the minimum of the summed
squared error (which will be a quadratic function of the fractional disparity).
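The parabola fit mentioned above has a standard closed form; the following sketch returns the fractional offset from the winning disparity d given the variances at d - 1, d, and d + 1.

    // Vertex of the parabola through (-1, sPrev), (0, s0), (1, sNext);
    // the returned offset lies in [-0.5, 0.5] when s0 is the true minimum.
    double subPixelOffset(double sPrev, double s0, double sNext) {
        const double denom = sPrev - 2.0 * s0 + sNext;
        if (denom <= 0.0) return 0.0;          // degenerate or non-convex fit
        return 0.5 * (sPrev - sNext) / denom;  // offset relative to d
    }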
Instead of representing our color volume c(x, y, d) using colors pre-multiplied by their
opacities [Bli94a], we could keep these quantities separate. Thus, colors could "bleed" into
areas which are transparent, which may be a more natural representation for color smoothness
(e.g., for surfaces with small holes). Different color representations such as hue, saturation,
intensity (HSV) may also be more suitable for performing correspondence [GB95], and
they would permit us to reason more directly about underlying physical processes (shadows,
shading, etc.).
We plan to investigate the relationship of our new disparity space model to more traditional
layered motion models [BBHP92, SM91b, SM91a, DP91, JBJ96, SA96]. We also
plan to make more principled use of robust statistics, and investigate alternative search algorithms
such as multiresolution (pyramidal) continuation methods and stochastic (Monte
Carlo) gradient descent techniques.
9 Conclusions
In this paper, we have developed a new framework for simultaneously recovering disparities,
colors, and opacities from multiple images. This framework enables us to deal with many
commonly occurring problems in stereo matching, such as partially occluded regions and
pixels which contain mixtures of foreground and background colors. Furthermore, it promises
to deliver better quality (sub-pixel accurate) color and opacity estimates, which can be used
for foreground object extraction and mixing live and synthetic imagery.
To set the problem in as general a framework as possible, we have introduced the notion
of a virtual camera which defines a generalized disparity space, which can be any regular
projective sampling of 3-D. We represent the output of our algorithm as a collection of color
and opacity values lying on this sampled grid. Any input image can (in principle) be re-synthesized
by warping each disparity layer using a simple homography and compositing the
images. This representation can support a much wider range of synthetic viewpoints in view
interpolation applications than a single texture-mapped depth image.
To solve the correspondence problem, we first compute mean and variance estimates at
each cell in our (x, y, d) grid. We then pick a subset of the cells which are likely to lie on the
reconstructed surface using a thresholded winner-take-all scheme. The mean and variance
estimates are then refined by removing from consideration cells which are in the occluded
(shadow) region of each current surface element, and this process is repeated.
Starting from this rough estimate, we formulate an energy minimization problem consisting
of an input matching criterion, a smoothness criterion, and a prior on likely opacities.
This criterion is then minimized using an iterative preconditioned gradient descent algorithm.
While our preliminary experimental results look encouraging, there remains much work
to be done in developing truly accurate and robust correspondence algorithms. We believe
that the development of such algorithms will be crucial in promoting a wider use of stereo-based
imaging in novel applications such as special effects, virtual reality modeling, and
virtual studio productions.
--R
Automated stereo perception.
A virtual studio for live broadcasting: The Mona Lisa project.
Edge based stereo correlation.
Stochastic stereo matching over scale.
A three-frame algorithm for estimating two-component image motion
Computational stereo.
Compositing
Compositing
A Bayesian treatment of the stereo correspondence problem using half-occluded regions
A space-sweep approach to true multi-image matching
A maximum likelihood n-camera stereo algorithm
Structure from stereo-a review
Robust estimation of a multi-layered motion represen- tation
A parallel stereo algorithm that produces dense depth maps and preserves image features.
Motion from color.
The lumigraph.
In Second European Conference on Computer Vision (ECCV'92)
Robust Statistics.
Skin and bones: Multi-layer
Techniques for disparity measure- ment
A computational framework for determining stereo correspondence from a set of linear spatial filters.
A stereo machine for video-rate dense depth mapping and its new applications
A stereo matching algorithm with an adaptive win- dow: Theory and experiment
A multibaseline stereo system with active illumination and real-time image acquisition
An iterative image registration technique with an application in stereo vision.
Fast Volume
Plenoptic modeling: An image-based rendering system
Cooperative computation of stereo disparity.
Kalman filter-based algorithms for estimating depth from image sequences
Human assisted key extrac- tion
Stereo by intra- and inter-scanline search using dynamic pro- gramming
A multiple baseline stereo.
Numerical Recipes in C: The Art of Scientific Computing.
PMF: A stereo correspondence algorithm using a disparity gradient limit.
Detection of binocular disparities.
Image Processing for Broadcast and Video Production
Prediction of correlation errors in stereo-pair images
Compact representation of videos through dominant multiple motion estimation.
Blue screen matting.
Photorealistic scene reconstruction by voxel coloring.
Solving random-dot stereograms using the heat equation
Direct methods for visual scene reconstruction.
Principle of superposition: A common computational framework for analysis of multiple motion
A unified computational theory of motion transparency and motion boundaries based on eigenenergy analysis.
Stereo matching with non-linear diffusion
Algorithms for subpixel registration.
Traveling matte composite photography.
Perception of multiple transparent planes in stereo vision.
A generalized ordering constraint for stereo correspon- dence
"voting"
--TR
Algorithms for subpixel registration
Efficient ray tracing of volume data
Techniques for disparity measurement
A Three-Frame Algorithm for Estimating Two-Component Image Motion
Numerical recipes in C (2nd ed.)
Fast volume rendering using a shear-warp factorization of the viewing transformation
Disparity-space images and large occlusion stereo
Plenoptic modeling
AutoKey
Compact Representations of Videos Through Dominant and Multiple Motion Estimation
The lumigraph
Blue screen matting
Layered depth images
Computational Stereo
Compositing, Part 1
Compositing, Part 2
A Multiple-Baseline Stereo
A Stereo Matching Algorithm with an Adaptive Window
A Computational Framework for Determining Stereo Correspondence from a Set of Linear Spatial Filters
Occlusions and Binocular Stereo
Photorealistic Scene Reconstruction by Voxel Coloring
A Unified Mixture Framework for Motion Segmentation
Skin and Bones
A Stereo Machine for Video-Rate Dense Depth Mapping and Its New Applications
Stereo Matching with Non-Linear Diffusion
A Space-Sweep Approach to True Multi-Image Matching
A Layered Approach to Stereo Reconstruction
Compositing digital images
Direct Method for Visual Scene Reconstruction
A multibaseline stereo system with active illumination and real-time image acquisition
--CTR
Steven Seitz , Richard Szeliski, From the guest editors, ACM SIGGRAPH Computer Graphics, v.33 n.4, p.35-37, Nov.2000
Yen-Hsiang Fang , Hong-Long Chou , Zen Chen, 3D shape recovery of complex objects from multiple silhouette images, Pattern Recognition Letters, v.24 n.9-10, p.1279-1293, 01 June
C. Lawrence Zitnick , Sing Bing Kang , Matthew Uyttendaele , Simon Winder , Richard Szeliski, High-quality video view interpolation using a layered representation, ACM Transactions on Graphics (TOG), v.23 n.3, August 2004
Antonio Criminisi , Sing Bing Kang , Rahul Swaminathan , Richard Szeliski , P. Anandan, Extracting layers and analyzing their specular properties using epipolar-plane-image analysis, Computer Vision and Image Understanding, v.97 n.1, p.51-85, January 2005
Yin Li , Heung-Yeung Shum , Chi-Keung Tang , Richard Szeliski, Stereo Reconstruction from Multiperspective Panoramas, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.1, p.45-62, January 2004
Mi-Suen Lee , Gerard Medioni , Philippos Mordohai, Inference of Segmented Overlapping Surfaces from Binocular Stereo, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.6, p.824-837, June 2002
Silvio Savarese , Marco Andreetto , Holly Rushmeier , Fausto Bernardini , Pietro Perona, 3D Reconstruction by Shadow Carving: Theory and Practical Evaluation, International Journal of Computer Vision, v.71 n.3, p.305-336, March 2007
Evren mre , Sebastian Knorr , Burak zkalayc , Uur Topay , A. Aydn Alatan , Thomas Sikora, Towards 3-D scene reconstruction from broadcast video, Image Communication, v.22 n.2, p.108-126, February, 2007
C. Lawrence Zitnick , Sing Bing Kang, Stereo for Image-Based Rendering using Image Over-Segmentation, International Journal of Computer Vision, v.75 n.1, p.49-65, October 2007
Sing Bing Kang , Richard Szeliski, Extracting View-Dependent Depth Maps from a Collection of Images, International Journal of Computer Vision, v.58 n.2, p.139-163, July 2004
Daniel Scharstein , Richard Szeliski, A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms, International Journal of Computer Vision, v.47 n.1-3, p.7-42, April-June 2002 | transparency;matting problem;stereo correspondence;3D representations;occlusions;3D reconstruction |
331525 | Automatically Checking an Implementation against Its Formal Specification. | We propose checking the execution of an abstract data type's imperative implementation against its algebraic specification. An explicit mapping from implementation states to abstract values is added to the imperative code. The form of specification allows mechanical checking of desirable properties such as consistency and completeness, particularly when operations are added incrementally to the data type. During unit testing, the specification serves as a test oracle. Any variance between computed and specified values is automatically detected. When the module is made part of some application, the checking can be removed, or may remain in place for further validating the implementation. The specification, executed by rewriting, can be thought of as itself an implementation with maximum design diversity, and the validation as a form of multiversion-programming comparison. | Introduction
Encapsulated data abstractions, also called abstract data types (ADTs), are the most promising
programming-language idea to support software engineering. The ADT is the basis for the
"information-hiding" design philosophy [50] that makes software easier to analyze and understand, and that
can hope to support maintenance and reuse. There are several formal specification techniques for
ADTs [30], a growing number of language implementations of the idea [27], and accepted theories
of ADT correctness [20, 26, 28]. The ADT is a good setting for work on unit testing and testability
[34].
However, for all the ADT's promise, fundamental problems remain concerning ADTs, their
specifications, and implementations. In this paper we address the problem of checking agreement
between an ADT's formal specification and its implementation. We use a particular kind of equational
specification (as rewrite rules) [4], and the C ++ implementation language [54]. The latter
choice is not crucial to this work-almost any ADT programming language, e.g., Ada or Java,
would work as well. We do use special properties of one kind of rewrite specification, which would
make it more difficult to substitute a different kind of specification part. We show how to write C ++
classes and their formal specifications so that the implementation is automatically checked against
the specification during execution. Thus the specification serves as an "effective oracle," and in
any application the self-checking ADT cannot deviate from specified behavior without the failure
being detected.
Because the specification oracle may be inefficient in comparison to the C ++ implementation, its
use may be confined to prototyping and the testing phase of software development. However, for
some applications the specification and implementation may be viewed as independent "versions"
of the same software, which continually check each other. Since both were produced by human
beings, both are subject to error, but their utterly different form and content suggest that there is
minimum chance of a common-mode failure [43].
The central idea that allows self checking is to implement, as part of the C ++ code, the mapping
between concrete implementation states and abstract specification objects. Failure to mechanically
capture this important part of design is a weakness of all existing ADT systems.
Self-checking ADTs
We describe ADTs that check their implementations against specifications at run time, and give a
simple illustrative example.
2.1 Automatic Testing
The implementation side of our system is available "off the shelf." We use C ++ , but any language
that supports ADTs, such as Ada, Eiffel, Java, Smalltalk, etc. would do as well (we make no use
of the "inheritance" part of an object-oriented language). One component of our scheme is thus a
class implementation. (Whether the implementation is thought of as arising from an intuitive
set of requirements, or from a formal specification that is the second component of our scheme, is
immaterial to this description; however, we discuss the issue in section 5.2.)
The second component we require is a formal specification of the axiomatic variety. Here we do
not have so much leeway, because the specification form determines our ability to mechanically check
for specification properties like the consistency of a newly added operation, and plays an essential
role in efficiently checking for abstract equality when the specification serves as an implementation
oracle. We choose to use a rewrite system restricted so that the desirable properties of confluence
and termination can be largely obtained from the syntactic forms [4]. It would be possible to employ
more general specifications, but at the cost of using more powerful (and less efficient) theorem
provers. The limited goal of our scheme argues against this generality and loss of efficiency. We
have made the engineering decision to use a specification fitted to the role of automatic oracle.
The user of our system must supply one additional component, of central importance in our
scheme: a "representation" mapping 1 between the concrete data structures of C ++ instance
variables, and the abstractions of the specification. It is a major weakness of present ADT theory
that the representation mapping is nowhere explicit. The existing theory is framed so that the
implementation is correct if there exists an appropriate representation [22]. But in practice, the
implementor must have a representation in mind. That there is no way to formally record and
maintain this early, crucial design decision is a flaw in all existing ADT design methodologies.
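To make the idea concrete, here is a hedged C++ sketch of what an explicit representation mapping might look like for a set class in the style of Stroustrup's intset; the abstract-value type absset and its constructors are stand-ins for the direct implementation described later, and all names are illustrative.

    #include <algorithm>
    #include <vector>

    // Stand-in for an abstract value: the normal form of a nest of insertions,
    // i.e., a sorted, duplicate-free element list plus the set's parameters.
    struct absset {
        int maxSize, bound;
        std::vector<int> elems;
    };

    absset absEmpty(int maxSize, int bound) { return absset{maxSize, bound, {}}; }

    absset absInsert(int e, absset s) {  // keep the normal form sorted
        auto pos = std::lower_bound(s.elems.begin(), s.elems.end(), e);
        if (pos == s.elems.end() || *pos != e) s.elems.insert(pos, e);
        return s;
    }

    class intset {
        int cursor = 0;                  // number of stored elements
        int maxElems, upperBound;
        std::vector<int> x;              // concrete element storage
    public:
        intset(int m, int b) : maxElems(m), upperBound(b), x(m) {}
        // Representation mapping: rebuild the abstract value denoted by the
        // current concrete state as empty(...) followed by nested insertions.
        absset abstractValue() const {
            absset a = absEmpty(maxElems, upperBound);
            for (int i = 0; i < cursor; ++i) a = absInsert(x[i], a);
            return a;
        }
        // ... remaining member functions elided ...
    };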
Having written an axiomatic specification, a C ++ class, and an explicit representation mapping,
the user may now test the composite ADT using any unit-test technique. For example, a conventional
driver and code coverage tool could be used to ensure that the C ++ code has been adequately
tested according to (say) a dataflow criterion [52]. Or, tests could be generated to exercise traces
of class operations [34]. Whatever techniques are used, our system will serve as the automatic
test oracle that all existing testing systems lack. It determines the correctness of each operation
invoked, according to the specification. Alternately, the user might decide to test the ADT "in
place," by writing application code using it, and conducting a system test (perhaps using randomly
generated inputs according to an expected operational profile [47]) on the application program with
embedded ADT. During this test, the ADT cannot fail without detection.
2.2 Example: A Small Integer Set
To illustrate our ideas, we use the class "small set of integer" (first used by Hoare [33] to discuss
the theory of data abstraction). The signature for this ADT is shown in figure 1 (from Stroustrup
[54]).
The signature diagram captures the complete syntax of ADT usage. For example, in figure 1
empty takes two arguments, an elem and a nat, and the operations for these types are also shown
in the diagram.
2.2.1 The Specification
The semantics of an ADT can be specified with a set of equations (also called axioms) expressed
in the names of its operations. For example, the intuitive axioms for the insertion operation on a
set are:

    insert(E, insert(F, S)) = insert(F, insert(E, S))
    insert(E, insert(E, S)) = insert(E, S)
These equations determine which combinations of operations are equivalent, and along with an
assumption about combinations not handled by the equations, determine exactly which objects
This name was used by Hoare in his foundational paper [33]. Perhaps "abstraction mapping" is the more common
name, which also better expresses the direction.
[Signature diagram showing sorts set, nat, elem, and bool; operations empty, insert, member,
cardinality, and maxsize; and the constants and operations true, false, and succ.]

Figure 1: Signature of a generic, bounded ADT set.
(expressed as ground terms using the operations) constitute the set which forms the ADT. The
most common assumption is the initial one [26]: that any objects which cannot be proved equal
using the equations are distinct. A good deal of work has been done on algebraic specification; see
[12].
When equations are viewed as rewrite rules, the proofs of equivalences are simplified. In this view
care must be taken that a rewrite system is a correct implementation of an algebraic specification.
For this purpose, it suffices to consider ground-convergent (i.e., confluent and terminating) systems.
The above equations will not do as rewrite rules, because the first rule can be applied infinitely often.
A suitable rewrite system can be obtained from an equational specification by completion [41],
although the procedure is not guaranteed to terminate. Alternately, the form of the equations can
be suitably restricted [4], which is the approach chosen here.
Furthermore, most implementations of sets will impose bounds on both set size and the values
of the set's elements, as the type intset given by Stroustrup [54, §5.3.2] does. The simple equations
above do not describe these "practical" set ADTs.
We specify the type intset, from our understanding of the code in [54], using a notation similar
to those of several other "algebraic" specifications, such as Act One [20], Asf [11] or Larch [29],
and explain only those parts needed to understand the example. User-defined sorts are specified by
an enumeration of constructors each with its arity and parameter types, followed by axioms. An
axiom is a rewrite rule of the form

    l -> r <= c,

where l and r are respectively the left and right sides of the rule and c is an optional condition [39], a
guard for the application of the rule. The symbol "?" on the right side of a rule denotes an exceptional
result. Its semantics should be formalized within the framework of order sorted algebras [25].
For our more modest purposes, "?" denotes a computation that must be aborted. Following Prolog
conventions [15] the identifier of a variable begins with an upper case letter and the underscore
symbol denotes an anonymous variable.
The ADT intset of [54] is specified as follows:
intset
constructor
maxsize and element upper bound
element to set
axiom
!= not member(E,insert(F,S))
and cardinality(insert(F,S)) ?= maxsize(insert(F,S))
!= not member(E,insert(F,S))
and
The operations member, cardinality, and maxsize will be axiomatized shortly. We rely on the
reader's intuition of these concepts to explain the above axioms. The style used in these axioms
handles exceptional cases with a "?" axiom, guarded by a constraint defining the exceptional
condition. Thus in the empty axiom the exception occurs only if the parameters are inconsistent.
Similarly, the first axiom for insert handles the error case in which an attempt is made to insert
an element that violates the upper-bound restriction; and, the third axiom for insert handles an
attempt to insert a new element into a set that is already at maximum size. The second and fourth
insert axioms establish that the normal form for a nest of insertions is in element order, without
duplicates.
The last three axioms of insert create overlays, i.e., critical pairs overlapping at the root. We
require the conditions of overlay axioms to be mutually exclusive, so that the overlays created are
vacuously joinable, and consequently [18] the system is confluent.
We make specifications grow incrementally by adopting the "stepwise specification by extension"
approach [20]. Each increment adds new operations to a specification. The new specification is a
complete and consistent extension of the old one. We adopt two design strategies [4] to guarantee
completeness and consistency, as follow:
The binary choice strategy generates a set of complete and mutually exclusive arguments
for an operation. Once we have a left side of a rule we define a set of right sides such that the
set of conditions associated with the right sides are mutually exclusive.
The recursive reduction strategy uses a mechanism similar to primitive recursion, but more
expressive, for defining the right side of a rule in a way which ensures termination. The symbol
"!" in the right side of a rule stands for the term obtained from the left side by replacing,
roughly speaking, recursive constructors with their recursive arguments. For example, "!" in
the axioms of cardinality below stands for cardinality(S).
The promised axioms for cardinality, maxsize and member are as follows:

operation cardinality(intset) -> integer
axiom
    cardinality(empty(_, _)) -> 0
    cardinality(insert(E, S)) -> !     <= member(E, S)
    cardinality(insert(E, S)) -> ! + 1 <= not member(E, S)

operation maxsize(intset) -> integer
axiom
    maxsize(empty(M, _)) -> M
    maxsize(insert(_, S)) -> !

operation member(integer, intset) -> boolean
axiom
    member(_, empty(_, _)) -> false
    member(E, insert(F, S)) -> true <= E = F
    member(E, insert(F, S)) -> !    <= E != F
This completes the example specification of intset.
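For example, a term rewrites to its normal form step by step; in this hypothetical trace the
comments name the fact used at each step:

cardinality(insert(1, insert(5, empty(6, 100))))
    -> cardinality(insert(5, empty(6, 100))) + 1     % 1 is not a member of the rest
    -> (cardinality(empty(6, 100)) + 1) + 1          % 5 is not a member of the rest
    -> (0 + 1) + 1                                   % the empty set has cardinality 0
    -> 2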
2.2.2 Implementations of the Specification
We consider three implementations of the above specification. They are referred to as a by-hand
implementation, a direct implementation, and a self-checking implementation. A by-hand implementation
is C ++ code written by a programmer to provide the functionality expressed by the
specification. This code is naturally structured as a C ++ class in which operations are implemented
as class member functions. A by-hand implementation of intset appears as the first example in
Stroustrup's text [54, x5.3.2]. The direct implementation [30] is C ++ code automatically generated
from the specification by representing instances of abstract data types as terms, and manipulating
those terms according to the rewrite rules. The self-checking implementation is the union of the
by-hand implementation and the direct implementation with some additional C ++ code to check
their mutual agreement.
We describe the self-checking implementation first, even though it uses the direct implementa-
tion. This presentation order motivates the need for the direct implementation before its considerable
detail is given.
The Self-checking Implementation
As described in the following section, the direct implementation provides the mechanism, in C ++ ,
for computing normal-form terms corresponding to any sequence of specification operations. The
by-hand implementation provides a similar mechanism for computing a result using any sequence of
its member-function calls. These two computations correspond respectively to the upper (abstract)
and lower (concrete) mappings in diagrams such as that displayed in figure 2.
In the abstract world, ∈ is the binary relation for set membership. In the concrete world,
member is an operation that transforms the values of state variables. If starting at the lower left
of the diagram and passing to the upper right by the two possible paths always yields the same
result, we say that the diagram commutes. The concrete implementation in a commuting diagram
is by definition correct according to the abstract specification there. In figure 2, suppose that the
boolean result returned by member is m(x; S) where x is an integer value and S is an intset
value. (That is, m is the function computed by member.) Then the diagram commutes iff
x ∈ S, computed in the abstract world, agrees with m(x; S), computed in the concrete world,
for every integer x and set S.

Figure 2: Commuting diagram for the member operation of the ADT set. (Abstract level: the
map elem × set → bool given by ∈; concrete level: the State transformed by member; the
vertical maps are the representation function.)
To check the by-hand result against the direct result only requires that code be available for
the representation function, to complete a diagram such as figure 2. The self-checking implementation
comprises the C ++ code of both by-hand and direct implementations, plus a representation
function, and appropriate calls connecting them. The locus of control is placed in the by-hand
implementation. When its member functions are invoked, corresponding normal-form terms are
requested from the direct implementation. The comparison of results, however, takes place in the
abstract world; what is actually compared are the normal forms computed there.
The self-checking implementation for intset illustrates this structure. A self-checking class has
two additional private entities declared first, in the example of type absset, which is the type mark
for a set in the direct implementation. The additional variable abstract contains values of sets from
the direct implementation. The additional function concr2abstr is the representation mapping; it
takes as input parameters the instance variables of intset and returns the corresponding absset
instance.
// Declaration of the self-checking intset class.
class intset {
    absset abstract;          // abstract version of this class
    absset concr2abstr();     // representation function
    // Below this line the class is identical to Stroustrup, p. 146ff
    int cursize, maxsize;
    int *x;
public:
    intset(int m, int n);     // at most m ints in 1..n
    // ... (most of the code omitted)
};
Member functions of the self-checking implementation differ from the corresponding ones in the
by-hand implementation only by the addition of two statements just before each function return.
2 It is more precise to think of a state σ as a mapping from instance-variable names to their values. Then in
any diagram with abstract operation F and concrete operation f , if the variables of the state are x1, x2, ..., xn , the
diagram commutes iff F(R(x1, x2, ..., xn)) = R(f(x1, x2, ..., xn)), where R is the representation function.
For example, the self-checking member function implementing the specification operation empty
follows:
intset::intset(int m, int n)      // at most m ints in 1..n
{
    cursize = 0;
    maxsize = m;
    x = new int[maxsize];
    // Additional statements for self-checking
    abstract = empty(m, n);       // abstract value
    verify;                       // check mutual agreement
}
The direct-implementation function empty is called and its result-a normal form encoded in the
data structures of the direct implementation-saved in the added variable abstract. (The code
for empty appears in the following section.)
The macro verify performs an equality test between the value stored in abstract and that
computed by concr2abstr, which is also a normal-form value. This equality test is particularly
simple because in the direct implementation equality means identity of normal forms. The verify
macro includes an error report if the two values differ. Its code follows.
#define verify \
    if (!equal(abstract, concr2abstr())) \
        cerr << form("Direct differs from by-hand at line %d of `%s'.\n", \
                     __LINE__, __FILE__)
The last significant piece of code of the self-checking implementation is the representation
function. The mapping is straightforward, starting with an empty abstract set and adding elements
from the concrete version one at a time to calculate the corresponding abstract set.
absset intset::concr2abstr()
{
    absset h = empty(maxsize, MAXINT);   // element upper bound not implemented!
    for (int i = 0; i < cursize; i++)
        h = insert(x[i], h);
    return h;
}
It was only when writing this function that the programmer noticed that the by-hand implementation
[54] pays no attention to the upper bound on element size, so MAXINT must be used for this
parameter. The implications of this omission are further discussed in 2.2.3.
The Direct Implementation
In the C ++ direct implementation a user-defined sort has a data structure that is a pointer to a
discriminated union. The discriminant values are tokens whose values stand for constructors of
the sort. Each arm of the union is a C ++ struct whose components represent the arguments of
the constructor associated with the discriminant. Dynamic polymorphism would be a more elegant
alternative, but less portable to other languages. In the example:
typedef int elem;                     // kind of generic
enum tag { EMPTY, INSERT };           // tokens for the discriminants
struct set_node {
    tag t;                            // constructor discriminant
    union {
        struct {
            int m;
            int r;
        } _0;                         // arm associated to "empty"
        struct {
            elem e;
            set_node* s;
        } _1;                         // arm associated to "insert"
    };
    set_node(int m, int r)        { t = EMPTY;  _0.m = m; _0.r = r; }
    set_node(elem e, set_node* s) { t = INSERT; _1.e = e; _1.s = s; }
};
typedef set_node* absset;             // a sort instance is a pointer to a term

// Simple macro definitions to improve readability
#define tag_of(w)     (w->t)
#define maxsize_of(w) (w->_0.m)
#define range_of(w)   (w->_0.r)
#define elem_of(w)    (w->_1.e)
#define set_of(w)     (w->_1.s)
// Declare a function for each signature symbol
extern absset empty(int m, int r);
extern absset insert(elem e, absset s);
extern int cardinality(absset s);
extern int maxsize(absset s);
extern bool member(elem e, absset s);
// equality-test function
extern bool equal(absset s1, absset s2); // normal-form (syntactic) equality
Constructors and operations are implemented by functions without side effects. The execution
of a function implementing a constructor dynamically allocates its associated union and returns a
pointer to it. Each function implementing a non-constructor consists of a nest of cases whose labels
correspond to the patterns in the rewrite rules. Rule conditions are implemented by conditional
statements. Since both the patterns and the conditions are mutually exclusive the order of execution
may affect the efficiency, but not the result, of a computation. The completeness of the patterns
implies that the execution of a function implementing an operation is bound to find a matching
rule and eventually to execute a call which represents the rule right side. Except for the case of "?"
the execution of this call generates a finite tree of calls whose leaves are always calls to constructor
functions and consequently an abstract representation of a sort instance is always returned. We
translate the condition of a rule with "?" as the right side by means of a macro exception very
similar to the macro assert provided by the GNU C ++ compiler, which we use in our project.
#define exception(ex) \
    if (ex) { \
        cerr << "Exception `" << #ex \
             << form("' at line %d of `%s'.\n", __LINE__, __FILE__); \
        abort();    /* "?" denotes a computation that must be aborted */ \
    }
Examples of the direct implementation of constructor functions and an operation function follow.
absset empty(int m, int r)
{
    exception(m < 1 || r < m);            // inconsistent parameters
    return new set_node(m, r);
}
absset insert(elem e, absset s)
{
    absset h;
    switch (tag_of(s)) {
    case EMPTY:
        exception(e > range_of(s));       // element above the upper bound
        h = new set_node(e, s); break;
    case INSERT:
        if (e == elem_of(s)) h = s;       // duplicate: already present
        else {
            exception(!member(e, s) && cardinality(s) >= maxsize(s));   // set full
            h = (e > elem_of(s)) ? insert(elem_of(s), insert(e, set_of(s)))  // keep order
                                 : new set_node(e, s);
        }
        break;
    }
    return h;
}
int cardinality(absset s)
{
    switch (tag_of(s)) {
    case EMPTY:  return 0;
    case INSERT: if (member(elem_of(s), set_of(s))) return cardinality(set_of(s));
                 return cardinality(set_of(s)) + 1;
    }
}
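The remaining operations would be compiled the same way; the following renderings are ours,
a sketch consistent with the axioms of section 2.2.1 rather than code from [54]:

int maxsize(absset s)
{
    switch (tag_of(s)) {
    case EMPTY:  return maxsize_of(s);        // maxsize(empty(M,_)) -> M
    case INSERT: return maxsize(set_of(s));   // maxsize(insert(_,S)) -> !
    }
}
bool member(elem e, absset s)
{
    switch (tag_of(s)) {
    case EMPTY:  return false;                        // member(_,empty(_,_)) -> false
    case INSERT: if (e == elem_of(s)) return true;    // member(E,insert(F,S)) -> true <= E = F
                 return member(e, set_of(s));         // otherwise "!", i.e., member(E,S)
    }
}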
2.2.3 Executing the Small Integer Set
The execution of the self-checking implementation of intset raises some interesting issues about
the by-hand implementation in [54]. Although the documentation in the code seems to require an
upper bound for the value of an element, this constraint is not enforced in the by-hand implementa-
tion. The self-checking implementation detects the problem during testing and issues the following
warning:
Exception `e > range_of(s)' at line 41 of `absset.C'.

The message "e > range_of(s)" is the textual code appearing in an exception macro in the
direct implementation of insert. As the comment there indicates, the exception implements a
violated condition in the axiom:

    insert(E, empty(M, R)) -> ? <= E > R
This problem is undetected during the same test of the code in [54].
The direct implementation includes the operation cardinality that has no corresponding member
function in the by-hand implementation of [54]. A naive programmer might add this observer
function by returning the value of cursize, the counter of elements stored in the array that represents
a set. However, the self-checking implementation would report a failure of such a cardinality
on any test where the by-hand implementation creates a set with insert of duplicate elements.
What the naive programmer missed is that the by-hand implementation in fact stores duplicates
in its array. The specification we wrote for cardinality does not have this unexpected behavior,
and would thus catch the mistake in the naive cardinality implementation.
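The naive member function would be the following one-liner (our illustration, not code
from [54]):

// Reports the array counter.  Wrong, because the by-hand implementation
// stores duplicates, so cursize can exceed the set's true cardinality.
int intset::cardinality() { return cursize; }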
The small-integer-set example illustrates the benefits gained from a formal specification. Direct
implementation of the specification provides a careful check on a by-hand implementation, allowing
self-checking of test values. Of course, it requires additional effort to write the specification; it can
be argued, however, that without a formal specification, correct code is impossible to write. We
have seen two examples of this in a well-understood textbook class.
2.2.4 An example of self-checking
An example will make clear the way in which the explicit representation function allows the results
computed by the by-hand implementation to be checked against those specified by the direct im-
plementation. Consider a previously created intset containing elements 1 and 5. Perhaps this set
was created with the C ++ code:
intset Ex(6,MAXINT);
Ex.insert(1);
Ex.insert(5);
Then it has already been checked that the state defined by the instance variables, including the
concrete array in which the first two elements are 1 and 5, properly correspond to the term
insert(1,insert(5,empty(6,MAXINT))),
which is the normal form assigned to this state by the representation function.
Now suppose that the element 2 is added to this set, perhaps by the C ++ call Ex.insert(2).
Figure 3: Commuting diagram for inserting 2 into the Ex intset. (The vertical maps are the
representation function concr2abstr.)

Figure 3 shows the particular case of the commuting diagram that checks this computation. At
the lower level are the instance variables that comprise the concrete state, with the initial value at
the left. The member function insert in the by-hand implementation transforms these variables
as shown when called with arguments Ex and 2. At the upper level are the corresponding abstract
values, transformed by rewriting in the direct implementation. The explicit representation mapping
concr2abstr() connects the two levels. From the concrete instance variables it constructs the
abstract values, and the comparison that shows the computation to be correct occurs when the
abstract value insert(1, insert(2, insert(5, empty(6,MAXINT))))
is obtained by the two paths around the diagram starting at the lower left and ending at the upper
right.
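In the abstract world this computation is a single rewriting step under the element-order
axiom of section 2.2.1 (a hypothetical trace):

insert(2, insert(1, insert(5, empty(6, MAXINT))))
    -> insert(1, insert(2, insert(5, empty(6, MAXINT))))    % 2 > 1, so the two are reordered

The inner term insert(2, insert(5, ...)) is already in normal form, so rewriting stops.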
Of course, what is actually compared in the self-checking is not "abstract values," but bit
patterns of a computer state, a state created by the compiled C ++ program for the direct imple-
mentation. However, these states are so transparently like the structure of the abstract terms, as
words in a word algebra, that it is obvious that they properly correspond. It is impossible to do
better than this in any mechanical way. Mechanical comparisons must be done on computer states,
not the true abstractions that exist only in a mathematical universe, and the best we can do is to
make the states be simple, faithful mirrors of this universe.
3 A Proposed Automatic Testing System
The example of section 2.2 begins with two human-created objects: a specification and a by-hand
implementation. We constructed the self-checking implementation from these by adding a few lines
to the by-hand implementation, lines that call a direct implementation of the specification. We
now consider how to automate the construction of these additional elements in the self-checking
implementation. Mechanical creation of the self-checking implementation helps to justify the extra
effort needed to create independent specifications and by-hand implementations.
3.1 Automating the Direct Implementation
The direct implementation of the specification is nothing else than the implementation of a term
rewriting system. Implementations of this kind are numerous and often add extra features to
rewriting. For example, the Equational Interpreter [49] adds total laziness, Obj3 [27] adds a
sophisticated module system, and SbReve [2] adds a Knuth-Bendix completion procedure. These
are stand-alone systems; in contrast, our direct implementation must appear as a block of C ++ code
to be integrated with the by-hand implementation.
A data representation and the implementation of rewrite rules have been discussed in detail in
section 2.2.2, and it is not difficult to "compile" the appropriate C ++ code from the specification
using compiler-compiler techniques, such as those of the Unix system [38, 42] or some other envi-
ronment, e.g., Standard ML of New Jersey [6, 7]. The most difficult part of the compilation will be
the "semantic" processing to guarantee that the specification rewrite rules possess the termination
and confluence properties that make the direct implementation work. Wherever possible, we will
try to convert semantic properties to syntax properties so that they can be statically checked. For
example, by expressing rewrite rule conditions in an if . then . else form, the mutual exclusion
sufficient to make overlays joinable would be guaranteed by the syntax.
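For instance, the overlay axioms of insert might be compiled into a single guarded form
along these lines (notation ours, purely illustrative):

insert(E, insert(F, S)) ->
    if E = F then insert(F, S)
    else if not member(E, insert(F, S))
            and cardinality(insert(F, S)) >= maxsize(insert(F, S)) then ?
    else if E > F then insert(F, insert(E, S))
    else leave the term unchanged        % no rule applies: the term is in normal form

Because at most one branch can be selected, joinability of the overlays is a syntactic property.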
The "object-oriented" reader will have noticed that section 2.2.2 uses a functional-like style
instead of an object-oriented one. Initially we made each abstract object a C ++ class, but changed
because: (1) "functional" code is natural for this application, since abstract objects have no internal
state; and (2) the encapsulation protection offered by a C ++ class is wasted in this case, since the
direct-implementation code is created by a compiler and not for human use. On the other hand,
the modularity of C ++ can be used to good advantage.
3.2 Automating Calls on the Direct Implementation
The additional statements that must be added to the by-hand implementation are few, and present
no difficulties. One way is to write a preprocessor from C ++ into C ++ that effects their addition.
Using a C ++ grammar that omits most of the language's detail, with a compiler compiler that simply
copies most code through directly, is one way to write the preprocessor quickly [9]. A second idea
takes advantage of the existence of the parser in an existing C ++ compiler. It is very easy to modify
the code generator to insert object code for the necessary calls [32]. These ideas converge if the
C ++ compiler is itself more of a preprocessor (into C, say) than a true compiler. In the easiest case,
such a preprocessor might be itself written using a compiler compiler.
There are a number of technical problems in modifying the by-hand implementation. For
example, the abstract and concrete worlds share built-in types like Boolean, and for operations
returning these types the representation function is identity. Thus the machinery of abstract
and verify is not needed, and the inserted calls take a simpler form. A slightly more difficult
problem arises for by-hand implementations that are not functional, such as insert. The usual
implementation uses update in place, as in [54], so the abstract operation has a different arity than
the concrete. Thus slightly different code is required:
void intset::insert(int t)
{
    // Code from Stroustrup, §5.3.2
    if (++cursize > maxsize) error("too many elements");
    int i = cursize - 1;
    x[i] = t;
    while (i > 0 && x[i-1] > x[i]) {
        int temp = x[i];                 // swap x[i] and x[i-1]
        x[i] = x[i-1];
        x[i-1] = temp;
        i--;
    }
    // Self-checking added
    abstract = ::insert(t, abstract);    // direct-implementation insert (different arity)
    verify;
}
3.3 The Representation Function
There remains only the representation function that maps between the concrete and abstract do-
mains, the function named concr2abstr in section 2.2.2. There seems to be no way that essential
parts of this function can be automated. The correspondence between concrete and abstract objects
is a primary design decision made early on in the by-hand implementation, and the designer
is not constrained in its choice. Furthermore, it is crucial to the proper working of the system
we propose that the representation correctly capture the link between concrete and abstract. To
take an extreme example, if the programmer codes a representation that maps all inputs to a (sort
overloaded) constant, then all the equality checks in the verify macro will trivially succeed, and
no errors can be caught.
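Such a degenerate representation might read (hypothetical code):

// Maps every concrete state to the same constant term, so the equality
// test in verify trivially succeeds and self-checking reports nothing.
absset intset::concr2abstr() { return empty(1, 1); }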
It can be argued that the extra programming required to code the representation function is a
blessing in disguise. Unless the programmer has a detailed and accurate idea of this function, it is
impossible to write correct functions that implement the specification's operations. What better
way to force this understanding than to insist that it be put into code? What better way to protect
against changes that are inconsistent with the representation than to make use of its code? (Both
of these issues arose in the example above, section 2.2.3.) There is even an answer to the possibility
that an incorrect representation function will trivialize self checking. Programmers are more likely
to err in the direction of misguided elaboration than toward trivial simplicity. The more baroque
a representation is, the less likely it is to conceal its faults; putting in too much will lead to our
system reporting ersatz failures, not to false success.
It has been suggested [44] that very often the representation function is structurally similar to
a routine that pretty-prints a class value from its internal representation to human-readable form.
This insight again underscores how easy it is to code the representation function, and how essential
its capture is.
3.4 System Overview
Figure 4 shows the self-checking implementation that would result from the example in section 2.2.
The direct implementation is invoked from the by-hand implementation by additional code that
computes a term (abstract) in the data structure of the direct implementation, then applies the
representation function concr2abstr to map the implementation state to a term, and compares
these (verify macro).
Figure 4: Construction of the self-checking implementation. (The figure assembles the pieces
discussed above: the specification of sort intset, the by-hand implementation class intset, and
the representation function concr2abstr are hand-generated; the direct implementation built on
struct set_node and the calls linking the pieces are automatically generated.)
4 Relation to Previous Work
Previous attempts to link formal specification to ADT implementation have taken a variety of
forms.
Proof systems. The correctness of data type representation is proved using diagrams such as that
presented in figure 2. The subroutine member tests the membership of an element in a set. A
program might represent each set as a fixed-size array of elements and a pointer to the last
element. Hoare [33] shows that the existence of a representation mapping R that makes the
diagram commutative is proof of the implementation correctness. This mapping is somewhat
"retrospective", since the concrete implementation originates from the abstract world which
is the formalization of intuitive concepts.
Executable specifications. A specification is a non-procedural description of a computation. In
an algebraic specification this description takes the form of a set of equations. When equations
are given an orientation, i.e., are transformed into rewrite rules, the resulting structure, called
a rewrite system [39], allows us to "compute." An elementary step of computation consists
in rewriting some subterm of an expression by means of a rule. A computation is a sequence
of elementary steps. Often two fundamental properties are required: termination, i.e., any
computation sequence ends up in some element which cannot be further rewritten [17], and
confluence, i.e., the choice of which term to rewrite in an expression does not affect the result
of the computation [37]. Rewrite systems with these properties are the model of computation
underlying programming languages such as O'Donnell's Equational Interpreter [49] and Obj3 [27].
Automatic programming. If ADT specifications are viewed as a very-high-level programming
language (and the executable nature of axiomatic specifications supports this view), then
there is no need to write an implementation at all. The specification when executed is the
implementation. Thus questions of correctness do not arise, and the only difficulty lies in
improving the efficiency of execution. Antoy et al. [3, 5] investigate specification translation
into a number of different languages. Volpano [55] proposes to "compile" the specification
into an imperative language like C. Using ideas from functional languages like ML, he is able
to effect this compilation, although in some cases the efficiency does not approach what would
be obtained in an implementation by hand.
Testing systems. There appear to be three distinct threads in the attempt to use ADTs with
tests.
First, any proof technique can be used to instrument implementation code with run-time
assertions, which check test instances of proof assertions that could not be established by the
theorem prover. The GYPSY system [1] uses this technique.
Second, ADT specifications can be used to formalize the generation of specification-based tests
for an ADT implementation. Gerhart [24] describes a logic-programming approach to generate
test points according to any scheme the tester imposes. In a slightly different approach, an
ESPRIT project automatically generates tests based on traces (operation sequences) without
direction by the tester [23, 13, 14].
Third, the daists system [21] attempted to check consistency of an implementation and
algebraic ADT specification, by executing the concrete code corresponding to the two sides
of a specification axiom, and comparing the results with an implementation-supplied equality
function.
Anna [45] specifications for Ada are intended to cut across these categories, but work has progressed
less far than in the more specialized projects cited above.
Our approach might be described in these terms as a combination of a proof- and a testing
system. In contrast to the executable specification approach, we consider both formal specification
and independently devised implementation. (Perhaps both are derived from a common intuitive
description.) In contrast to the automatic programming approach, the implementation code is not
guaranteed to be correct by transformational origin; indeed, the implementation may be full of the
tricks that are the death of formal proof (but essential in practice for efficiency).
The view that specifications are their own implementation is attractive; for one thing, it cuts
the work of specification and implementation in half. However, its drawback is that it merely
moves the correctness problem up a level. However carefully a specification is devised, it may fail
to capture the intuitive ideas of problem solution that led to it, and that intuition necessarily exists
on a plane inaccessible to formal methods. Hence it may be wise to duplicate the work of problem
solution in very different specifications and implementations, both drawing on the intuitive problem
model. One may then hope that where the model is unclear or faulty, the independent attempts to
capture it formally will differ, and precise correctness methods will detect the difficulty and lead
to correction of the model.
Unlike those who use proof systems, we do not attempt to verify the implementation, but only
to check it for particular cases. (The return for this drastic restriction is a large improvement in the
success rate of the process, and a lowering of the skill level needed to use it.) Ours is a testing system
using some proof techniques, whose nearest precursor is daists [21]. Unlike daists, however, our
test for equality of ADT values is conducted in the abstract domain (hence the proof techniques)
rather than in the concrete domain. We thus avoid both practical and theoretical deficiencies that
could falsify a daists success, yet do not pay an efficiency penalty, because we use rewriting for the
abstract proofs, with an explicit mapping from the concrete to abstract domain. This mapping and
its expression as part of the implementation are our main contribution. The extra programming
required for the representation function corresponds to the need for the daists programmer to
explicitly code a concrete equality function, but is easier and more natural to satisfy.
Our system can also be viewed as checking runtime behavior using code assertions. Unlike
ad hoc systems [46] or proof-based systems such as GYPSY [1], however, these assertions are not
written by the user, even in conjunction with a theorem prover. Rather, they are automatically
generated, and are guaranteed to detect deviation from specifications.
We do not generate test data, nor judge the adequacy of test data, but any scheme that does
generate tests [35, 36] or measure test quality [16, 48] can be used with our system supplying
the test oracle, a facility that all testing systems presently lack. Frankl and Doong [19] describe
a system that uses rewriting to obtain one (abstract) test case from another, so that the results
of an implementation can be compared on these cases. Sankar [53] uses a much more powerful
rewriting theorem prover to attempt to prove abstract equality between all terms an implementation
generates. Antoy and Gannon [4] use rewrite systems similar to ours to prove the correctness of
loops and subtypes with the help of a theorem prover. All these systems are less straightforward
than ours, because they lack the explicit representation function and/or the specification restrictions
needed to guarantee rewriting termination.
Compared to automatic proof schemes for ADTs, and to automatic programming of efficient programs
from formal specifications, the goal for our testing system is modest. We imagine no more
than an automatically generated, perfect set of run-time assertions, which make it impossible for a
by-hand implementation to silently disagree with the specification. We can attain the limited goal
where more ambitious ones present formidable problems, but is it worthwhile? In this section we
try to answer that question in the affirmative.
5.1 The Need for Test Oracles
The testing literature almost universally assumes that test output is examined for correctness;
and, it almost universally fails to say how this can be done. Furthermore, research examples [51]
and empirical studies [10] alike show that it is common for testers to have failures in their hands,
yet ignore them. Thus the "effective oracle problem"-the difficulty of mechanically judging if
a program's output meets specifications-is an important one. It assumes extra importance for
random testing. Recent work suggests that true random testing based on a valid operational
profile is essential, and that confidence in such tests requires a vast number of test points [31]. The
adequacy criteria in widespread use require hundreds of points; adequate random testing requires
millions. Such tests are flatly impractical without an effective oracle.
Given the oracle, random testing is doubly attractive, however. Not only is it theoretically
valid, able to provide a true estimate of reliability, but it approximates the ideal of a "completely
automatic" test. The random inputs can be selected mechanically, and with a means of mechanically
examining outputs, a test suite can be run without the need for human intervention.
5.2 Multi-version Specification/Programming
When a specification is viewed as a program in a very high level language, yet no general algorithm
exists for compiling that language into efficient code, there is still a place for by-hand implementa-
tion. In this view, the development process is directed. First comes a requirements phase in which
the developers, in communication with the end users, attempt to create a formal specification that
captures the necessary intuitive problem solution. In this process the prototyping aspect (see section
5.3) of the specification formalism is of great importance. Next, the specification is efficiently
implemented, automatically where possible, by hand otherwise. Formal methods are used to show
that this implementation is correct. In practice, we believe that there will always be a need for
by-hand implementations, and that general methods of proof will always need to be supplemented
by tests. The system we have proposed can automate the testing process efficiently.
One can argue that when development proceeds from requirements to formal specification to
by-hand implementation, the declarative form of specification is not the best. Rather, a form of
specification much closer to the ultimate imperative implementation language is called for [56]. The
advantages are twofold: first, such procedural specifications are easier to write; and second, many
of the detailed problems of an imperative-program solution must be addressed in the prototype, so
that the by-hand implementation is easier, and less prone to introduce subtle disagreements with
the specification.
However, there is a rather different view of specification/implementation in program develop-
ment. In this view, both specification and implementation are imperfect reflections of intuitive
requirements for problem solution. This view is particularly appropriate in safety-critical applica-
tions. In an attempt to provide software fault tolerance, the technique of multi-version programming
(mvp) has been suggested [8]. However, it has been observed [40] that so-called "common-mode
failures" are more frequent than might be expected-working from the same (informal) specifica-
tion, independent programming teams make mistakes that lead to coincident wrong results. The
proposed solution is design diversity, that is, programs that differ so radically that they are truly
independent. A recent study [43] casts doubt on the whole idea of mvp, and in contrast suggests
that internal self-checking is more valuable, particularly when the checking involves the details of
internal program states.
The system we propose fits the needs of safety-critical applications very well. It is the ultimate in
self-checking code, and the checks are applied to internal data-structure states, through the explicit
representation function that maps those states into the direct implementation. At the same time,
because the direct implementation is executed, a self-checking implementation can be viewed as
a two-version programming package with ultimate design diversity. The declarative nature of the
axiomatic specification, and its direct execution by rewriting, should make common-mode failure
with a conventional by-hand implementation unlikely.
5.3 Rapid Prototyping
Previous systems implementing specifications directly, such as Obj3 [27], have been designed so
that specifications can be executed as prototypes, to allow potential users to interact with software
before a complete development effort freezes the wrong design. Our direct implementation adds a
new dimension to this idea. The by-hand implementation and the direct one implement the same
specification completely and independently. In our software development approach we use the
former for production, and the latter both for prototyping and for testing. Our two implementations
coexist in the same environment, that of the final product. In earlier systems prototypes are
confined to unusual software and/or hardware platforms (e.g., Obj3 lives inside Common Lisp).
Our prototype and production modules interact in very similar ways with the rest of the system.
From an external point of view, the only difference between the two versions of an operation is
that the direct implementation is side-effect free, while the by-hand implementation, for efficiency
reasons, might not be. This gap can be filled by a trivial interface limited to renaming operations
and rearranging parameters and modes of function declarations.
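Such an interface would amount to one-line wrappers of roughly this kind (a hypothetical
sketch, using the names of section 2.2):

// Adapt the side-effect-free direct operations to the imperative calling
// convention of the by-hand class; the term held in abstract is the state.
void intset::insert(int t) { abstract = ::insert(t, abstract); }
bool intset::member(int t) { return ::member(t, abstract); }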
6 Summary and Future Work
We have proposed a modest testing system for modules using an algebraic specification executable
as rewrite rules. The programmer must write a specification, C ++ code to implement it, and a
representation function relating implementation data structures to abstract terms. From these three
elements a self-checking implementation can be constructed automatically, in which the specification
serves as a test oracle. The self-checking implementation can be viewed as a vehicle for testing only,
or as a special kind of two-version programming system with exceptional design diversity.
We are pursuing two quite different goals for the future. First, we are investigating more expressive
languages for the formal specification component of our approach and by-hand implementation
languages, such as Java, simpler than C ++ . Second, we want to use these ideas in a practical set-
ting, to learn more about the difficulty of writing specifications, and the value of a test oracle. The
ideal testbed is an industrial user of object-oriented design with a central group responsible for
developing highly reliable support software that other developers use. In such a group, it should
be worthwhile putting the extra effort into specification, in return for better testing and reliability.
--R
A language for specification and implementation of verifiable programs.
A term rewriting laboratory with (AC-)unfailing completion
Using term rewriting systems to verify software.
A lexical analyzer generator for Standard ML.
ML-Yacc, version 2.0.
Fault tolerance by design diversity: concepts and experiments.
Prototype testing tools.
Comparing the effectiveness of software testing strategies.
Algebraic Specification.
Algebraic System Specification and Development.
Application of prolog to test sets generation for algebraic specifications.
Test data generation using a Prolog with constraints.
Programming in Prolog.
A formal notion of program-based test data adequacy
Termination. In RTA'85
Confluence of conditional term rewrite systems.
Case studies on testing object-oriented programs
Fundamentals of Algebraic Specification 1.
Data abstraction implementation
Theory of modules.
Generation of test data from algebraic specifications.
Test generation method using prolog.
Operational semantics of order-sorted algebras
An initial algebra approach to the specification
Introducing Obj3.
The algebraic specification of abstract data types.
The Larch family of specification languages.
The design of data type specifications.
Partition testing does not inspire confidence.
Testing programs with the aid of a compiler.
Proof of correctness of data representations.
Hardware testing and software ICs.
Module test case generation.
Methodology for the generation of program test data.
Confluent reductions: Abstract properties and applications to term-rewriting sys- tems
Yacc: yet another compiler compiler.
rewriting systems.
An experimental evaluation of the assumption of independence in multi-version programming
Simple word problems in universal algebras.
The use of self checks and voting in software detection: An empirical study.
Personal communication.
Programming with Specifications: An Introduction to ANNA
Software Reliability: Measurement
A comparison of some structural testing strategies.
Equational Logic as a Programming Language.
On the criteria to be used in decomposing systems into modules.
On the automated generation of program test data.
Selecting software test data using data flow information.
Software templates.
The operational versus the conventional approach to software development.
331607 | Parallel RAMs with owned global memory and deterministic context-free language recognition. | We identify and study a natural and frequently occurring subclass of Concurrent Read, Exclusive Write Parallel Random Access Machines (CREW-PRAMs). Called Concurrent Read, Owner Write, or CROW-PRAMS, these are machines in which each global memory location is assigned a unique owner processor, which is the only processor allowed to write into it. Considering the difficulties that would be involved in physically realizinga full CREW-PRAM model and demonstrate its stability under several definitional changes. Second, we precisely characterize the power of the CROW-PRAM by showing that the class of languages recognizable by it in time O(log n) (and implicity with a polynomial number of processors) is exactly the class LOGDCFL of languages log space reducible to deterministic context-free languages. Third, using the same basic machinery, we show that the recognition problem for deterministic context-free languages can be solved quickly on a deterministic auxilliary pushdown automation having random access to its input tape, a log n space work tape, and pushdown store of small maximum height. For example, time O(n1 + &egr;) is achievable with pushdown height O(log2 n). These result extend and unify work of von Braunmhl, Cook, Mehlhorn, and Verbeek, Klein and Reif; and Rytter. | under several definitional changes. Second, we precisely characterize
the power of the CROW-PRAM by showing that the class of languages
recognizable by it in time O(log n) is exactly the class LOGDCFL of
languages log space reducible to deterministic context free languages.
Third, using the same basic machinery, we show that the recognition
problem for deterministic context-free languages can be solved in time
O(n^{1+ε}/S(n)) for any ε > 0 and any log² n ≤ S(n) ≤ n on a deterministic
auxiliary pushdown automaton having a log n space work tape,
pushdown store of maximum height S(n), and random access to its
input tape. These results extend and unify work of von Braunmühl,
Cook, Mehlhorn, and Verbeek; Klein and Reif; and Rytter.
1 Introduction and Related Work
There is now a fairly large body of literature on parallel random access machine
(PRAM) models and algorithms. There are nearly as many definitions
of this model as there are papers on the subject. All agree on the general
features of such models - there is a collection of more or less ordinary sequential
processors with private, local memories, that all have access to a
shared global memory. The model is synchronous - in each time unit, each
processor executes one instruction. There is much more diversity regarding
other features of the model. For example, there are differences as to whether
the model has single- or multiple-instruction streams, how many processors
there are, how they are numbered, how they are activated, what instruction
set they have, what input convention is used, and how simultaneous read
or write requests to a single global storage location are arbitrated. Most of
these variations make little or no difference in the power of the model.
Two features seem to have a substantial impact on the power of the
model. One is uniformity. In general, we only consider uniform models in
this paper, i.e., ones where a single program suffices for all input lengths,
and where a single processor is initially active, creating other processors
as desired. The second sensitive feature is arbitration of memory access
conflicts. Two main variants have been most intensively studied. Following
the nomenclature introduced by Vishkin [37], the CRCW (Concurrent-Read,
Concurrent-Write) PRAM allows memory access conflicts. All processors
reading a given location in a given step receive its value. Among all processors
writing to a given location in a given step, one is allowed to succeed, e.g.,
the one with the lowest processor number. (Other resolution rules for write
conflicts have been proposed. All are known to be equivalent in power up
to constant factors in running time, and polynomial factors in number of
processors and global memory size, although the models are separated if
processors and memory are more tightly constrained.)
In the CREW (Concurrent-Read, Exclusive-Write) model, concurrent
reads are allowed, as above, but concurrent writes are not. CREW algorithms
must arrange that no two processors attempt to write into the same
global memory location at the same time.
In this paper we introduce a third variant, argue that it is a more "natu-
ral" model than the CREW PRAM, and give a surprising characterization of
its power. There are several reasons to study this restriction of the CREW-
PRAM. The CREW-PRAM model has been criticized for being too powerful
to serve as a realistic model of physically realizable parallel machines due to
its "unbounded fanin." Anderson and Snyder [1] point out that the two-stage
programming process of first using the CREW-PRAM model to develop a
straightforward fully parallel algorithm (e.g., for the "or" of n bits), and then
emulating this algorithm on a physically realizable network, could lead to a
sub-optimal algorithm (Θ((log n)²) for the above example). Nevertheless the
CREW-PRAM has arguably been the most popular theoretical model for the
design, specification and analysis of parallel algorithms, due principally to
the simplicity and usefulness of the global memory model for programmers.
It is useful therefore to consider the power of the more restricted CROW-
PRAM model, in order to understand its feasibility as a model for parallel
programming. As noted above, most CREW-PRAM algorithms are in fact
CROW-PRAM algorithms, or can be easily modified to be so.
How can a CREW-PRAM algorithm ensure that it is obeying the
Exclusive-Write restriction? With two exceptions discussed below, all
CREW-PRAM algorithms we have considered achieve, or can be easily modified
to achieve, write exclusion by the following simple stratagem: each
global memory location is "owned" by one processor, which is the only processor
ever allowed to write into that cell. Further, the mapping between
global memory addresses and processor numbers is easy to compute, so that
each processor has no difficulty in determining which cells it owns. For
example, processor p might own the block of k consecutive cells beginning at
global memory address kp. We call this the Owner-Write restriction, and call
PRAMs that obey this restriction Concurrent-Read, Owner-Write PRAMs,
or CROW-PRAMs. The ownership restriction seems to be a very natural
framework in which to design exclusive-write algorithms. Similar but not
identical notions of "ownership" have appeared in the earlier lower bound
work of Cook, et al. [7], and have also proven useful in practice for certain
cache coherence protocols. (See, e.g., Archibald and Baer [2].) In many current
architectures of parallel systems, the machines provide a global memory
programming model, implemented using physical hardware in which every
memory cell is local to some processor. Caching or other techniques are used
to ameliorate the cost of access to non-local memory. If non-local writes are
prohibited, the necessary cache coherence algorithms are simplified. In fact,
a positive solution to the CROW versus CREW problem discussed in Section
3 would presumably suggest an interesting new approach to the cache
coherence problem.
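To make the block-ownership example above concrete: the owner map can be as
simple as the following sketch (k is a fixed constant; the code and names are
ours, purely illustrative).

// Processor p owns cells kp, kp+1, ..., kp+k-1, so cell i is owned by
// processor floor(i/k).  The map is oblivious (independent of the input)
// and trivially computable in logarithmic space.
const long k = 4;                              // cells per processor (illustrative)
long owner(long i, long n) { return i / k; }   // the input length n is unused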
We give a precise definition of the CROW-PRAM model in Section 2
below. The main goal of this paper is to investigate the power of the CROW-
PRAM. Unexpectedly, this question turns out to be intimately related to
the complexity of deterministic context-free language (DCFL) recognition.
The recognition problem for a deterministic context-free language L is
to decide, given a word x, whether x ∈ L. The sequential complexity of
this problem has been well-studied, and there are many practical sequential
algorithms for solving it in space and time O(n). The small-space and parallel
time complexities of the problem are less well-understood. Two main results
in these areas are by von Braunmühl, Cook, Mehlhorn, and Verbeek [5, 38],
and by Klein and Reif [20].
Cook [5] presents a sequential algorithm for the DCFL recognition problem
that runs in polynomial time on a Turing machine using only polynomial
in log n space. This result has been improved by von Braunmühl et al. [38],
who give Turing machine algorithms with optimal time-space product for
any space bound in the range from (log n) 2 to n.
Building somewhat on the ideas of [5, 38], Klein and Reif [20] present
an O(log n) time CREW-PRAM algorithm for DCFL recognition. (It is
known that results of Stockmeyer and Vishkin [35] can be combined with the
algorithm of Ruzzo [32] to yield an O(log n) time algorithm for general CFL
recognition, but only on the more powerful CRCW-PRAM model.)
Our main result is the following characterization of CROW-PRAMs.
Theorem 1 A language L is accepted by a CROW-PRAM in O(log n)
time if and only if L is log-space reducible to a DCFL.
The class LOGDCFL of languages log-space reducible to DCFLs was first
defined and studied by Sudborough [36], who showed that it is equal to the
class of languages recognizable in polynomial time by log-space bounded deterministic
auxiliary pushdown automata (DauxPDAs), defined by Cook [4].
Our result appears to be the first to precisely characterize a parallel time
complexity class (up to constant factors) in terms of a sequential one. For
example, Sudborough's ``hardest DCFL'' [36] provides a natural example of a
problem complete for CROW-PRAM time O(log n). Complete problems have
been discovered by Chandra and Tompa for CRCW-PRAM time classes [3].
We know of no analogous natural problems that are complete for CREW-
PRAM time classes. Following an earlier version of our paper [12], Lange
and Niedermeier [22] established characterizations of other PRAM variants
in terms of sequential complexity classes.
We use the DCFL characterization to demonstrate the stability of
CROW-PRAM complexity classes under definitional changes. For example,
it follows from the DCFL simulation that a CROW-PRAM can be simulated
with constant factor time loss by a parallel machine on which there is no global memory,
but each processor contains a single externally visible register, that may be
read (but not written) by any other processor. This model seems to be closer
to the way that some parallel machines have actually been constructed than
models with an independent global memory not associated with any processor.
The DCFL recognition algorithms of von Braunmühl et al. [38] and Klein
and Reif [20] are difficult ones, and use superficially different approaches. The
third goal of this paper is to provide a unified approach to both problems,
which, although based on both, we believe to be simpler than either.
We obtain both a small time parallel algorithm and a small space sequential
algorithm for DCFL recognition using the same basic approach. The
small space algorithm provides an improvement to a result by Rytter [33],
and a technical refinement to the optimal results of von Braunmühl et al. [38].
Rytter had shown, using a sequential implementation of [20], that it is possible
to obtain a polynomial time, O(log² n) space algorithm for DCFL recognition
using space mainly as a pushdown store (more precisely, a log n space
DauxPDA with an O(log² n) bounded pushdown), rather than unrestricted
O(log² n) space as in [38]. We improve these results by performing our simulation
on a DauxPDA (like Rytter) while attaining a time-space product
similar to that of von Braunmühl et al.
Section 2 presents the CROW-PRAM model, and discusses variations
in the definition. Section 3 presents its simulation by deterministic
auxiliary pushdown automata, establishing CROW-PRAM-TIME(log n) ⊆
LOGDCFL. Section 4 introduces some definitions and notation needed in
our DCFL recognition algorithm. Section 5 presents a high level description
and correctness proof of the DCFL recognition algorithm. Section 6
discusses CROW-PRAM implementation of the algorithm, establishing the
other inclusion needed for Theorem 1, i.e., LOGDCFL ⊆ CROW-PRAM-
TIME(log n). Finally, Section 7 refines the simulation of Section 5 to obtain
a faster sequential algorithm than that obtained by combining the CROW-
PRAM algorithm of Section 6 with the general simulation of Section 3.
Further work involving the owned global memory concept in PRAMs
has appeared following a preliminary version of this paper [12]. Fich and
Wigderson give a lower bound separating EROW and CROW PRAMs [14].
Rossmanith introduces and studies Owner Read, Owner Write PRAMs,
showing, for example, that they can do list ranking in O(log n) time [31].
Nishimura considers the owner concept in CRCW-PRAMs [29]. Nieder-
meier and Rossmanith [27, 26] have considered the owner concept with other
PRAM variants. Lin, et al. show that CROW-PRAMs are sufficiently powerful
to execute a variant of Cole's parallel merge sort algorithm in time
O(log n) [23]. Work on further restrictions of the CROW-PRAM model by
Lam and Ruzzo [21] and Dymond, et al. [11] is described at the end of section
two.
2 Definition of CROW-PRAMs
We start by defining the CREW-PRAM model we will use. As mentioned
above, most of the details of the definition are not critical. For specificity we
use the definition of Fortune and Wyllie [15] (called simply a P-RAM there)
which has: an unbounded global memory and an unbounded set of processors,
each with an accumulator, an instruction counter and an unbounded local
memory. Each memory cell can hold an arbitrary non-negative integer. The
instruction repertoire includes indirect addressing, load, store, add, subtract,
jump, jump-if-zero, read, fork, and halt. The input is placed in a sequence of
special read-only registers, one bit per register. The read instruction allows
any processor to read any input bit; concurrent reads are allowed. A fork
instruction causes a new processor to be created, with all local memory cells
zero, and with its accumulator initialized to the value in the accumulator
of its creator. Initially, one processor is active, with its local memory zero,
and the length of the input given in its accumulator. The model accepts if
the initially active processor halts with a one in its accumulator. It rejects
if two processors attempt to write into the same global memory location at
the same time.
These CREW-PRAMs do not have "processor numbers" or "IDs" as a
built-in concept, but we will need them. We adopt the following processor
numbering scheme. The (unique) processor active initially is numbered 0;
the first child processor created by processor i will be numbered 2i
second will be numbered 2 1). This
corresponds to the natural embedding of an arbitrary tree (the processor
activation tree) into a binary tree by the rule "eldest child becomes right
child, next younger sibling becomes left child." Reverse-preorder traversal of
the activation tree and the binary tree are identical. As we will see, many
other numbering schemes will also work; this one is fairly natural. Processors
do not automatically "know" their number, but it is easy to program them
to compute it, if needed.
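A sketch of that computation, under the numbering just described (our
illustration, in C++ for concreteness):

// Number of the j-th child (j >= 1) created by processor i: the eldest
// child of i is 2i+1 (its right child in the binary-tree embedding), and
// each next-younger sibling is the left child of the previous one,
// giving 2^{j-1} * (2i + 1).
long child(long i, int j)
{
    long c = 2 * i + 1;
    for (; j > 1; j--)
        c = 2 * c;
    return c;
}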
Definition. A CROW-PRAM algorithm is a CREW-PRAM algorithm
for which there exists a function owner(i; n), computable in deterministic
space O(log n), such that on any input of length n processor p attempts to
write into global memory location i only if p = owner(i; n).
The intuitive definition given earlier said that the owner function should
be "simple". We have particularized this by requiring that it be log-space
computable and that it be oblivious, i.e., independent of the input, except for
its length. We have not required that the model detect ill-behaved programs,
i.e., ones that attempt global writes in violation of the ownership constraint.
Such programs simply are not CROW programs. These seem to be natural
choices, but we will also show that our main results are fairly insensitive to
these issues. We could generalize the model in any or all of the following
ways:
G1. Allow the owner function to depend on the input.
G2. Allow the owner function to depend on time.
G3. Allow "bounded multiple ownership", i.e., owner(i; n) is a set of size
O(1) of processor numbers.
G4. Allow ill-behaved programs, by defining the model to halt and reject
if an attempted write violates the ownership constraint.
G5. Allow any processor numbering scheme that gives processors unique
numbers and allows one to compute in logarithmic space the parent of
a given processor p, the number of older siblings it has, and the number
of its kth child.
G6. Allow the owner, parent, and sibling functions above to be computable
by a deterministic log-space auxiliary pushdown automaton that runs
in polynomial time.
Alternatively, we could restrict the model in any or all of the following ways:
R1. Require that the owner function be the identity, owner(i; n) = i.
This is equivalent to saying that the machine has no global memory;
instead it is a collection of processors each with a private local memory
and one globally readable "communications register"
R2. Require that processors use only O(1) local memory locations.
R3. Require that the machine be write-oblivious, i.e., the times and locations
of writes to global memory are independent of the input, except
for its length.
One consequence of our results is that CROW-PRAMs, even ones satisfying
only the relatively weak conditions G1-G6, can be simulated by CROW-
PRAMs satisfying the strict conditions R1-R3, with only a constant factor
increase in time and a polynomial increase in number of processors.
Is it possible that CREW- and CROW-PRAMs have equivalent power?
On the positive side, conditions G1-G6 are fairly generous. It is difficult to
imagine a protocol by which a PRAM algorithm could achieve write-exclusion
that would not be covered by these. For example, note that a general CREW-
PRAM algorithm can be considered to be a CROW-PRAM algorithm where
the owner function is allowed to be input- and time-dependent (conditions
G1 and G2 above) and in some sense computable by a CREW-PRAM in real-
time. We know that, say, CREW-PRAM time O(log n) can be simulated by
a logarithmic space deterministic auxiliary pushdown automaton that runs in
time n O(log n) , so real-time CREW-PRAM computable functions may not be
that different from n O(1) DauxPDA computable ones. Thus it seems possible
that time on CROW-PRAMs and CREW-PRAMs might be identical. At
least, this provides some intuitive support for the empirical observation that
most known CREW-PRAM algorithms are CROW-PRAM algorithms.
In one context, we know the two models are equivalent. Following the
appearance of an extended abstract of this paper [12], Ragde (personal com-
munication; see also Fich [13], Nisan [28]) observed that nonuniform CROW-
PRAMs, i.e., ones having arbitrary instructions, exponentially many processors
initially active, and allowing different programs for each value of
n, running in time t are equivalent to Boolean decision trees of depth 2 t .
Nisan [28] established that for any set recognized by a (nonuniform) CREW-
PRAM in time t(n), for each n there is an equivalent Boolean
decision tree problem of depth 2 O(t(n)) . Taken together these results show time
on the two models is the same up to a constant factor in the nonuniform
setting. This leaves open the stronger conjecture that any set recognized
by a CREW-PRAM in time log n can be recognized on a CROW-PRAM in
time O(log n), both of the ordinary, uniform variety and both using polynomially
many processors. Note that Nisan's simulation of CREW by CROW
uses nonuniformity in a fundamental way and uses 2 2 t(n) (i.e., doubly
exponentially many) initially active processors, and that in his nonuniform
model all languages are recognizable in
O(log n) steps.
In one restricted setting we know the two (uniform or nonuniform) models
to be different. Suppose processors 1 through n are active, each knows one
bit b i , and we want to compute the "or" of these bits, given that at most one
b i is 1. A CREW-PRAM can solve this in one step: any processor having a 1
bit writes it into global location 0. No write-conflict can happen since there is
at most one 1 bit. However, Marc Snir (personal communication) has shown
that a CROW-PRAM requires \Omega(log n) steps to solve this problem from the
same initial state.
Snir's result does not settle the general question, however. The problem
discussed above is defined only when at most one input bit is one. (This
has been called a "partial domain" by Fich, in contrast to the more usual
situation where an algorithm is required to produce a correct answer on all
n-bit input sequences.) We know from the results of Cook, et al. [7] that even
a CREW-PRAM requires time \Omega(log n) to test whether its input contains at
most one 1 bit. Conceivably, a CREW algorithm that exploited something
like Snir's "or" could always be transformed into a CROW algorithm by using
this "preprocessing" time to better advantage.
The only full domain problem known to us where (uniform) CREW-
PRAMs seem more powerful than CROW-PRAMs is the recognition problem
for unambiguous context-free languages. For this problem Rytter [34] has
given an O(log n) CREW-PRAM algorithm that appears to use the power of
non-owner exclusive writes in a fundamental way. Loosely speaking, it seems
that the unambiguity of the underlying grammar allows one to repeatedly
exploit a feature like Snir's "or".
While CROW-PRAMs appear to be nearly as powerful as CREW-
PRAMs, it is interesting to compare them to a possibly weaker parallel model,
the parallel pointer machine of Dymond and Cook [10]. PPMs consist of an
unbounded pool of finite-state transducers, each with a finite set of pointers
to other processors. A PPM operates by sensing the outputs of its neighboring
processors, and moving its pointers to other processors adjacent to
its current neighbors. Cook proposed such a model as an example of the
simplest possible parallel machine with "variable structure" [6].
Lam and Ruzzo [21] establish that time on PPMs is linearly related to
time on a restricted version of the CROW-PRAM, on which doubling and
adding one are the only arithmetic operations permitted. (In fact, they also
showed a simultaneous linear relationship between the amounts of hardware
used on the two machines.) Our conjecture that the CROW-PRAM's ability
to access two-dimensional arrays in constant time cannot be directly emulated
on a CROW-PRAM whose arithmetic capability is so limited has been
proved recently by Dymond, et al. [11]. Since two-dimensional arrays appear
to play an important part in the DCFL simulation algorithm of Section 6,
this suggests that quite different techniques would be needed to recognize
DCFLs in time O(log n) on the PPM, if this is indeed possible. An analogous
nonconstant lower bound on two dimensional array access was proved
for sequential unit cost successor RAMs by Dymond [9].
3 Simulation of CROW-PRAMs by
DauxPDAs
In this section we will prove the first half of Theorem 1, namely:
Theorem 2 Any set recognized in time O(log n) on a CROW-PRAM is
in LOGDCFL.
Recall that LOGDCFL is the class of languages log space reducible to
deterministic context-free languages. Sudborough [36] defined the class, and
characterized it as the set of languages recognized in polynomial time on a
logarithmic space deterministic auxiliary pushdown automaton.
The main construction is similar to analogous ones given by Pratt and
Stockmeyer [30], Fortune and Wyllie [15], and Goldschlager [17] showing
that PRAM time log n is contained in DSPACE(log 2 n). We define three
mutually recursive procedures:
state(t; p) returns the state of processor p at time t, i.e., after the t th instruction
has been executed.
local(t; p; i) returns the contents of location i of the local memory of processor
p at time t.
global(t; i) returns the contents of global memory location i at time t.
Each depends only on the values of these procedures at time t \Gamma 1, so the
recursion depth will be at most t. Furthermore, each procedure will require
only O(log n) bits of local storage, so by well-known techniques these procedures
can be implemented on a logarithmic space deterministic auxiliary
PDA whose pushdown height is at most O(log 2 n). This much of the proof
is essentially the same as in [15, 17, 30]. The main novelty with our proof
is that our algorithm runs in polynomial time, rather than time n O(log n) as in
the earlier results. This is possible because the owner function allows us
in global(t; i) to directly identify the only possible writer of global memory
location i at time t \Gamma 1. This allows each of our procedures to make only
O(1) recursive calls per invocation, which gives a polynomial running time.
If we were simulating a general CREW-PRAM algorithm, it would appear
necessary to check all processors at time t \Gamma 1 to see whether any of them
wrote into i, and if so, whether more than one of them did. This appears to
require more than polynomial time.
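To make this concrete, here is a minimal Python sketch of the owner-function trick (our own illustration, not the paper's code; the helpers owner, state, local, and instruction are stand-ins for the procedures described above):

def global_mem(t, i, owner, state, local, instruction):
    # Contents of global memory location i at time t; at t = 0 all of
    # global memory is zero. Because only owner(i) may ever write into
    # location i, exactly one candidate writer is inspected at time t - 1,
    # so each invocation makes only O(1) recursive calls.
    if t == 0:
        return 0
    p = owner(i)
    s = state(t - 1, p)                  # s = (instruction counter, accumulator)
    op, arg = instruction(s)             # e.g. ("global indirect store", l)
    if op == "global indirect store" and local(t - 1, p, arg) == i:
        return s[1]                      # the stored accumulator value
    return global_mem(t - 1, i, owner, state, local, instruction)

A CREW simulation has no such unique candidate and would have to examine every processor at time t - 1, which is exactly the step this lookup avoids.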
Extensions to these basic procedures to accommodate generalizations G1-
G6 are quite direct, except for G4, ill-behaved programs. G4 is also possible,
but more delicate, since in effect we must check at each step that none of
the many non-owners attempts to write to a global cell, while maintaining
the property that our algorithm makes only O(1) recursive calls per invoca-
tion. (It is possible that a similar generalization of the CREW model would
increase its power.)
Proof of Theorem 2: Detailed descriptions of the three procedures follow.
A typical PRAM instruction is "global indirect store l", whose meaning is
"store the accumulator into the global memory location whose address is
given by the contents of local memory location l". We will not describe the
rest of the PRAM's instruction set in great detail; see Fortune and Wyllie [15].
The state of processor p at time t is an ordered pair containing the
instruction counter, and the contents of the accumulator of p at the
end of the t th step. We define three auxiliary functions accumulator(S),
instruction-counter(S), and instruction(S), that, for any state S, give the accumulator
portion of S, the instruction counter portion of S, and the instruction
pointed to by the instruction counter of S, respectively. Assume that a
value of 0 in the instruction counter designates a "halt" instruction, which by
convention will be the instruction "executed" in each step before processor
p is activated and after it has halted. Also, assume that instruction(S) will
be a "halt" instruction if it is not otherwise defined, e.g., after a jump to a
location beyond the end of the program. It is convenient to assume that the
local memory of a processor is set to zero as soon as it halts, but its accumulator
retains its last value. We assume that processor 0 initially executes
instruction 1, and that a processor activated by a "fork l" instruction initially
executes instruction l. We also assume that each processor maintains
in local memory location 0 a count of the number of "fork" instructions it has
executed. (This count should be initially 0, and is incremented immediately
after each "fork" is executed.) It is easy to modify any PRAM algorithm
to achieve this. We also use two functions parent(p) and sibling-count(p)
that, for any processor number p, return the processor number of the parent
of p, and the number of older siblings of p, respectively. For the processor
numbering scheme we have chosen these functions are very easy to compute.
Namely, if k is the largest integer such that p is evenly divisible by 2 k , then
sibling-count(p) = k and parent(p) = ((p=2 k ) \Gamma 1)=2.
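As a sanity check on the numbering scheme, the following Python sketch (our own illustration; kth_child is a hypothetical helper name) implements these formulas and verifies that the two directions agree:

def sibling_count(p):
    # The number of older siblings of p is the largest k with 2^k dividing p.
    k = 0
    while p % 2 == 0:
        p //= 2
        k += 1
    return k

def parent(p):
    # Strip the factor 2^k, leaving the odd part 2i + 1; the parent is i.
    while p % 2 == 0:
        p //= 2
    return (p - 1) // 2

def kth_child(i, k):
    # The k-th child (k = 1, 2, ...) of processor i is 2^(k-1) * (2i + 1).
    return (2 * i + 1) << (k - 1)

for i in range(100):
    for k in range(1, 6):
        c = kth_child(i, k)
        assert parent(c) == i and sibling_count(c) == k - 1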
procedure Simulate-CROW-PRAM
comment: Main Program.
begin
T := c \Delta dlog ne comment: An upper bound on the running time
of the PRAM.
if state(T; 0) = (0; 1) then accept else reject
function global(t; i)
comment: Returns the contents of global memory location i at time t.
begin
if t = 0 then return 0
p := owner(i; n)
S := state(t \Gamma 1; p)
if instruction(S) = "global indirect store l" and local(t \Gamma 1; p; l) = i
then return accumulator(S)
else return global(t \Gamma 1; i)
function local(t; p; i)
comment: Return the contents of local memory location i of processor
p at time t.
begin
if t = 0 then return 0
S := state(t \Gamma 1; p)
case instruction(S) of
"local store i" : return accumulator(S)
"indirect local store l" : if local(t \Gamma 1; p; l) = i
then return accumulator(S)
else return local(t \Gamma 1; p; i)
otherwise : return local(t \Gamma 1; p; i)
function state(t; p)
comment: Return the state of processor p at time t.
begin
if t = 0 then
if p = 0
then comment: AC is initially length of input.
return (1; n)
else comment: All other processors are idle at time 0.
return (0; 0)
S := state(t \Gamma 1; p)
AC := accumulator(S)
IC := instruction-counter(S)
case instruction(S) of
"load i" : return (IC + 1; local(t \Gamma 1; p; i))
"indirect load l" : return (IC + 1; local(t \Gamma 1; p; local(t \Gamma 1; p; l)))
"global indirect load l" : return (IC + 1; global(t \Gamma 1; local(t \Gamma 1; p; l)))
"add", "sub", "read", the stores, etc. :
similar to "load"
"conditional jump l" : if AC 6= 0
then return (l; AC)
else return (IC + 1; AC)
"halt" : comment: p is idle; check whether its
parent activated p in this step.
S 0 := state(t \Gamma 1; parent(p))
if instruction(S 0 ) = "fork l" and local(t \Gamma 1; parent(p); 0) = sibling-count(p)
then return (l; accumulator(S 0 ))
else comment: p not activated; just pass AC.
return (0; AC)
Correctness of the simulation is a straightforward induction on t. Implementation
of the procedures on a DauxPDA is also easy. Note that each
procedure has local variables requiring at most O(log n) bits of storage, so
the DauxPDA needs only that much space on its work tape. The recursion
depth is equal to the PRAM's running time, i.e., O(log n), so the pushdown
height will be at most the product of those two quantities, i.e., O(log 2 n).
Each procedure makes at most O(1) recursive calls per recursive level, so the
running time of the simulation is (O(1)) O(log n) = n O(1) . This completes the
proof of Theorem 2. 2
The simulation given above is easily adapted to accommodate the generalizations
G1-G6 to the definition of CROW-PRAMs proposed earlier. Allowing
a more general owner function, say depending on the input or on time
(G1,G2) is trivial - just add the appropriate parameters at each call. Using
a different processor numbering convention is equally easy, provided that
parent(p), and sibling-count(p) are easily computable (G5). Allowing these
functions to be log-space and polynomial time DauxPDA computable will
not affect the asymptotic complexity bounds (G6). Bounded multiple ownership
(G3), is also easy - in the global procedure, where we check whether
the owner of global memory cell i wrote into it, we would now need to check
among the set of owners to see if any of them wrote. Since this set is only of
size O(1), the running time would still be polynomial.
Changing the procedures to accommodate ill-behaved PRAM algorithms
(G4) is more subtle. The first change required is that we must now determine
the exact running time T a of the algorithm. Using some upper bound
T ? T a could cause us to falsely reject due to an invalid global store by
some processor after T a . The value of T a is easily determined by evaluating
state(t; 0) for successive values of t until processor 0 halts and accepts. (If it does not
accept, there is no need to worry about ownership violations.) The second,
and more interesting change, is to check all "store" instructions by all active
processors p up to time T a , basically by doing a depth-first search of the
processor activation tree.
procedure Simulate-G4-CROW-PRAM
comment: Modified Main Program, incorporating G4.
begin
t := 0
while instruction(state(t; 0)) 6= "halt" do t := t + 1
if accumulator(state(t; 0)) 6= 1 then halt and reject
T a := t
treewalk(0; 0)
halt and accept
procedure treewalk(t; p)
comment: "Visit" processor p at each time - between t and T a , and
any descendants created during that interval. For each,
verify that no non-owner writes occur.
begin
for - := t to T a do
store l" and
then halt and reject; comment: Owner violation; quit.
then
Correctness of this procedure is argued as follows. If the CROW-PRAM
algorithm has no owner write violations, then the procedure is correct, as
before. On the other hand, suppose there is a violation, say at time t by
processor p. Our procedures correctly determine the state of the PRAM up
t. After time t, the state of the PRAM is undefined, whereas
our procedure calls return values as if the violation had not occurred. How-
ever, eventually treewalk will detect the fault. It may reject when evaluating
some other processor's state, on a branch of the processor
activation tree that happens to be explored before p's branch. At the latest,
however, it will detect the fault after evaluating state(t; p). We can count
on this, since our simulation is faithful up to time t \Gamma 1, and the state of
the PRAM at that time contains all the information we need to deduce that
processor p is active at time t, and about to execute a "store" in violation
of the ownership constraint. Hence, eventually we will evaluate state(t; p),
detect the fault, and halt.
The running time of this algorithm is still polynomial, since treewalk
is called exactly once for each active processor p, and there are at most
polynomially many processors to be checked.
Thus we have shown the following.
Theorem 3 Any set recognized in time O(log n) on a generalized
CROW-PRAM, i.e., one satisfying generalizations G1-G6 of the basic defi-
nition, is in LOGDCFL.
This completes the proof of the "only if" direction of Theorem 1. The
converse is shown in the following sections.
4 DPDA Definitions and Notation
We assume familiarity with deterministic pushdown automata (DPDA), as
defined for example by Harrison [19], as well as standard variations on this
model.
Our DPDAs have state set Q, input alphabet \Sigma and pushdown alphabet
\Gamma. The empty string is denoted by ffl, the length of string S by jSj, and string
concatenation by "\Delta". At each step either the current topmost pushdown
symbol is popped off the pushdown, or a single new symbol is pushed onto
the pushdown above the current symbol. We assume the transition function
is defined for every possible state, input symbol and pushdown symbol.
Thus the DPDA always has exactly one applicable move.
The DPDA begins in state q 0 with fl as the initial pushdown contents,
with the input head at the left of the input, and accepts by entering state q a
with fl as the only pushdown contents after having advanced the input head
to the right end of the input. We assume the DPDA never fails to read all
the input and always empties its pushdown of all symbols except fl at the
end of the computation. Furthermore, we assume that for all oe 2 \Gamma there is
some transition pushing oe. By standard techniques (see, e.g., Harrison [19,
Section 5.6]), there is a constant c ? 0 such that the DPDA can be assumed
to have the above properties and to halt in time c \Delta n at most, with maximum
pushdown depth n, on any input of length n.
The efficient simulation of a DPDA to be described makes use of the
concepts of surface configuration and instantaneous description, which are
defined relative to a particular input
configuration is a triple (q; i; oe) where q is a state, i is an integer coded in binary
between 0 and n representing the position of the input head, and oe
represents the topmost pushdown symbol. The set of all surface configurations
is denoted U . An instantaneous description (id) of the DPDA is a pair
hu; Si where u is a surface configuration and S is a string representing
all but the topmost symbol of the pushdown (with bottommost pushdown
symbol represented by the rightmost position of S). For convenience, we
refer to S as the stack. Thus, the initial id is h(q 0 ; 0; fl); ffli and the unique accepting
id is h(q a ; n; fl); ffli. An id where the stack component is ffl is called an
ffl-id. (Note an ffl-id corresponds to a pushdown of one symbol, in the surface
configuration.) For an id I = hu; Si we define height(I) to be jSj, and define
projection functions surface(hu; Si) = u and stack(hu; Si) = S.
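For concreteness, one possible Python representation of ids and the projections just defined is sketched below (the data-structure choices are our own, for illustration only):

from dataclasses import dataclass

@dataclass(frozen=True)
class Id:
    surface: tuple   # surface configuration (state q, input position i, top symbol)
    stack: str       # symbols below the top one; bottommost symbol is rightmost

def height(I):
    return len(I.stack)

def pad(I, s2):
    # Bottom-padding hu; S1i . S2: place the symbols of s2 below the stack.
    return Id(I.surface, I.stack + s2)

# An eps-id has height 0; bottom-padding raises the height by len(s2).
I = Id(("q0", 0, "gamma"), "")
assert height(I) == 0 and height(pad(I, "ab")) == 2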
A surface configuration (q; i; oe) is said to be popping if the transition
defined for q, x i and oe pops the pushdown, and is pushing otherwise. An id
is popping or pushing as its surface configuration is popping or pushing.
We write I 1 ' I 2 if id I 2 follows from id I 1 in one step of the DPDA on
input x, I 1 ' t I 2 if I 2 follows I 1 in exactly t steps, and I 1 ' * I 2 if I 1 ' t I 2
for some t - 0.
By our definition, ids only represent configurations of the machine with
at least one pushdown symbol; if I 1 is a popping ffl-id there is no id I 2 such
that I 1 ' I 2 . Thus, a popping ffl-id is said to be blocked. This is true even
though the DPDA makes one final move from I 1 (depending on the input
symbol, state, and single pushdown symbol in the surface configuration) to
empty its pushdown.
For convenience we assume the final accepting configuration is defined
to pop, so that it will be a blocked id. We denote by hu; S 1 i \Delta S 2 the id
hu; S 1 \Delta S 2 i, i.e., the id hu; S 1 i modified so that the symbols of S 2 are placed below
the symbols of S 1 on the stack. We illustrate some of the notation with three
useful propositions.
Proposition 4 (Bottom-padding) For all surface configurations u; v and
strings S 1 ; S 2 ; T : if hu; S 1 i ' t hv; T i, then hu; S 1 \Delta S 2 i ' t hv; T \Delta S 2 i.
Note the converse is not true in general, but is in the following case.
Proposition 5 (Bottom-unpadding) For all surface configurations u; v
and strings S 1 ; S 2 ; T : if hu; S 1 \Delta S 2 i ' t hv; T \Delta S 2 i,
and no id in this computation has height less than jS 2 j,
then hu; S 1 i ' t hv; T i.
Proposition 6 (Block-continuation) For all surface configurations u; v
and strings S 1 ; S 2 : if hu; S 1 i ' t hv; ffli where hv; ffli is blocked, then
hu; S 1 \Delta S 2 i ' t hv; S 2 i, and hv; S 2 i is blocked only if S 2 = ffl.
In addition to the restrictions on DPDAs discussed above, we assume that
no id can occur twice in a computation of the DPDA when started at any
given id [19, Section 5.6]. This justifies using ids as references to particular
points in computations. E.g., if I ' t J , we could refer to the id J to uniquely
identify the point in the computation t steps after id I.
5 The Basic DPDA Simulation Algorithm
We now will describe a procedure to efficiently simulate a DPDA on input
x of length n. Our algorithm is motivated by the "repeated doubling" idea
used, e.g., by Fortune and Wyllie [15, 39] and Klein and Reif [20], which can
be described in our setting as follows.
Suppose we have computed, for all surface configurations u 2 U and all
strings S, the function D k , where D k (hu; Si) is the id reached after exactly 2 k
steps of the DPDA starting from hu; Si. Then we could easily compute the 2 k+1 step
transition function D k+1 by composing D k with itself: D k+1 (hu; Si) = D k (D k (hu; Si)).
However, efficiency considerations preclude defining D k for all possible stacks.
Observing that in a computation of 2 k steps only the top 2 k symbols of the
stack are accessed, S can be "split" by writing S = S 1 \Delta S 2 , where S 2 contains
everything after the first 2 k symbols of S. (S 2 will be empty if S has length
at most 2 k .) Then the above could be rewritten as D k+1 (hu; S 1 \Delta S 2 i) = D k (D k (hu; S 1 i) \Delta S 2 ).
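In Python, the naive doubling scheme reads roughly as follows (our own sketch; step is an assumed single-move function and blocked ids are ignored for brevity). The text's point is that tabulating D over all reachable stacks is exactly what makes this approach too expensive:

def D(step, k, I):
    # Run exactly 2^k DPDA moves from id I by composing the 2^(k-1)-step
    # function with itself: D_{k+1} = D_k o D_k.
    if k == 0:
        return step(I)
    return D(step, k - 1, D(step, k - 1, I))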
Although this could be used to limit the number of stacks considered to those
of length at most 2 k , there are still too many for a polynomial number of
processors to compute in O(log n) time. A key observation in constructing
an efficient algorithm is that the number of stacks that need to be considered
can be much more limited than suggested above. It will be shown that it is
sufficient to consider a polynomial-sized set of stacks, provided we use both
stack splitting and a somewhat more complicated doubling technique. To
simplify the set of stacks considered, we compute a function \Delta k in place of the
D k described above, that gives the result after at least 2 k steps rather than
exactly 2 k steps. The advantage is that we can use appropriately chosen
break points to keep the stacks simple.
We first describe the algorithm assuming that all of the stacks are explicitly
manipulated. In Section 6, we describe a PRAM implementation that
avoids this by using a more succinct representation than the stacks them-
selves. Two functions on ids are used, \Delta k and LOW k , each of which is
defined inductively on the parameter k. For an id I 1 , \Delta k (I 1 ) returns an id
I 2 that results after t steps of the DPDA starting in id I 1 . The value of t
is implicitly determined by the algorithm itself, but it will be shown that
t - 2 k unless a blocked id is reached from I 1 in less than 2 k steps - in this
case t is the number of steps needed to reach the blocked id. Formally, for
ids I 1 and I 2 with \Delta k (I 1 ) = I 2 , \Delta k will satisfy:
I 1 ' t I 2 , where t - 2 k or I 2 is blocked. (1)
The function LOW k (I 1 ) returns the id I 2 that is the id of lowest height
among all ids in the computation from I 1 to \Delta k (I 1 ) inclusive, and if there is
more than one id of minimal height in this computation, is the earliest such
id, i.e., the one closest to I 1 . More formally, if LOW k (I 1 ) = I 2 , then:
(a) I 1 ' r I 2 ' s \Delta k (I 1 ) for some r; s - 0,
(b) height(I 2 ) - height(J) for every id J with I 1 ' J ' \Delta k (I 1 ), and
(c) height(J) ? height(I 2 ) for every id J 6= I 2 with I 1 ' J ' I 2 . (2)
Given these definitions, to determine if the DPDA accepts x, it is sufficient
to check whether \Delta dlog cne (h(q 0 ; 0; fl); ffli) = h(q a ; n; fl); ffli,
since the DPDA runs in time at most c \Delta n on any input of length n.
As discussed above it is necessary to restrict the number of stacks on
which \Delta k must be defined. By careful definition of \Delta k , the information needed
to compute \Delta k+1 from \Delta k can be restricted to consideration of ids whose stack
contents are suffixes of stacks produced by \Delta k operating on ffl-ids, of which
there are only polynomially many, O(n) in fact. To state this more precisely,
we define SS k (mnemonic for "simple stacks") to be the set of strings over
\Gamma that represent the bottom portions of stacks in ids in the range of \Delta k
operating on all ffl-ids, i.e.,
SS k = fS j S is a suffix of stack (\Delta k (hu; ffli)) for some u 2 Ug:
Because jU j = O(n), SS k contains O(n 2 ) elements - one for each u 2 U
and for each suffix of the unique stack determined by u. To motivate this
[Figure 1: Illustrating SS k - a plot of stack height versus time.]
definition of SS k , consider the diagram in Figure 1, plotting stack height
versus time in a part of a computation of the DPDA. The diagram shows
a stack S 1 built up by a \Delta k -computation starting from hu; ffli. There must
be a complementary computation, starting at hv; S 1 i that eventually empties
this stack. In Figure 1, part of S 1 is removed in the computation starting at
hv; S 1 i and continuing to hw; S 2 i. The rest of S 1 (consisting of S 2 ) is removed
later beginning at hy; S 2 i. Note that S 2 is a suffix of S 1 - which illustrates
why SS k contains not only stacks arising from \Delta k operating on ffl-ids, but
also all suffixes of such stacks.
We will show later that for k - 0, the stacks in SS k+1 are further restricted
in that each is the concatenation of two strings in SS k , i.e.,
SS k+1 is contained in SS k \Delta SS k . (3)
For technical reasons, it will be important to maintain the information specifying
how a stack in SS k+1 is split into two stacks from SS k , rather than
simply treating stacks as undifferentiated character strings. In the interest
of simplicity, however, we will largely ignore this issue in the current section.
It will be treated fully in Section 6.
In arguing the correctness of our algorithm, we prove the following by
induction on k.
\Delta k and LOW k are well-defined
for all ids with stacks from SS k ; and \Delta k , LOW k ,
and SS k satisfy properties (1), (2) and (3) above, respectively.
The crux of our algorithm and its correctness proof is captured by the following
lemma, which shows that we can progress at least 2 k steps in the simulation
while simultaneously restricting attention to a limited set of stacks,
by applying \Delta k only at selected low points.
Lemma 7 (The LOW-\Delta Lemma) Let I = hu; Si be an id with S 2 SS k ,
let L = LOW k (I), and let J = \Delta k (hsurface(L); ffli) \Delta stack(L). Then:
(a) I ' t J for some t - 0,
(b) if J is unblocked, then t - 2 k , and
(c) stack(J) 2 SS k \Delta SS k .
Proof: See Figure 2, which plots stack height versus time in the computation
of the DPDA. There are three distinct cases. In the first and
simplest (not shown in the diagram), the DPDA blocks (attempting to pop
when stack height is zero) before completing 2 k steps. In the second, the
\Delta k -computation from hsurface(L); ffli blocks before completing 2 k steps, but
we will argue that the overall LOW-\Delta computation does complete at least
steps. In the third case, none of the sub-computations block.
Part (a) follows directly from properties (1) and (2).
From correctness property (2), L is the lowest point in the computation
from I to L (at least), so stack(L) must be a suffix of stack(I), which
is in SS k by assumption. Thus, stack(L) is in SS k . From the definition of
SS k , stack (\Delta k (hsurface(L); ffli)) is also in SS k . Thus, stack(J) is in SS k \Delta SS k ,
satisfying (c).
Now assume J is unblocked. Let M = \Delta k (hsurface(L); ffli), so that J =
M \Delta stack(L). If M is itself unblocked, then from the correctness property
(1), J is at least 2 k steps past L and part (b) follows. On the other
hand, if M is blocked but J is unblocked, then stack(L) must have non-zero
height. In this case J cannot precede \Delta k (I), since otherwise the id
succeeding J would be a point of lower height than L in the range from I
[Figure 2: The LOW-\Delta Lemma - stack height versus time.]
to \Delta k (I), inclusive, contradicting correctness property (2). It follows that
\Delta k (I) is unblocked, and part (b) again follows from correctness property (1). 2
The expression for J in the lemma above occurs so frequently that it is
convenient to introduce a special notation for it. We define ~\Delta k (L) to be
\Delta k (hsurface(L); ffli) \Delta stack(L). For example, the LOW-\Delta Lemma shows that
~\Delta k (LOW k (I)) either progresses at least 2 k steps or blocks.
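Reusing the Id/pad sketch from Section 4, ~\Delta k is a one-liner (again our own illustration; delta_k stands for an assumed implementation of \Delta k ):

def delta_tilde(delta_k, L):
    # ~Delta_k(L) = Delta_k(<surface(L), eps>) bottom-padded with stack(L).
    return pad(delta_k(Id(L.surface, "")), L.stack)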
Note that for I; L; J as in the LOW-\Delta Lemma, if height(L) ? 0, then J is
necessarily unblocked, and so ~\Delta k (L) necessarily progresses at least 2 k steps.
The LOW-\Delta Lemma applies to an id I only when its stack is in SS k . We
will need an analogous result when I has a stack consisting of two or three
segments each from SS k . The desired low point in such a stack is found
by the following Iterated LOW function. It will be useful later to define the
function to handle any constant number d of stack segments rather than just
three. See Figure 3.
[Figure 3: I-LOW k - stack height versus time.]
function I-LOW k (hu; S 1 \Delta S 2 \Delta \Delta \Delta S d i) returns id
comment: Assuming I 2 U \Theta (SS k ) d , return the id of a LOW k point
of nonzero height in a computation from I, if one exists. If not,
return the resulting ffl-id.
begin
for i := 1 to d do
hu; Si := LOW k (hu; S i i)
if jSj ? 0 then return hu; S \Delta S i+1 \Delta \Delta \Delta S d i
comment: Every segment emptied.
return hu; ffli
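The same scan, transliterated into Python (our own sketch; low_k is assumed to map a surface and one stack segment to the surface and remaining stack at the LOW k point):

def i_low(low_k, u, segments):
    # Scan the d segments in order; return the first LOW_k point of
    # nonzero height together with the untouched segments below it.
    # If every segment empties, return the resulting eps-id (u, []).
    for i, seg in enumerate(segments):
        u, rest = low_k(u, seg)
        if len(rest) > 0:
            return u, [rest] + list(segments[i + 1:])
    return u, []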
The desired generalization of the LOW-\Delta Lemma is the following.
Lemma 8 (The I-LOW-\Delta Lemma) Let I = hu; Si be an id with S 2
(SS k ) d , let L = I-LOW k (I), and let J = ~\Delta k (L). Then:
(a) I ' t J for some t - 0,
(b) if J is unblocked, then t - 2 k , and
(c) stack(J) 2 (SS k ) d+1 .
Proof: Part (a) follows from properties (1) and (2). Let L = I-LOW k (I). I-LOW k
modifies the stack of its argument only by calling LOW k , so
stack(L) is a suffix of stack(I), and hence by
hypothesis is in (SS k ) d . The stack segment added by the call to ~\Delta k is in
SS k , establishing part (c).
The key point in establishing (b) is that L is a "LOW k
point," hence the LOW-\Delta Lemma can be applied. Specifically, let i 0 be the
last value taken by i in the for loop, and let u 0 be the value taken by u before
the last call to LOW k . Let I 0 = hu 0 ; S i 0 i, and let L 0 = LOW k (I 0 ), which
is the last value taken by hu; Si before return. Then, letting J 0 = ~\Delta k (L 0 )
and T = S i 0 +1 \Delta \Delta \Delta S d , it is easy to see that I ' I 0 \Delta T ,
L = L 0 \Delta T , and J = J 0 \Delta T . Now, the LOW-\Delta Lemma applies to I 0 , L 0 ,
and J 0 . In particular, if J 0
is unblocked, then it is at least 2 k steps past I 0 , hence J is at least 2 k past
I, satisfying part (b). Thus it suffices to show that J 0 is unblocked whenever
J is unblocked. There are two cases to consider. First, suppose I-LOW k
returns because height(L 0 ) ? 0; then, by the remark following the LOW-\Delta
Lemma, J 0 must necessarily be unblocked, and hence so is J. On the other
hand, if I-LOW k returns with height(L 0 ) = 0,
then by inspection T = ffl and J = J 0 . Thus in either case J is
unblocked if and only if J 0 is unblocked, and part (b) follows. 2
In the code for I-LOW k given above we do not indicate how to determine
the decomposition of its stack parameter into d segments from SS k . In brief,
as suggested in the remark following the definition of SS k , we will retain this
decomposition information when the stacks are initially computed. Detailed
explanation of this issue is deferred to the next section.
In defining LOW k , it will be convenient to use an auxiliary function
"min", that takes as argument a sequence of ids and returns the id of minimal
height in the sequence. If there are several of minimal height, it returns the
leftmost; for our applications, this will always be the earliest in time.
Construction: We are finally ready to define \Delta k and LOW k for all k - 0.
[Correctness: Following the parts of the definitions of the functions, we
provide, enclosed in square brackets, appropriate parts of the correctness
arguments establishing that properties (1), (2) and (3) hold.]
Basis (k = 0): For all ids I with stack(I) 2 \Gamma [ ffflg:
\Delta 0 (I) = J if 9 J such that I ' J,
\Delta 0 (I) = I otherwise (i.e., if I is blocked);
and LOW 0 (I) = min(I; \Delta 0 (I)).
[Correctness: By our assumption that for all oe 2 \Gamma there is some transition
pushing oe, we see that SS 0 must be exactly \Gamma [ ffflg, which is exactly the
set of stacks in the domain of \Delta 0 and LOW 0 . By inspection, for all I in
this domain, I ' t \Delta 0 (I) where t = 1, or else t = 0 and \Delta 0 (I) = I is blocked. Thus (1) is
satisfied. (2) holds because there are only two points in the range of points
under consideration, and min selects the lower of these. (3) holds vacuously.]
The inductive definition of \Delta k+1 and LOW k+1 is done in two phases, first
considering ids with empty stacks, which determine SS k+1 , then considering
ids with stacks in SS k+1 \Gamma ffflg.
Inductive Definition of \Delta k+1 and LOW k+1 on empty stacks: (See
Figure 4.) For k - 0, and for all u 2 U :
\Delta k+1 (hu; ffli) = ~\Delta k (LOW k (\Delta k (hu; ffli)))
and LOW k+1 (hu; ffli) = hu; ffli.
Basically, this procedure computes \Delta-LOW-\Delta. Assuming the computation
does not block, the id reached by the first \Delta is 2 k steps past the
starting point, and satisfies the hypothesis of the LOW-\Delta Lemma. Thus,
the subsequent LOW-\Delta pair achieves another 2 k steps progress, and keeps
the resulting stack simple (i.e., in SS k+1 ). This argument is the main ingredient
in the correctness proof, below. The case where the initial id has a
[Figure 4: \Delta k+1 (hu; ffli) - stack height versus time.]
non-empty stack will turn out to be similar, except that we need to precede
this with another LOW or two.
[Correctness: Let I = \Delta k (hu; ffli). Note that stack(I) 2 SS k by the definition of SS k ,
so the hypothesis of the LOW-\Delta Lemma is satisfied by
I. If \Delta k+1 (hu; ffli) is blocked, then (1) is immediately satisfied. If it is not
blocked, then neither is I, so I is at least 2 k steps past hu; ffli, by property
(1). Applying the LOW-\Delta Lemma, ~\Delta k (LOW k (I)) is at least 2 k steps past
I, hence 2 k+1 past hu; ffli. Thus, \Delta k+1 (hu; ffli) also satisfies (1). Clearly hu; ffli
is the earliest id of height zero at or after itself, so property (2) is trivially
satisfied by LOW k+1 (hu; ffli). Property (3) follows directly from the LOW-\Delta Lemma.]
To complete the definition of \Delta k+1 and LOW k+1 we must now define
them on all ids with non-empty stacks S 2 SS k+1 (as defined by \Delta k+1 's
action on ffl-ids).
Inductive Definition of \Delta k+1 and LOW k+1 on non-empty stacks:
Using I-LOW k we define \Delta k+1 (I) and LOW k+1 (I) for all I = hu; Si with
S 2 SS k+1 , S 6= ffl, as the result of the following computations (see Figure 5):
J = ~\Delta k (I-LOW k (I));
\Delta k+1 (I) = ~\Delta k (I-LOW k (J));
and
LOW k+1 (I) = min(I-LOW k (I); I-LOW k (J)):
[Figure 5: \Delta k+1 (I) and LOW k+1 (I) - stack height versus time.]
[Correctness: Property (1) follows immediately by applying the I-LOW-\Delta
Lemma twice. Property (2) is satisfied by LOW k+1 (I) since the two points to
which min is applied subsume all the low points of all the subcomputations
comprising \Delta k+1 . Property (3) is inapplicable.]
We remark that from the I-LOW-\Delta Lemma stack(J) above may consist
of three stack segments, even though stack(I) contains only two. This is the
main reason for defining I-LOW k on more than two stack segments.
Finally, we remark that I-LOW k is the identity function on ids with
empty stack, and is equal to LOW k when d = 1. Thus, when S = ffl,
the above definition reduces to exactly the same computation as given earlier
for the empty stack case, since I-LOW k (hu; ffli) = hu; ffli
in this case. Similarly, this definition of LOW k+1 (I) also suffices in the case
when S = ffl. Thus, one could use these more general definitions to
handle both cases.
This completes the definitions of \Delta and LOW, and the proof of their
correctness. To summarize, the key features of this construction are that
LOW k+1 and \Delta k+1 each require only a constant number of calls to the level k
procedures, they guarantee at least 2 k+1 steps of progress in the simulation, and
they need to be defined on domains of only polynomial size. In the next
two sections we will exploit these features to give fast implementations on
PRAMs, and small space implementations on PDAs.
6 CROW-PRAM Implementation
The one important issue ignored in the discussion so far is the question of
efficiently handling the stacks. To obtain the desired O(log n) running time,
we need to manipulate stacks of
length \Omega(n) in unit time. In particular,
when defining \Delta k+1 (hu; Si) for S 2 SS k+1 = SS k \Delta
SS k , it is necessary to be able to split S into two segments, each a stack
in SS k . This can be done by retaining the information splitting S into its two
components when S is originally constructed by \Delta k+1 .
In fact, the decomposition information is really the only information about
S needed to apply the inductive definitions - the actual contents of the
stacks are never consulted in the definitions, except in the base cases. This
fact allows us to replace the actual stacks with abbreviations, avoiding the
explicit manipulation of long character strings, provided the decomposition
information is kept available.
We now introduce the more succinct notation for stacks, revise the algorithms
using this notation, and then discuss the CROW-PRAM implementation
using this notation.
By definition any stack S 2 SS k is a suffix of stack (\Delta k (hu; ffli)) for some
surface configuration u. We can name S by specifying k; u, and a value h
giving the length of the suffix being considered.
A stack reference of level k - 0, abbreviated "(k)-reference,"
is a pair (u; h) with u 2 U and h - height(stack (\Delta k (hu; ffli))); in this case
the stack reference is said to be valid. A (k)-reference (u; h) is said to have
base u, height h, and level k. For convenience, ffl will also be considered a
valid (k)-reference, denoting the empty stack of height 0.
For k - 0, the algorithm will maintain an array SUMMARY k indexed
by surface configurations. The value stored in SUMMARY 0 [u] will be the
actual symbols of stack ffli)). The value of SUMMARY k+1 [u] will be
a pair of valid (k)-references, which will turn out to (recursively) specify the
actual symbols of stack (\Delta k+1 (hu; ffli)).
A valid (k)-reference (u; h) may refer to any suffix of stack (\Delta k (hu; ffli)).
Thus, it is convenient to extend the summary notation to handle references.
The summary of a valid (k)-reference (u; h) is defined as follows.
For k = 0 it is the length h suffix of SUMMARY 0 [u]. For k - 1 it is
the pair of (k \Gamma 1)-references from SUMMARY k [u] adjusted to height h. This
adjustment is carried out as follows. Suppose SUMMARY k [u] is the ordered
pair of (k \Gamma 1)-references ((v 1 ; h 1 ); (v 2 ; h 2 )). If h ? h 2 , the adjusted summary
is the ordered pair ((v 1 ; h \Gamma h 2 ); (v 2 ; h 2 )); otherwise it is the single
(k \Gamma 1)-reference (v 2 ; h). This corresponds to popping the referenced stack until
the desired height h is reached.
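In Python the adjustment is a few lines (our own sketch; summary_k[u] is assumed to hold the ordered pair of (k - 1)-references described above):

def adjust(summary_k, u, h):
    # Summary of the (k)-reference (u, h) for k >= 1: pop the referenced
    # stack down to height h. The bottom segment survives intact when
    # h > h2; otherwise the suffix lies entirely inside the bottom segment.
    (v1, h1), (v2, h2) = summary_k[u]
    if h > h2:
        return ((v1, h - h2), (v2, h2))
    return ((v2, h),)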
Below, we define variants r k , L k , I-L k , and MIN of the functions \Delta k , LOW k ,
I-LOW k , and min, respectively, of Section 5, that will operate using stack references
and their summary information in place of the stacks themselves.
The function MIN behaves like the version of Section 5, except that it now
returns the surface configuration and height (rather than the full id) of the
leftmost (earliest) of those of its arguments that are of minimum height. The
code for r k only needs to be provided for the case of an empty stack. (The
definition of \Delta k (hu; Si); S 6= ffl was given in Section 5 only to support the
definition of LOW k (hu; Si) and the associated correctness assertions; it was
not otherwise used.) Finally, we note that, in the code below, the contents
of global array SUMMARY k are always set by the function r k before being
referenced by L k .
function r 0 (u: surface) returns (surface, (0)-reference)
comment: Returns the surface and reference corresponding to \Delta 0 (hu; ffli),
and as a side effect, stores stack (\Delta 0 (hu; ffli)) in the global
array SUMMARY 0 [u].
var R: (0)-reference
begin
if u is popping then
SUMMARY 0 [u] := ffl
return (u; ffl)
let hu; ffli ' hv; oei where oe 2 \Gamma
SUMMARY 0 [u] := oe
R := (u; 1)
return (v; R)
function L 0 (u: surface, R: (0)-reference)
returns (surface, (0)-reference)
comment: Returns the surface and reference of the low point in the
interval (u; R) to r 0 (u; R).
var
v: surface; S: string
begin
if u is pushing or R = ffl then return (u; R)
S := summary(R)
let hu; Si ' hv; ffli
return (v; ffl)
function I-L k (u: surface, R 1 ; : : : ; R d : sequence of (k)-references)
returns (surface, sequence of (k)-references)
comment: Returns the surface and a sequence of (k)-references defining
the stack of an unblocked low point (if any) in a computation
starting from (u; R 1 ; : : : ; R d ). The procedure handles any
fixed number d of (k)-references.
var
R: (k)-reference
begin
for i := 1 to d do
(u; R) := L k (u; R i )
if height(R) ? 0 then return (u; (R; R i+1 ; : : : ; R d ))
comment: Every segment emptied.
return (u; ffl)
function r k+1 (u: surface) returns (surface, (k + 1)-reference)
comment: Returns the surface and reference corresponding to \Delta k+1 (hu; ffli),
and as a side effect, stores the summary for
stack (\Delta k+1 (hu; ffli)) in SUMMARY k+1 [u].
var
v 1 ; v 2 ; v 3 : surface; R 1 ; R 2 ; R 3 : (k)-reference; R: (k + 1)-reference
begin
(v 1 ; R 1 ) := r k (u)
(v 2 ; R 2 ) := L k (v 1 ; R 1 )
(v 3 ; R 3 ) := r k (v 2 )
SUMMARY k+1 [u] := (R 3 ; R 2 )
R := (u; height(R 3 ) + height(R 2 ))
return (v 3 ; R)
function L k+1 (u: surface, R: (k + 1)-reference)
returns (surface, (k + 1)-reference)
comment: Returns the surface and reference corresponding to the
low point in the interval (u; R) to r k+1 (u; R); the height of a
sequence of references is the sum of their heights.
var
S 1 ; S 2 ; S 3 : sequence of (k)-references
R 2 : (k)-reference
begin
let R be (w; h)
(u 1 ; S 1 ) := I-L k (u; summary(R))
(u 2 ; R 2 ) := r k (u 1 )
let S 2 be the result of prepending R 2 to the sequence S 1
(u 3 ; S 3 ) := I-L k (u 2 ; S 2 )
(u 0 ; h 0 ) := MIN((u 1 ; height(S 1 )); (u 3 ; height(S 3 )))
return (u 0 ; (w; h 0 ))
Correctness follows from the argument given in Section 5 using the correspondence
elucidated below. We inductively define a string b
R associated
with each valid (k)-reference R as follows. With each valid (0)-reference
R = (u; h), the string b R consists of the length h suffix of the string
stored in SUMMARY 0 [u]. Furthermore, for each k - 0 and each valid
(k + 1)-reference R whose summary is the pair (R 1 ; R 2 ), we
associate (inductively) the string b R = b R 1 \Delta b R 2 .
By induction on k, one can show that b R, the string associated with the
(k)-reference R returned by r k (u), is exactly
stack (\Delta k (hu; ffli)) as defined in Section 5, provided that in the algorithm of
Section 5, each stack b S is decomposed as specified by SUMMARY [S]. (Re-
call that the exact decomposition used in I-LOW k was left unspecified in
Section 5. We note that the proofs given there, in particular the proof of
Lemma 8, the I-LOW-\Delta Lemma, hold for any decomposition of a stack into
substrings each in SS k , although we only need the proofs to
hold for the specific decomposition given by SUMMARY .)
Finally, the functions r k and L k defined above can be used for a time
O(log n) parallel algorithm for DCFL recognition on a CROW-PRAM. The
algorithm tabulates r k , SUMMARY k and L k for successively higher values
of k.
for k := 0 to dlog cne do
for all u 2 U do in parallel
Compute r k (u) and store it in a table in global memory. As a
side effect, store SUMMARY k [u].
for all u; v 2 U and all h - 0 for which (v; h) is a valid
(k)-reference do in parallel
Compute L k (u; (v; h)) and store it in a table in global memory.
Each iteration of the loop can be performed with a constant number of
references to previously stored values of r k ; L k , and SUMMARY k .
The implementation of tables indexed by surface configurations and ref-
erences, and the initialization of a unique processor for every array entry are
done using now-standard parallel RAM programming techniques; see Gold-
schlager [17] or Wyllie [39] for examples. Each surface and reference can
be coded by an integer of O(log n) bits, which can be used as a table sub-
script. These techniques also suffice to implement the above algorithm on a
CROW-PRAM satisfying restrictions R1-R3.
Since there are only O(n) surfaces and O(n 2 ) references, the number of
array entries (and hence the number of processors) can be kept to O(n 3 ) by
reusing array space rather than having separate arrays for each value of k
from 0 to log n. The values of SUMMARY k , for example, can be discarded
as soon as the values of SUMMARY k+1 have been computed.
Thus we have shown the following theorem.
Theorem 9 Every DCFL can be recognized by a CROW-PRAM satisfying
restrictions R1-R3 in time O(log n) with O(n 3 ) processors.
Theorems 2 and 9 together establish Theorem 1. We also obtain the
following corollary.
Corollary 10 CROW-PRAMs satisfying generalizations G1-G6 can be
simulated by CROW-PRAMs subject to restrictions R1-R3 with only a constant
factor time loss and with only a polynomial increase in number of processors.
Proof: It was shown in Section 3 that generalized CROW-PRAMs satisfying
G1-G6 can be simulated by deterministic auxiliary PDAs with log n
space and polynomial time, and thus that languages recognized by such machines
are in Sudborough's class LOGDCFL of languages log space reducible
to deterministic context-free languages. A log space-bounded reduction can
be done on a CROW-PRAM in time O(log n) using the deterministic pointer-
jumping technique of Fortune and Wyllie [15]. See Cook and Dymond [8]
for a detailed description of the simulation of log space by a parallel pointer
machine in O(log n) time, and see Lam and Ruzzo [21] for a simulation of
the latter model by an O(log n) time-bounded CROW-PRAM. This simulation
is easily made to obey restrictions R1-R3. Finally, by Theorem 9, the resulting
language can be recognized by a CROW-PRAM also obeying restrictions
R1-R3. 2
Following appearance of an earlier version of this paper, Monien, et al. [25]
gave a CREW-PRAM algorithm for DCFL recognition that, for any ffl ?
0, uses O(log n) time and n 2+ffl processors. Their algorithm uses functions
similar to ours, and suggests an approach to improving the processor bound
of the CROW-PRAM algorithm of Theorem 9.
7 Small Space Sequential Implementation
In Section 3 we presented an algorithm for simulating an O(log n) time
CROW-PRAM by a deterministic auxPDA using polynomial time and
O(log 2 n) stack height. This, combined with Theorem 9, yields an alternate
proof of the following result of Rytter.
Theorem 11 (Rytter [33].) L is accepted by a polynomial time logarithmic
space DauxPDA if and only if L is accepted by such a machine that
furthermore uses stack height O(log 2 n).
An analogous result was previously known for nondeterministic PDAs
(Ruzzo [32]), but the best result previous to Rytter's for stack height reduction
in DauxPDAs required superpolynomial time (Harju [18]; cf. [32] for an
alternative proof).
Corollary 12 (Harju [18].) DCFLs are in DauxPDA space O(log n) and
stack height O(log 2 n).
The following result is also a corollary.
Corollary 13 (Cook [5]; von Braunmühl, et al. [38].) DCFLs are in
SC 2 , i.e., recognizable simultaneously in polynomial time and O(log 2 n) space.
The time bound for the algorithm sketched above, while polynomial, is
not particularly attractive. As shown by von Braunmühl, Cook, Mehlhorn,
and Ruzzo [38], DCFL recognition is in simultaneous space S(n) and time
O(n 1+ffl =S(n)) on DTMs with random access to their input tapes, for any
ffl ? 0 and any log 2 n - S(n) - n. Their algorithm makes general use of its
space resource, i.e., it is not used as a pushdown store, or even as a stack (in
the stack automaton sense; Ginsburg, Greibach, and Harrison [16]).
The goal of the remainder of this section is to sketch an improvement to
our algorithm to achieve time bounds matching those of von Braunmühl, et
al., while still using a DauxPDA. Our modifications borrow some of the key
ideas from the von Braunmühl, et al. constructions.
First, we outline a more direct algorithm, bypassing the simulation of a
general CROW-PRAM. In Sections 5 and 6, we presented an algorithm for
simulating a DPDA, based on the procedures r k and L k . Our procedure r k
sets the global SUMMARY k array as a side effect, and L k reads from it. It is
easy to reformulate these procedures recursively. In a fully recursive version,
r k would return the summary information as an additional component of
its function value, and accesses to SUMMARY k in L k would be replaced by
appropriate calls to r k , to (re-)compute the desired stack summaries.
Recursive procedures have a straightforward implementation on a space-bounded
deterministic auxiliary PDA. The auxPDA's work tape needs to
be long enough to hold the local variables of a procedure, and the pushdown
height must be d times as large, where d is the depth of recursion, to hold
d "stack frames", each holding copies of the local variables, return address,
etc.
For our procedures, the local variables consist of a few integers plus a
bounded number of surfaces, requiring O(log n) space. The recursion depth is
at most dlog 2 cne. Thus, our procedures can be implemented on a DauxPDA
using space O(log n) and pushdown height O(log 2 n). Furthermore, for our
procedures, each level k + 1 procedure makes a bounded number of calls
on level k procedures. Since the depth of recursion is O(log n), the total
number of calls is at most (O(1)) O(log n) = n O(1) . Exclusive of recursive calls,
each procedure takes time O(log n) to manipulate the surfaces, etc., plus,
if necessary O(n) to read inputs. Thus, the total time for the algorithm is
polynomial.
The main idea in improving the time bound is to generalize the construction
in Section 5 to give, for any integer d - 2, procedures r k d , L k d , etc., that
reflect computations of length at least d k , rather than 2 k as before. This
is easily done with the machinery we have already developed. For example,
r k+1 d is basically the d-fold composition of L k d (r k d (\Delta)) with itself. Each level
k + 1 procedure makes only O(d) calls on level k procedures. Thus, the number
of recursive calls, which is the main component of the running time, will be
(O(d)) log d (cn) = n 1+ffl for a suitable constant choice of d.
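Spelling the bound out (our own arithmetic, with a constant a bounding the number of level-k calls made by each level-(k + 1) procedure):

\[
  (ad)^{\log_d (cn)} = (cn)^{\log_d (ad)} = (cn)^{1 + \log_d a}
  = O\!\left(n^{1+\epsilon}\right) \quad \text{whenever } d \ge a^{1/\epsilon}.
\]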
Again, to keep the induction simple, we can arrange that the stacks that
need to be considered are suffixes of those built by r k+1 d (u), which turn out
to be the concatenation of suffixes of at most d stacks built by r k d (v), for
various v's. As before, it is important that the list of these v's provides a
succinct but useful "summary" of the stack contents.
One final refinement of this idea is to simulate S(n) steps of the DPDA
in the base case of our procedures, rather than just one step. Then r k d will
simulate at least S(n) \Delta d k steps.
Implementation of these procedures on a DauxPDA with O(log n) work
tape and O(log 2 n) stack height is straightforward, as before.
Random access to the input tape is useful in our algorithm and in von
Braunm-uhl, et al.'s for the following reason. Simulation of pop moves requires
recomputation of portions of the stack, necessitating access to the
portions of the input read during the corresponding push moves. With ordinary
sequential access to the input tape, even though repositioning the tape
head may be
time-consuming (\Omega(n)), von Braunmühl, et al. show that DCFL
recognition is possible in simultaneous space S(n) and time O(n 2 =S(n)), for
log 2 n - S(n) - n. This is provably optimal. Our techniques appear likely
to be useful in this case as well, although we have not pursued this.
Acknowledgements
We thank Michael Bertol, Philippe Derome, Faith Fich, Klaus-Jörn Lange,
Prabhakar Ragde, and Marc Snir for careful reading of early drafts, and for
useful discussions. Special acknowledgement is due Allan Borodin, without
whom we would never have begun this research.
--R
A comparison of shared and nonshared memory models of parallel computation.
Cache coherence protocols: Evaluation using a multiprocessor simulation model.
The complexity of short two-person games
Characterizations of pushdown machines in terms of time-bounded computers
Deterministic CFL's are accepted simultaneously in polynomial time and log squared space.
Towards a complexity theory of synchronous parallel compu- tation
Upper and lower time bounds for parallel random access machines without simultaneous writes.
Parallel pointer machines.
Indirect addressing and the time relationships of some models of sequential computation.
Hardware complexity and parallel computa- tion
Pointers versus arithmetic in PRAMs.
Parallel random access machines with owned global memory and deterministic context-free language recognition
The complexity of computation on the parallel random access machine.
Towards understanding exclusive read.
Parallelism in random access machines.
Stack automata and com- piling
A universal interconnection pattern for parallel comput- ers
A simulation result for the auxiliary pushdown automata.
Introduction to Formal Language Theory.
Parallel time O(log n) acceptance of deterministic CFLs on an exclusive-write P-RAM
The power of parallel pointer manipulation.
Implementing Cole's parallel mergesort algorithm on owner-write parallel random access machines
Logic and Algorithmic
Fast recognition of deterministic CFL's with a smaller number of processors.
On optimal OROW-PRAM algorithms for computing recursively defined functions
PRAM's towards realistic parallelism: BRAM's.
CREW PRAMs and decision trees.
Restricted CRCW PRAMs.
A characterization of the power of vector machines.
The owner concept for PRAMs.
On the recognition of context-free languages
Parallel time O(log n) recognition of unambiguous context-free languages
Simulation of parallel random access machines by circuits.
On the tape complexity of deterministic context-free lan- guages
Synchronous parallel computation - a survey
The recognition of deterministic CFL's in small time and space.
The Complexity of Parallel Computations.
--TR
Cache coherence protocols: evaluation using a multiprocessor simulation model
Upper and lower time bounds for parallel random access machines without simultaneous writes
Parallel RAMs with owned global memory and deterministic contex-free language recognition
Parallel time <italic>O</> (log <italic>n</>) recognition of unambiguous context-free languages
Parallel time <italic>O</>(log <italic>n</>) acceptance of deterministic CFLs on an exclusive-write P-RAM
The power of parallel pointer manipulation
The complexity of short two-person games
Toward understanding exclusive read
The owner concept for PRAMs
CREW PRAMs and decision trees
Fast recognition of deterministic cfl''s with a smaller number of processors
Restricted CRCW PRAMs
Parallel pointer machines
Pointers versus arithmetic in PRAMs
Stack automata and compiling
Characterizations of Pushdown Machines in Terms of Time-Bounded Computers
On the Tape Complexity of Deterministic Context-Free Languages
A universal interconnection pattern for parallel computers
Introduction to Formal Language Theory
Data-Independences of Parallel Random Access Machines
PRAM''s Towards Realistic Parallelism
Parallel Merge Sort on Concurrent-Read Owner-Write PRAM
Parallelism in random access machines
Deterministic CFL''s are accepted simultaneously in polynomial time and log squared space
The complexity of parallel computations
--CTR
Bertsch , M.-J. Nederhof, Fast parallel recognition of LR language suffixes, Information Processing Letters, v.92 n.5, p.225-229, December 2004 | CROW-PRAM;owner write;parallel algorithms;DCFL recognition |
331778 | Human-guided simple search. | Scheduling, routing, and layout tasks are examples of hard operations-research problems that have broad application in industry. Typical algorithms for these problems combine some form of gradient descent to find local minima with some strategy for escaping nonoptimal local minima and traversing the search space. Our idea is to divide these two subtasks cleanly between human and computer: in our paradigm of human-guided sample search the computer is responsible only for finding local minima using a simple search method; using information visualization, the human identifies promising regions of the search space for the computer to explore, and also intervenes to help it escape nonoptimal local minima. This is a specific example of a more general strategy, that of combining heuristic-search and information-visualization techniques in an interactive system. We are applying our approach to the problem of capacitated vehicle routing with time windows (CVRTW). We describe the design and implementation of our initial prototype, some preliminary results, and our plans for future work. | Introduction
Most previous research on scheduling, routing, and lay-out
problems has focused on developing fully automatic
solution methods. There are, however, at least two reasons
for developing cooperative, interactive systems for
optimization problems like these. First, human users
may have knowledge of various amorphous real-word
constraints and objectives that are not represented in
the objective function given to computer algorithms. In
vehicle-routing problems, for example, human experts
may know the flexibility or importance of certain cus-
tomers, or the variability of certain routes. The second
reason to involve people in the optimization process
is to leverage their abilities in areas in which humans
(currently) outperform computers, such as visual
perception, learning from experience, and strategic as-
sessment. Although both motivations seem equally im-
portant, we have used the second, more quantitative
consideration to drive our current round of research.
In this paper, we present a new cooperative
paradigm for optimization, human-guided simple search
(HuGSS). In our current framework, the computer performs
a very simple, hill-climbing search. One or more
people interactively "steer" the search process by repeatedly
initiating focused searches, manually editing
solutions, or backtracking to previous solutions. When
invoking a focused search, the user determines which
modifications to the current solution should be con-
sidered, how to evaluate them, and what type of hill-climbing
search to use.
We have designed and implemented a prototype system
that supports HuGSS for the capacitated-vehicle-
routing-with-time-windows (CVRTW) problem. Below,
we describe the CVRTW problem and our prototype,
and report results from 48 hours of controlled testing
with our system.
Application
Problem Description and Definitions
We chose vehicle routing as our initial problem domain
for three reasons: it is commercially important;
it has a rich research history, which facilitates comparison
with previous work; and routing problems are
ones for which the human capabilities of vision, learn-
ing, and judgment should be useful. In the CVRTW
problem (Solomon 1987), trucks deliver goods from a
single central depot to customers at fixed geographic
locations. Each customer requires a certain quantity of
goods, and specifies a time window within which delivery
of the goods must commence. All trucks have
the same capacity, and travel one unit of distance in
one unit of time. Delivery takes a constant amount of
time, and each customer can receive only one delivery.
All trucks must return to the depot by a fixed time.
A solution to a CVRTW problem is an ordered list of
customers assigned to each truck, and is feasible if it
satisfies all the constraints. The optimization problem
is first to minimize the number of trucks required to
construct a feasible solution; and second to minimize
the total distance traveled by the trucks.
As we describe below, users can force the system to
consider infeasible solutions. Thus we needed to extend
the classical objective function for CVRTW to rank infeasible
as well as feasible solutions. We define the maximum
lateness of a truck as the maximum tardiness
with which it arrives at any of its customers; or if a
truck has insufficient capacity to service its customers,
we assign it an infinite maximum-lateness value. We
optimize infeasible solutions by minimizing the sum of
the maximum latenesses over all the routes. We rank
any feasible solution as better than any infeasible solution.
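As an illustration, a minimal sketch of this extended objective in Python follows (the Route/Solution fields and the ranking tuple are our own illustrative stand-ins, not the authors' C++ implementation):

    import math

    def route_max_lateness(route, capacity):
        # Infinite lateness if the truck cannot carry its customers' demand.
        if sum(c.demand for c in route.customers) > capacity:
            return math.inf
        # Otherwise, the worst tardiness over the route's customers.
        return max((arrival - c.window_end
                    for c, arrival in zip(route.customers, route.arrival_times)),
                   default=0.0)

    def solution_rank(solution, capacity):
        # Feasible solutions always rank ahead of infeasible ones; among
        # feasible solutions, fewer trucks win, then lower total distance;
        # among infeasible ones, lower summed maximum lateness wins.
        lateness = sum(max(0.0, route_max_lateness(r, capacity))
                       for r in solution.routes)
        if lateness == 0.0:
            return (0, len(solution.routes), solution.total_distance())
        return (1, lateness)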
We define a 1-ply move as the transfer of a customer
from its current route onto another route. Such a move
requires that both routes be re-optimized for distance
(if feasible) or maximum lateness (if infeasible). 1 An
n-ply move is simply a combination of n 1-ply moves.
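A sketch of 1-ply move enumeration, with placeholder data structures (an n-ply move is then any combination of n such transfers):

    def one_ply_moves(solution):
        # Every transfer of one customer from its current route to another.
        # Adopting a move requires re-optimizing both affected routes
        # (for distance if feasible, or for maximum lateness if not).
        for src in solution.routes:
            for customer in src.customers:
                for dst in solution.routes:
                    if dst is not src:
                        yield (customer, src, dst)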
HuGSS for CVRTW
In our system, the user controls the optimization process
by performing the following three actions:
1. Edit the current solution by making a 1-ply move.
2. Invoke a focused local search, starting from the current
solution. The user controls which n-ply moves
are considered, how they are evaluated, and what
type of search is used.
3. Revert to an earlier solution, or to an initial seed
solution generated randomly prior to the session.
We now describe each type of action in the context
of our implemented system, followed by a description
of the visualization and interface (see Figures 1 and 2)
that support these actions.
Manual edits: To edit the current solution manu-
ally, the user simply selects a customer and a route.
The system transfers the customer to the route and
re-optimizes both affected routes. Moving the last customer
off a truck's route eliminates that truck. Also,
the user can create infeasible solutions by assigning customers
with conflicting constraints, or with too much
total demand, to a single truck.
Focused searches: The principal feature of our system
is the following set of methods for allowing users
to repeatedly invoke deep, focused searches into regions
of the search space they feel are promising. The user
determines which moves the hill-climbing engine will
evaluate by:
• Setting a priority (high, medium, or low) for each
customer. The user controls which customers can
be moved, and the routes onto which they can be
moved, by assigning priorities to them. The search
engine will only consider moving high-priority customers,
and only consider moving them onto routes
that have no low-priority customers. For example,
the user can restrict the search engine to exchanging
customers between a pair of routes by setting all
the customers on those routes to high priority and all
other customers to low priority. (A small sketch of this
filter appears after this list.)
[Footnote 1: Computing the route for a truck once customers have
been assigned to it is an instance of the Traveling Salesman
Problem with Time Windows. Although an NP-hard problem,
the instances that arose in our experiments are small
enough that exhaustive search is practical.]

Figure 1: The Optimization Table.
• Deciding which n-ply moves (1-ply to 5-ply) to enable.
In general, deeper searches are more likely to produce
good results, but take more time.
• Setting an upper bound on the number of moves that
the computer can consider. The search is stopped
when all enabled moves have been considered, or
when this user-supplied upper limit is reached.
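The priority rules above reduce to a simple predicate; a minimal sketch, with illustrative priority encodings:

    HIGH, MEDIUM, LOW = 2, 1, 0

    def move_allowed(customer, target_route, priority):
        # Only high-priority customers may move, and only onto routes
        # that contain no low-priority customers.
        if priority[customer] != HIGH:
            return False
        return all(priority[c] != LOW for c in target_route.customers)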
Focusing the search dramatically reduces the number
of moves that the search engine evaluates. In one example
from our experiments, we focused the search on
two of 12 routes (20 of 100 customers), which decreased
the number of 1-ply moves considered by a factor of 30,
2-ply moves by a factor of 222, and 3-ply moves by a
factor of 18,432.
In addition to determining which moves are evalu-
ated, the user determines how they are evaluated by selecting
an objective function. We currently support two
objective functions: the standard CVRTW objective
function modified to assess infeasible solutions; and a
function we call minimize-routes, which removes 2×len^2
from the cost attributed to each route that contains
len customers. The idea behind this objective function
is to encourage a short route to become shorter,
even if it increases the total distance traveled, in the
hope of eventually eliminating that route.
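Assuming the bonus takes the form 2×len^2 per route (len being the number of customers on the route), this behavior follows: moving a customer from a route with a customers to one with b >= a customers changes the total bonus by 4(b-a+1) > 0, so customers tend to drain out of short routes. A sketch under that assumption:

    def minimize_routes_cost(solution, base_cost):
        # Standard cost minus a per-route bonus of 2*len**2; longer routes
        # earn disproportionately larger bonuses, so shrinking an already
        # short route (and growing a longer one) lowers the total cost.
        bonus = sum(2 * len(r.customers) ** 2 for r in solution.routes)
        return base_cost(solution) - bonus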
Finally, the user can select between greedy or
steepest-descent search mode. In greedy mode, the
search engine immediately adopts any move that improves
the current solution under the given objective
function. It considers 1-ply moves (if enabled) first,
then 2-ply moves (if enabled), and so on. Within a ply,
the moves are evaluated in a random order. As soon
as a move is adopted, the search engine begins again to
evaluate 1-ply moves.
In steepest-descent mode, moves are considered in
the same order as in greedy mode, but only the best
move is adopted. The best move is defined as the one
that decreases the cost of the solution the most, under
the given objective function. If no move decreases the
cost of the solution, then the best move is the one that
increases the cost the least. [Footnote 2: However, we never
adopt a move that increases the infeasibility of a solution.
Finding and ranking all infeasible moves is not worth the
added computational expense.] Making the least-bad move
provides useful information to the user, and can always
be undone by reverting to the previous solution.

Figure 2: A snapshot of our interface.
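The two search modes can be summarized in one sketch; enumerate_moves (yielding each enabled ply's candidate moves) and cost (the selected objective) are assumed helpers:

    import random

    def hill_climb_step(solution, enumerate_moves, cost, mode="greedy"):
        # Greedy: adopt the first improving move (moves within a ply are
        # shuffled); steepest: scan everything and return the best move,
        # or the least-bad one if nothing improves. (The real system also
        # refuses moves that would increase infeasibility.)
        current = cost(solution)
        best_move, best_cost = None, None
        for ply_moves in enumerate_moves(solution):   # 1-ply, then 2-ply, ...
            random.shuffle(ply_moves)
            for move in ply_moves:
                c = cost(move.apply(solution))
                if mode == "greedy" and c < current:
                    return move                        # adopt immediately
                if best_cost is None or c < best_cost:
                    best_move, best_cost = move, c
        return best_move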
Switching among candidate solutions: The third
type of action the user can perform is to switch candidate
solutions, either to backtrack to a previous solu-
tion, or to load a precomputed, "seed" solution. The
seed solutions are generated prior to the session using
our hill-climbing search engine. They are intended to
be used both as starting points for finding more optimal
solutions and to give users a sense of how various
combinations of customers can be serviced.
Interface and Implementation
For our initial implementation we have used a tabletop
display, which we call the Optimization Table (see Figure
1). We project an image down onto a whiteboard.
This allows users to annotate candidate solutions by
drawing or placing tokens on the board, a very useful
feature. In addition, several users can comfortably use
the system together.
For this kind of problem, creating an effective visualization
is an intrinsic challenge in bringing the human
into the loop. Figure 2 shows our attempt to convey
the spatial, temporal, and capacity-related information
needed for CVRTW. The central depot is the black circle
at the center of the display. The other circles represent
customers. The pie slices in the customer circles
indicate the time windows during which they are willing
to accept delivery. The truck routes are shown by
polylines, each in a different color. At the user's op-
tion, the first and last segments of each route can be
hidden, as they are in the figure, to avoid visual clutter
around the depot. The search-control operations
described in the previous subsection are supported by
mouse operations and pull-down menus. Detailed information
about individual customers and trucks can also
be accessed through standard interface widgets.
The interface was written in Tcl, and the hill-climbing
algorithm in C++. We use a branch-and-
bound algorithm to optimize truck routes during move
evaluation. We carefully crafted several pruning rules
and caching procedures to streamline this algorithm.
Experimental Investigation
Four test subjects participated in our experiments.
Three of them are authors of this paper. The fourth
tester is a Ph.D. student unaffiliated with this project,
who received five hours of training prior to his first test.
The Solomon datasets (Solomon 1987) were our
source of benchmark CVRTW problems. This corpus
consists of 56 problem instances, each with 100 customers,
divided into three categories according to the spatial
distribution of customers: C-type (clustered), R-
type (random), and RC-type (a mix of the two.) There
are two problem sets for each category: the C1, R1,
RC1 sets have a narrow scheduling horizon, while the
C2, R2, and RC2 sets have a large scheduling horizon.
As we developed and refined our system, we tested
users informally on a selection of R1 and RC1 problems.
In the second, more controlled, phase of experimenta-
tion, we ran two tests on each of the RC1 problems.
During this phase, subjects worked only on problem instances
to which they had no previous exposure. In
each test, the user spent 90 minutes working on the
problem without reference to the precomputed seed so-
lutions. Then, after an arbitrarily long break, the user
spent another 90 minutes working on the same problem,
this time with the precomputed seed solutions available
for perusal. We recorded logs for a total of 79.4 hours
of test sessions, 48 hours of which were the controlled
experiments.
We generated the seed solutions using the settings
we found to be the most effective on a small sample of
the Solomon problem instances. In particular, we used
greedy search with 1-ply and 2-ply moves enabled and
all customers set to high priority; we used the minimize-
routes objective function, and started the search from
an initial solution in which each customer is assigned
its own truck, and searched until we reached a local
optimum. Multiple runs produce varied results due to
the random order in which moves are considered in the
greedy search. We ran the algorithm repeatedly until
we had generated 1000 solutions or a 10-hour time limit
was reached. On average, it took 8.4 hours to generate
the seed solutions for a problem. We ran all our experiments
on a 500 MHz PC.
Observations
User strategies: During a session, the user repeatedly
invokes the hill-climbing engine to perform focused
searches. This simple mechanism supports a surprisingly
broad range of optimization strategies. For exam-
ple, consider the goal of truck reduction. A user might
start by browsing the precomputed seed solutions for
one with a "vulnerable" route, e.g., one that might be
eliminated because it has a small number of loosely
constrained customers, and nearby routes that have
available capacity and slack in their schedules. Having
identified such a solution, the user can shift customers
off the vulnerable route by invoking a steepest-descent
search: setting the route's customers to high priority
and the customers of nearby routes to medium priority
will cause the search algorithm to return the least costly
feasible move of a customer off the vulnerable route and
onto one of the nearby routes. An alternative strategy
for shortening and eliminating routes is to set all the
customers in the neighborhood of a vulnerable route
to high priority, and to use the minimize-routes objective
function and a high search ply: a search with these
parameters would consider compound moves, involving
multiple customers on different routes, that have the
net effect of shortening the vulnerable route. A third
alternative, which users often had to resort to, is to
manually move a customer off a vulnerable route, even
if the move produces an infeasible solution; fixing the
resulting infeasibility then becomes a subproblem for
which there is another suite of strategies.
User behaviors: During test sessions, our users spent
more time thinking than the search algorithm spent
searching. On average, the search algorithm was in use
31% of the time; the range was 11% to 61%. Solution
improvements were made throughout the sessions.
Averaging over all the test runs, a new best solution
was found a little over five times per hour. Of course,
improving the current solution was much more common
than finding a new best solution. Focused searches
yielded an average of 23 improvements per hour, and
manual adjustment yielded an average of 20 improvements
per hour.
Tables 1 and 2 show what features of the system were
used, as well as how usage varied among the test sub-
jects. (Note that some of the variation is very likely
due to differences in the nature of the individual prob-
lems.) Three of the four users primarily used steepest-descent
search instead of greedy search. We feel that
steepest-descent mode was preferred largely because it
makes the least-bad move if no good move is available,
which turned out to be a very useful feature for shifting
customers onto or off of specific routes. The minimize-
routes objective function was almost never used. Everyone
spent at least half of the time working on infeasible
solutions. All four users made substantial use of 1-ply,
2-ply, and 3-ply searches, but only two users frequently
used 5-ply search. There was a wide range among the
users in terms of how often the different priorities were
used, and in how many searches were invoked, on aver-
age, per hour.
During the controlled experiments, each user did better
than some other user on at least one data set. The
one user who was not an inventor of the system (User
D in the tables) turned out to have the best record. He
generated three of the eight best results on the RC1
problem instances, which are shown in Table 3.
Quantitative results
HuGSS vs. unguided simple search: Our results
show that human guidance provided a significant boost
to the simple search in almost all cases. Table 3 compares
the best scores on the RC1 datasets found by the
hill-climbing engine alone with the best scores found
using the HuGSS system. [Footnote 3: To interpret the scores
correctly, it is important to recall that the primary objective
is to minimize the number of trucks, which often works against
the secondary concern of minimizing total distance traveled.
Additionally, it is standard practice in the literature to report
results by averaging the trucks and distances over many problem
instances.] For the hill-climbing en-
Table 1: User styles: action and mode. For each user, the
table gives moves per hour, searches per hour, the percentage
of searches using steepest descent, and the percentage of time
spent in infeasible space.
Table 2: User styles: depth and focus. The numbers indicate
the fraction of customers assigned high, medium,
or low priorities, and the frequency with which the various
ply moves were enabled. E.g., on average, subject
A assigned 34% of the customers to have high priority,
and included 3-ply moves 87% of the time.
gine, the scores are the best found in approximately 100
hours of computation on a 500 MHz Pentium PC. The
scores for the HuGSS system are the best found in at
most 10 hours of precomputation and 10 hours of guided
searching. (The table includes scores from all logged
testing and training sessions, as well as those from the
controlled experiments.) On three of the problems, the
human-guided solution uses one fewer truck; on four of
the five remaining problems, the human-guided solution
has a lower distance value. The only dataset on
which the unguided hill-climbing search prevailed was
RC101, which is the most heavily constrained of all the
problems. The very narrow time windows facilitate extremely
fast computer searches (a new local optimum
is found every six seconds), while making visualization
more difficult.
The HuGSS results in Table 3 reflect the combined
benefit of precomputed seed solutions and human-
guided search. To tease these two factors apart, we
considered the solutions produced by the first 90 minutes
of each controlled experiment, during which pre-computed
seed solutions were not available to the user.
In Table 4 we report these results in two ways: the
average of the two scores available for each dataset represents
what can be achieved with 1.5 hours of pure
guided search (i.e., guided search without the benefit of
precomputed seed solutions); the best of the two scores
for each dataset represents what can be achieved in 3.0
hours of pure guided search, albeit using two people
for separate 1.5-hour sessions. The table also shows
the average results obtained by the hill-climbing engine
without human guidance. [Footnote 4: We estimated the average
value of computer-only search for N hours of computation by
taking the best score found in N hours of computation randomly
sampled from the 100 hours of unguided search we recorded for
each problem instance. We repeated this 1000 times for each
problem and report the average result.]

          Simple search    Human-guided search   Best published
          Veh.   Dist.     Veh.   Dist.          Veh.   Dist.
Ave.      12.0   1373      11.63  1397           11.50  1364

Table 3: Best solutions found during 800 hours of simple
search compared to 67.2 hours of precomputation and
79.4 hours of human-guided search. The best published
solutions are shown for comparison.

From this data we can con-
clude that 1.5 hours of pure human-guided searching is
comparable to about 5.0 hours of unguided hill climb-
ing. However, 3.0 hours of pure guided searching is
better than 20.0 hours of unguided hill climbing, which
indicates that additional time is of more benefit to the
guided regime than to the unguided one. The average
score for 3.0 hours of guided search with precomputed
seed solutions is also shown: the seed solutions impart
a distinct benefit, but are not the sole factor behind the
dominance of HuGSS over unguided simple search.
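Footnote 4's estimation procedure amounts to a small Monte Carlo resampling; one plausible rendering, assuming a log of (duration, score) pairs for the recorded local optima:

    import random

    def estimate_best_score(runs, hours, trials=1000):
        # Repeatedly draw recorded local optima until the time budget is
        # spent, keep the best (lowest) score, and average over trials.
        total = 0.0
        for _ in range(trials):
            pool = runs[:]                  # (duration_hours, score) pairs
            random.shuffle(pool)
            budget, best = hours, float("inf")
            for duration, score in pool:
                if duration > budget:
                    break
                budget -= duration
                best = min(best, score)
            total += best
        return total / trials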
HuGSS vs. state-of-the-art techniques: The
Solomon datasets are a very useful benchmark for comparing
all the different heuristic-search techniques that
have been applied to the CVRTW problem, including
tabu search and its variants, evolutionary strategies,
constraint programming, and ant-colony optimization.
Table 4 includes performance data for these techniques
and others. The scores we obtained with the full HuGSS
approach (i.e., with precomputed seed solutions) are
competitive with those obtained by the state-of-the-
art techniques, dominating several of them, and being
clearly dominated only by the results from a recent genetic
algorithm (Homberger & Gehring 1999).
However, the full HuGSS technique uses between one
and two orders of magnitude more computational effort
than other techniques. Other algorithms may benefit
from a comparable amount of computation, but there
is not enough information in the cited papers to accurately
assess how much benefit to expect, if any.
To test whether the HuGSS approach for this problem
can be effective with less computational effort, we
ran a pilot set of experiments with the latest version of
our system (its improvements over the system described
above are listed in the concluding section of this paper).
In these experiments, we used only 90 minutes of pre-computation
and 90 minutes of guided search. We ran
one test per problem, with three of the test subjects
from the first set of experiments. (In some cases, the
subjects worked on a problem instance that they had
worked on some months earlier.) As shown in Table 4,
we achieved comparable results with our new system
with significantly less computational and human effort,
thus closing the gap with the state-of-the-art systems.
In summary, these results suggest that human guidance
can replace the painstakingly crafted, problem-specific
heuristics that are the essence of other approaches
without significant compromise in the quality
of the results.
                                  Time                                  Veh.   Dist.
Our hill-climbing                 1 hour                                12.35  1424
search engine alone               2 hours                               12.23  1416
                                  5 hours                               12.15  1403
                                  8.4 hours                             12.13  1390
                                  20 hours                              12.06  1388
HuGSS (w/out seeds)               1.5 hours                             12.13  1432
                                  3 hours                               12.00  1413
HuGSS (with seeds)                precomputation and 3 hours guided     11.88  1389
                                  search on 500 MHz machine
HuGSS (pilot experiments          90 min. precomputation and 90 min.    11.88  1380
with newest system)               guided search on 500 MHz machine
Carlton'95 (a)                    -                                     13.25  1402
Rochat and Taillard'95            44 min. on 100 MHz machine            12.38  1369
Chiang and Russell                -                                     11.88  1397
Taillard et al.'97                3.1 hours on 50 MHz machine           11.88  1381
De Backer and Furnon              -                                     14.25  1385
                                  1 hour on 100 MIPS machine            12.00  1361
                                  - hours on 100 MIPS machine           12.00  1360
Cordone and Wolfler Calvo         12.1 min                              12.38  1409
Gambardella and Taillard'99 (c)   167 MHz, 70 Mflops Sun UltraSparc     -      -
Kilby, Prosser and Shaw           48.3 min. on 25 Mflops/s machine      12.12  1388
Homberger and Gehring (b)         5 hours on 200 MHz machine            11.5   1407
Best published solutions          about 15 years on multiple machines   11.5   1364

(a) As reported by (Taillard et al. 1997).
(b) As reported in (Homberger & Gehring 1999).
(c) As reported in (Gambardella, Taillard, & Agazzi 1999).

Table 4: Reported results. The numbers are averages
over the eight instances in Solomon's RC1 problem set.
Versatility
Because the user is directing the search, our system can
be used for tasks other than the classic CVRTW optimization
task. For example, it can be used to balance
routes. Many of the best solutions found by state-of-
the-art methods might be unsuitable for real use because
they assign only one or two customers to a truck.
The users of our system can direct the hill-climbing engine
to find the lowest cost way of moving N customers
to a particular truck, by only enabling N-ply moves
and setting the priorities so that the search engine only
considers moving customers onto the target truck.
Alternatively, it may be desirable to have a lightly
loaded truck as a backup if other trucks encounter significant
delays. This can be accomplished by the same
means used in attempting to eliminate a truck. Sim-
ilarly, in the case where there simply are not enough
trucks to satisfy all the customers' needs, our system
can be used to explore various infeasible options. It is
often easy to shift the infeasibility around the board, if
in fact some customers are more flexible than others.
Of course, other algorithms might be modified to
solve any of these tasks. The ability of our system
to handle these tasks without any recoding (or even
recompiling!) suggests that it will be more effective
at handling new tasks as they arise. Furthermore, it
demonstrates that our system can be used to pursue an
objective function that is known by the human users
but is difficult to describe to the computer algorithm.
In this regard, HuGSS is distinctly more versatile than
the algorithms cited in Table 4.
Related Work
The HuGSS paradigm is one way of dividing the work
between human and computer in a cooperative optimization
or design system. Other interface paradigms
organize the cooperation differently.
In an iterative-repair paradigm, the computer detects
and resolves conflicts introduced by the human user. In
a system for scheduling space-shuttle operations (Chien
et al. 1999), the computer produces an initial schedule
that the user iteratively refines by hand. The user can
invoke a repair algorithm to resolve any conflicts introduced.
Another way for the computer to address conflicts or
constraint violations is to not let the user introduce
them in the first place. Constraint-based interfaces
are popular in drawing applications, e.g., (Nelson 1985;
Gleicher & Witkin 1994; Ryall, Marks, & Shieber 1997).
Typically the user imposes geometric or topological
constraints on a nascent drawing such that subsequent
user manipulation is constrained to useful areas of the
design space.
The interactive-evolution paradigm offers a different
type of cooperation: the computer generates successive
populations of novel designs based on previous ones,
and the user selects which of the new designs to accept
and which to reject (Kochhar & Friedell 1990;
Sims 1991; Todd & Latham 1992).
A related but very different line of inquiry takes
human-human collaboration as the model for cooperative
human-computer interaction, e.g., (Ferguson &
Allen 1998). The emphasis in this work is on mixed-initiative
interaction between the user and computer
in which the computer has some representation of the
user's goals and capabilities, and can engage the human
in a collaborative dialogue about the problem at hand
and approaches to solving it.
The HuGSS paradigm differs significantly from
the iterative-repair, constraint-based, and interactive-
evolution paradigms in affording the user much more
control of the optimization/design process. By setting
customer priorities and specifying the scope of the local
search, the user decides how much effort the computer
will expend on particular subproblems. And there are
no dialogue or mixed-initiative elements in our system:
the user is always in control, and the computer has no
representation of the user's intentions or abilities.
Other researchers have also allowed a user to interact
with a computer during its search for a solution to
an optimization or constraint-satisfaction problem, e.g.,
(Choueiry & Faltings 1995; Smith, Lassila, & Becker
1996); one group has even applied this idea to a vehicle-
routing problem (Bracklow et al. 1992). We believe,
however, that HuGSS embodies a stronger notion of human
guidance than previous efforts. Furthermore, our
work is the first rigorous investigation of how human
guidance can improve the performance of an optimization
algorithm.
Future Work And Conclusions
The contributions of this work are novel mechanisms for
the interactive control of simple search, an application
of these mechanisms to a vehicle-routing problem, and
an empirical study of that application.
We are currently making our hill-climbing engine
more efficient and our interface more interactive. The
user now receives feedback from the hill-climbing engine
that indicates the current depth of the search and
the best move found to that point. The user can halt
the search at any time, at which point the system returns
the best solution found so far. This gives the
user a much higher degree of control of the system,
effectively removes the need to decide the search depth
and the maximum number of moves to evaluate in advance,
and blurs the distinction between greedy and
steepest-descent search. Our pilot experiments (see Table 4)
indicate that these changes greatly improve our
system.
We had two principal motivations for investigating
human-guided search: to exploit human perceptual
and pattern-recognition abilities to improve the performance
of search heuristics, and to create more versatile
tools for solving real-world optimization problems.
Our initial investigations show that human guidance
improves simple hill-climbing search to world-class levels
for at least one optimization task. We are also encouraged
by the system's pliability and transparency:
users pursued a variety of strategies, developed their
own usage styles, and were highly aware of what the
search engine was doing and why.
The separation made in HuGSS between the human's
and the computer's roles has several pleasant conse-
quences. The optimization engine is more generic and
reusable than those used in state-of-the-art, problem-specific
systems; and many of the user-interface concepts
are also easily generalized to other problems. This
raises the possibility of developing a general toolkit for
creating a family of human-guided optimization tools.
Acknowledgments
We are very grateful to Wheeler Ruml for his help in
making our experiments possible and his prowess at op-
timization, and to Kori Inkpen, Ken Perlin, Steve Pow-
ell, and Stacey Scott for their comments and discussion.
--R
Interactive optimization improves service and performance for Yellow Freight System.
A Tabu Search Approach to the General Vehicle Routing Problem.
A reactive tabu search metaheuristic for the vehicle routing problem with time windows.
Automating planning and scheduling of shuttle payload operations.
Using abstractions for resource allocation.
A heuristic for the vehicle routing problem with time windows.
TRIPS: An integrated intelligent problem-solving assistant.
Drawing with constraints.
Two evolutionary metaheuristics for the vehicle routing problem with time windows.
User control in cooperative computer-aided design.
Probabilistic diversification and intensification in local search for vehicle routing.
GLIDE: An interactive system for graph drawing.
A new local search algorithm providing high quality solutions to vehicle routing problems. APES group.
Algorithms for the vehicle routing and scheduling problems with time window constraints.
A tabu search heuristic for the vehicle routing problem with soft time windows. Transportation Science 31.
Evolutionary Art and Computers.
--TR
Algorithms for the vehicle routing and scheduling problems with time window constraints
User control in cooperative computer-aided design
Artificial evolution for computer graphics
Drawing with constraints
An interactive constraint-based system for drawing graphs
Juno, a constraint-based graphics system
Evolutionary Art and Computers
Column-Based Strip Packing Using Ordered and Compliant Containment
| combinatorial optimization;operations research;interaction systems;vehicle routing;computer-human interaction;information visualization
332405 | Two-handed input using a PDA and a mouse. | We performed several experiments using a Personal Digital Assistant (PDA) as an input device in the non-dominant hand along with a mouse in the dominant hand. A PDA is a small hand-held palm-size computer like a 3Com Palm Pilot or a Windows CE device. These are becoming widely available and are easily connected to a PC. Results of our experiments indicate that people can accurately and quickly select among a small number of buttons on the PDA using the left hand without looking, and that, as predicted, performance does decrease as the number of buttons increases. Homing times to move both hands between the keyboard and devices are only about 10% to 15% slower than times to move a single hand to the mouse, suggesting that acquiring two devices does not cause a large penalty. In an application task, we found that scrolling web pages using buttons or a scroller on the PDA matched the speed of using a mouse with a conventional scroll bar, and beat the best two-handed times reported in an earlier experiment. These results will help make two-handed interactions with computers more widely available and more effective. | INTRODUCTION
Many studies of two-handed input for computers have often
shown advantages for various tasks [1, 3, 7, 9, 15].
However, people rarely have the option of using more than
just a mouse and keyboard because other input devices are
relatively expensive, awkward to set up, and few applications
can take advantage of them. However, increasing
numbers of people now do have a device that they carry
around that could serve as an extra input device for the
computer: their Personal Digital Assistant (PDA). PDAs,
such as 3Com's Palm Pilots and Microsoft's Windows CE
devices, are designed to be easily connected to PCs and
have a touch-sensitive screen which can be used for input
and output. Furthermore, newer PDAs, such as the Palm V
and the HP Jornada 420, are rechargeable, so they are supposed
to be put in their cradles next to a PC when the user
is in the office. Therefore, if using a PDA in the non-dominant
hand proves useful and effective, it should be
increasingly easy and sensible to deploy and configure using
hardware devices that users already have.
Another advantage of PDAs over the input devices studied
in previous experiments is that they are much more flexi-
ble. PDAs have a display on which virtual buttons, knobs
and sliders can be displayed, and they can be programmed
to respond to a wide variety of behaviors that can be well-matched
to particular tasks. However, a disadvantage is
that the controls on the PDA screen are virtual, so users
cannot find them by feel. Research is therefore needed to
assess how well the PDA screen can work as a replacement
for other input devices that have been studied for the left
hand.
This paper reports on several experiments that measure
various aspects of using a PDA as an input device in the
non-dominant hand. Two experiments are new and are designed
to measure the parameters of using a PDA. One
experiment repeats an earlier study but uses a PDA in the
non-dominant hand. Since the actual pragmatics of input
devices can have a large impact on their effectiveness [2,
8], we wanted to determine whether the results seen in
prior experiments would also apply to using PDAs.
In summary, the results are:
• People can quickly and reliably hit large buttons drawn
on the PDA with their left hands without looking. 99%
of the button taps were correct on buttons that are 1-inch
square in a 2x2 arrangement. With a larger number of
smaller buttons, the accuracy significantly decreases:
95% were correct for 16 buttons that are 1/2 inch on a
side arranged 4x4. The time from stimulus to button tap
was about 0.7 sec for the large buttons and 0.9 seconds
for the smaller buttons.
• In a task where the subjects had to move both hands
from the keyboard to the PDA and the mouse and then
back, we found that it took an average of 0.791 seconds
to move both hands from the devices to the keyboard.
This was about 13% longer than moving one hand from
the mouse to the keyboard (which took 0.701 sec).
Moving to a PDA and mouse from the keyboard took an
average of 0.838 seconds, which is about 15% longer
than moving one hand to the mouse (0.728 seconds).
• In a repeat of the experiment reported in [15], subjects
were able to scroll large web pages and then select a link
on the page at about the same speed using buttons or a
scroller on a PDA compared to using the mouse with a
conventional scroll bar. The times we found for scrolling
with buttons on the PDA were faster than any of the
times in the earlier study, including the 2-handed ones.
RELATED WORK
There have been many studies of using input devices for
computers in both hands, but none have tested PDAs in the
left hand, and we were unable to find measurements of
homing times from the keyboard to devices for two-handed
use.
One of the earliest experiments measured the use of two
hands in a positioning and scaling task and for scrolling to
known parts of a document [3]. This study found that people
naturally adopted parallel use of both hands and could
scroll faster with the left hand. Theoretical studies show
that people naturally assign different tasks to each hand,
and that the non-dominant hand can support the task of
the dominant hand [6]. This has motivated two-handed
interfaces where the non-dominant hand plays a supporting
role, such as controlling other drawing tools [9] and
adjusting translation and scaling [3, 15]. Other studies
have tested two-handed use for 3D interaction [1, 7] and
found the extra input to be useful.
There has been prior work on using PDAs at the same time
as regular computers for various tasks including meeting
support [11], sharing information [12], and to help individuals
at their desks [10], but we found no prior work on
measuring performance of non-dominant hand use of
PDAs.
Two new studies were performed. In the first, the subjects
did five tasks in a row. The first task was a typing test to
see how fast the subjects could type. Next, they performed
a button size task to measure the error rates and speeds
when tapping on different size buttons on the PDA. Next,
the subjects performed a homing speed task where we
measured the how quickly the subjects moved among the
keyboard and the devices. Finally, they performed a
scrolling task using a variety of devices, which is a repeat
of an earlier experiment [15]. The subjects reported a
number of problems with the scrolling devices on the PDA
in the last task, so we redesigned the scrolling devices, and
in a second study with new subjects, we evaluated the performance
of the new scrollers on the same task. Each of
these experiments is described below.
Apparatus
Subjects sat at a normal PC desktop computer that was
running Windows NT. On the right of the keyboard was a
mouse on a mouse pad. On the left was an IBM WorkPad
8602-30X PDA (which is the same as a Palm IIIx). In the
first study, we put the WorkPad in its cradle. Subjects
complained that the WorkPad was wobbly in its cradle, so
for the second study, the new subjects used a WorkPad
resting on a book and connected by a serial cable to the
PC. There were no further comments about the positioning
The WorkPad has a 3-inch diagonal LCD display screen
(about 2 inches on a side) which is touch sensitive. It is
160x160 pixels. Figure 1 shows a picture of the full WorkPad.
The software running on the WorkPad was the Shortcutter
program [10] that allows panels of controls to be created so
that each control sends specified events to the PC. The
software on the PC consisted of various applications specifically
created for this experiment (except in the scrolling
task, which used the Netscape browser running a custom
JavaScript program to collect the data).
Typing Test
We used a computerized typing test called "Speed Typing
Test" [14]. The subjects were asked to type a paragraph
displayed on the screen as quickly as possible.
Button Size Task
In this task, the PDA displayed between 4 and 16 buttons
in eight different arrangements: 2 rows by 2 columns, 2x3,
3x2, 2x4, 4x2, 3x4, 4x3, and 4x4 (see Figure 1). To control
for ordering effects, half of the subjects used the order
shown above, and the other half used the reverse order
(4x4 first down to 2x2 last). In the 2x2 condition, the buttons
were about one inch square, and in the 4x4, they were
about 1/2 inch square.
At the beginning of each condition, a picture was displayed
on the PC screen showing the corresponding layout
of the buttons (with the same size as the PDA). Then one
of the buttons was shaded black (see Figure 2). The subjects
were asked to tap on the corresponding button on the
WorkPad as quickly and accurately as possible with a finger
on their left hand. The stimulus button was then
cleared on the PC and the next stimulus button was shaded
500 milliseconds later. The stimuli appeared in random
order. A total of 48 stimuli were used in each condition.
Every button appeared the same number of times. For ex-
ample, for the layout of 2 rows by 2 columns, each button
appeared 12 times, while for the layout of 3 rows by 4 col-
umns, each button appeared 4 times. There was a break
after each condition. Our hypotheses were that people
could more accurately select among fewer, larger buttons,
and that people could make selections without looking at
the WorkPad.
Figure 1. Left: a picture of a Palm Pilot (the WorkPad is
similar) showing the 2x2 layout of buttons. Right: the screens
for 3x2, 2x3, 4x3 and 4x4. The other layouts are similar.
Figure 2. Part of the PC screen showing the stimulus during
the 4x3 condition of the button task.
Homing Speed Task
The purpose of this task was to measure the times to move
the hands back and forth from the keyboard to the mouse
and WorkPad as the subjects switch between two-handed
selection operation and two-handed typing. We compared
moving a single hand to and from the keyboard to moving
both hands.
There were three conditions with three trials in each. In
each trial, 14 textboxes were shown on the screen with a
label in front of each. The conditions were that the subjects
had to first select a text box by either clicking in the field
with the mouse in the usual way, tapping on a full-screen
button on the WorkPad (which therefore worked like a
"TAB" key and caused the cursor to jump to the next
field), or tap on the WorkPad and click the mouse at the
same time. In other words, the selection operation in this
last condition was like a "Shift-Click" operation in which
the button on the WorkPad was treated as a Shift key. After
the textbox was selected, the subjects typed the word
indicated on the left of the textbox. The word was either
"apple" or "peach" (in alternating order). These words
were chosen because they are easy to type and remember,
and they start and end with keys that are under the opposite
hands. The user could not exit the field until the word
was typed correctly. After typing the word correctly into
the textbox, the subject then continued to perform the same
selection-typing operation in the next textbox. The trial
ended when all 14 textboxes on the screen were filled in.
There was a break after each trial. We measured the time
from the mouse and WorkPad click to the first character
typed, and from the last character typed to the first movement
of the mouse or tap on the WorkPad. We did not
count the time spent actually typing, and we eliminated the
times for the first and last words, because they were biased
by start-up and transients.
We hypothesized that moving to the WorkPad and the
mouse would not take much longer than moving one hand
since people would move both hands at the same time. We
were also interested in the actual numbers for the time
measurements. These might be used with future models of
human performance for two-handed homing tasks.
Figure 3. (a) The button scroller on the WorkPad used in the
first experiment. (b) The Slide Scroller and (c) Rate Scroller
used in both experiments.
Scrolling Task
For this task, we were able to replicate the conditions of a
previous experiment [15] exactly. 1 The purpose of this task
was to evaluate and compare subjects' performance in
scrolling web pages in a standard browser using different
scrolling methods. The web pages contain text from an
IBM computing terminology dictionary, and each page is
about 12 screen-fulls of text. In each web page a hyperlink
with the word "Next" is embedded at an unpredictable lo-
cation. The subjects were asked to find the target hyperlink
by scrolling the web page using the different scrolling
mechanisms. Once the link was visible, they used the
mouse in the conventional way to click on it. Clicking on
the hyperlink brought the subject to the next web page. For
each condition, the subjects first performed a practice run
of pages, during which they were asked to try out the
scrolling method without being timed. Then, the subjects
did two consecutive trials of 10 pages each as fast as they
could.
[Footnote 1: Thanks very much to Shumin Zhai of IBM for
supplying the experimental material from the earlier study.]
Figure 4. (a) The revised button scroller on the WorkPad used
in the second experiment. (b) The "absolute scroller".
The condition with the fastest time in the previous experiment
used a "pointing stick" joystick to scroll, but we were
not able to reproduce this condition. [Footnote 2: We did not
have a pointing stick to test, and anyway, it would have been
difficult to connect one to the computers we had, which
illustrates one of the claims of this paper: it can be difficult
to connect multiple conventional input devices to today's
computers. Since the experimental setup was identical to the
original experiment [15], it should be valid to compare our
times with the times reported there.] The conditions we
used in our first experiment were:
• Scrolling using the mouse and the regular scroll bar.
• Scrolling using a "scroll wheel" mounted in the center
of the mouse (a Microsoft "IntelliMouse"). We were
careful to explain to the subjects the three different ways
the wheel can be used, including rolling the wheel, or
tapping or holding the wheel down to go into "scroll
mode" where the further you move the mouse from the
tap point, the faster the text scrolls. The subjects could
choose which methods to use.
• Scrolling using buttons on the WorkPad (see Figure 3a).
There were 6 buttons that scrolled up and down a line,
up and down a page, and left and right (which were not
needed for this experiment). The buttons auto-repeated
if held down.
• Scrolling using a "slider" on the WorkPad (see Figure
3b). Putting a finger on the slider and moving up or
down moved the text the corresponding amount. If you
reach the edge of the slider, then you need to lift your
finger and re-stroke. Tapping on the slider has no effect
since only relative movements are used.
• Scrolling using a "rate scroller," which acted like a
rate-controlled joystick with three speeds (see Figure 3c).
Putting a finger on the WorkPad and moving up or
down started the text moving in that direction, and
moving the finger further from the start point scrolled
faster. (A sketch of this mapping appears after this list.)
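As an illustration of the rate scroller's behavior, the following sketch maps finger displacement to one of three scroll rates; the band boundaries and rates are our own illustrative values, not the ones used in the system:

    def rate_scroll(start_y, finger_y, rates=(1, 4, 16)):
        # Displacement from the touch-down point picks a speed band;
        # the sign of the displacement picks the scroll direction.
        d = finger_y - start_y
        magnitude = abs(d)
        if magnitude < 10:                 # dead zone, in pixels
            return 0
        elif magnitude < 50:
            rate = rates[0]
        elif magnitude < 90:
            rate = rates[1]
        else:
            rate = rates[2]
        return rate if d > 0 else -rate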
The order of the conditions was varied across subjects.
Revised Scrolling Task
We received a number of complaints and suggestions about
the scrollers on the WorkPad in the first session, so we re-designed
some of them and repeated the scrolling task in a
second study with new subjects. In this study, we only used
four buttons for the button scroller (since the left and right
buttons were not needed; see Figure 4a). We also tried to
improve the rate scroller, by adjusting the scroll speeds
and the areas where each speed was in affect. Finally, we
added a new (sixth) condition:
• Scrolling using an "absolute scroller," where the length
of the scroller represented the entire document, so putting
a finger at the top jumped to the top of the
document, and the bottom represented the bottom (see
Figure 4b). The user could also drag up and down to
scroll continuously. Therefore, it was as if the scroll
bar's indicator was attached to the finger. The motivation
for this scroller was that we noticed that most
people in the mouse condition of the first session
dragged the indicator of the scroll bar up and down, and
we wanted to provide an equivalent WorkPad scroller.
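The absolute scroller is a direct linear mapping from finger position to document position, as if the scroll bar's indicator were attached to the finger; a minimal sketch with illustrative names:

    def absolute_scroll_target(finger_y, scroller_height, document_lines):
        # 0 at the top of the scroller jumps to the top of the document,
        # the bottom to the bottom; dragging scrolls continuously.
        fraction = min(max(finger_y / scroller_height, 0.0), 1.0)
        return int(fraction * (document_lines - 1))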
Subjects
There were 12 subjects in the first study, which took about
an hour; they were paid $15 for participating. 12 different
subjects did the second study, which took about half an
hour; they were paid $10. All subjects were Carnegie
Mellon University students, faculty, or staff. 25% (6 out of
24) were female, and the age ranged from 19 to 46 with a
median of 26. All were moderately to highly proficient
with computers, and half had used PDAs. The data from
some extra subjects were eliminated due to technical
difficulties. The measures from two subjects who were
left-handed are not included in the data, but informally,
their numbers did not look different.
General
Pearson product-moment correlation coefficient between
typing speed and tap speed in the button size task (namely
the mean tap speed across all 8 layouts) was .60, which
means the faster typists were somewhat faster at tapping.
The correlation coefficient between typing speed and the
speed for moving one hand to the keyboard in the homing
task was .79 which means, as expected, subjects who were
better typists could put their hands in the home position
more quickly. There was little correlation of typing speed
to the other measures in the homing task. The correlation
coefficient between typing speed and scrolling speed (in
the revised scrolling task) across all 6 conditions and both
trials was 0.32, which means there was little correlation
for the scrolling task.
Age and gender did not affect the measures.
Button Size Task
Figure
5 shows the times to tap on the button measured
from the time the stimulus appeared on the PC monitor.
These numbers only include correct taps. There were two
orders for the trials, so each condition was seen by some
subjects early in the experiment, and by other subjects
later. The chart presents the data for the early and late
cases along with the average of both.
Figure 5. Times to tap each button depend on the size. The
times are shown for the subjects who saw each condition later.
Figure 6. Plot of all times for the 2x2 layout shows (on the
left) learning happening for those subjects who saw this
condition first, but not (on the right) for those who saw it last.
Figure 6 shows the times to tap on a button in the 2x2 trial
for each of the buttons for each of the subjects. The left
graph is of those subjects who saw the 2x2 condition first,
and roughly matches the power law of practice. However,
for those subjects who did the 2x2 condition last, there was
no apparent learning during that trial, and the times are
flat. Therefore, we feel it is more valid to use the times
from only the subjects who saw the condition later. The
average time for just the second set is 593msec.
As shown in Figure 5, and predicted by Fitts's law [5, p.
55], the time to tap on a button is inversely related to
the size of the button, ranging from 593 msec in the 2x2
condition to 867 msec in the 4x4 (for those subjects
who saw each condition later).
The times to tap differ significantly among different numbers
of buttons. There is a significant
interaction between button number and order of conditions
(2x2->4x4 or 4x4->2x2), but the
learning effect is most prominent among layouts with a
small number of buttons. The Tukey Test at the .05 significance
level indicates that there is no significant difference between
the 4-button condition and the 6-button condition,
between the 6-button and 8-button, or between the 8-button
and 12-button. However, the 12-button condition is faster
than the 16-button condition by a statistically significant
margin.
The times for different layouts of the same number of buttons
are not statistically different, however: the Tukey
Test at the .05 significance level indicates that times for the
2x3 layout are not statistically different from the 3x2, nor
2x4 compared to 4x2, nor 3x4 compared to 4x3.
Figure 7 shows the error rates for the various configurations,
which vary from 1.04% to 4.17% for the subjects
who saw each condition later. The error rates do not differ
significantly among different layouts or
among different numbers of buttons (p=.07). For
the 4x4 layout, 45% of the errors were in the wrong row,
48% were in the wrong column, and 7% were wrong in
both (on the diagonal from the correct button). There was
no consistent pattern of where the problematic buttons
were located (see Figure 8).
Figure 7. Error rates for each condition of the button task.
Numbers shown are for the subjects who saw each condition later.

Figure 8. Percent of the taps in each button that were in error
in the 4x4 layout.
Homing Speed Task
Figure 9 shows the times for moving each hand in the
various conditions of the homing speed task. When moving
only one hand at a time (top 4 rows), the subjects took
728 msec to move to the mouse and 701 msec to move
back to the keyboard from the mouse. The times for the
PDA were 744 msec to move to the PDA and 639 msec back.
When required to move both hands, the subjects took only
slightly longer, requiring about 15% more time to acquire
both the PDA and the mouse (838msec), and about 12%
more time to acquire the keyboard (791 msec).
1H Keyboard->Mouse 728
1H Keyboard->PDA 744
1H Mouse->Keyboard 701
1H PDA->Keyboard 639
Keyboard -> Mouse&PDA 838 15.1%
Mouse&PDA -> Keyboard 791 12.8%
Figure 9. Times in milliseconds to move hands. "1H" means
when only one hand is moving. The third column shows the percent
slowdown of moving both hands compared to the
corresponding one-handed mouse time.
Scrolling Task
As in the study we reproduced [15], the time for the first
trial with each input device was for practice, so Figure 10
shows the times for the second and third trials.
Figure 10. Times in seconds to scroll through 10 pages in trials
2 and 3 of the first version of the web page scrolling task using
different input devices.
.4
.3
.3
ous e Scr ol W hee B u tton S cr ol er
l de Scr ol er R ate Scr ol er
Figure 11. Ratings of the various input methods by the subjects
in the first version of the scrolling experiment. We used the same
scale as [15].
A repeated-measures variance analysis shows that subjects'
completion time was significantly affected by input method.
Trial 3 was significantly faster
than Trial 2, showing a learning
effect. However, this improvement did not alter the relative
performance pattern of the input methods.
Taking the Mouse condition as the reference and averaging
over both trials, the scroll wheel, the Slide Scroller,
and the Rate Scroller conditions were 28, 11, and 48 percent
slower. The Tukey Test at the .05 significance level
indicates that the differences between mouse and scroll
wheel conditions, between the mouse and button scroller,
and between the mouse and slide scroller conditions were
not significant, while the difference between mouse and
rate scroller conditions was significant.
Figure 11 shows the subjects' ratings of the various scrollers
using a rating scale from Zhai et al. [15]. Contrary to
the results of that previous study, the Tukey Test at the .05
significance level indicates that the difference between ratings
of mouse and scroll wheel was not significant.
Subjects gave the mouse a significantly higher rating than
the slide scroller, while the difference between ratings of
mouse and button scroller and the difference between ratings
of mouse and rate scroller were not
significant. Subjects gave the scroll wheel a significantly
higher rating than slide scroller and rate scroller, while the
difference between ratings of scroll wheel and button
scroller was not significant. The differences of ratings
among the three Pebbles scrollers were not significant.
Revised Scrolling Task
We were not happy with the performance of the scrollers
on the PDA, and the subjects provided useful feedback on
ways to improve them. Therefore, we performed iterative
design on the software, and tried the scrolling task again
with 12 new subjects. Figure 12 shows that we were able to
improve the performance of the new versions of the button
scroller, but the rate scroller may be worse. The new absolute
scroller was quite fast. The ratings of the new
versions are shown in Figure 13 and parallel the performance
Figure 12. Times in seconds to scroll through 10 pages in trials
2 and 3 of the second version of the web page scrolling task.
Figure 13. Ratings of the various input methods by the subjects
in the second version of the scrolling experiment.
A repeated-measures variance analysis showed that subjects'
completion time was significantly affected by input method.
Taking the Mouse condition as the
reference and averaging over both trials, the button scroller
was 8 percent faster, but the Tukey Test at the .05
significance level indicates that this difference is not
significant. The scroll wheel, the absolute scroller, the slide
scroller, and the rate scroller conditions were 31, 7, 12,
and 64 percent slower than the standard mouse condition.
The Tukey Test at the .05 significance level indicates that the
difference between mouse and scroll wheel conditions and
the difference between mouse and rate scroller conditions
were significant, while the differences between mouse and
absolute scroller conditions and between mouse and slide
scroller conditions were not significant.
DISCUSSION
Button Size
The subjects were able to hit buttons quite accurately with
their left hand, especially for small numbers of buttons.
The predicted decrease in performance with decreased
button size was observed. There seems to be a threshold of
about 12 buttons before there is any effect due to the size.
We believe that we achieved expert performance (the
learning curve flattened out) by the end of the experiment,
so we tried using the times in models of expert human
performance. One candidate is Fitts's law, but we do not
know exactly where the subjects' fingers were when they
started to move to tap. Assuming a movement of about 2
inches and a target size of 1 inch (in the 2x2 case), Fitts's
law as formulated in [5, p. 55] predicts a movement time
of about 150msec, compared to our measurement of
593msec. In our task, however, there is also perception
and thinking time. For the smaller buttons (1/2 inch in the
4x4 case), Fitts's law predicts an increase in time of
about 100msec, but we saw an increase of about 275msec.
We observed that subjects looked back and forth from the
monitor to the PDA, at an increasing rate depending on
the number of buttons to choose from. Therefore, we believe
the performance cannot be modeled simply as a
Fitts's law task, but we were unable to find an appropriate
alternative model.
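For reference, the quoted predictions can be reproduced from the formulation in [5, p. 55], MT = I_M log2(D/S + 0.5) with I_M = 100 msec/bit; the 2-inch movement distance is an assumption, so these figures are only ballpark:

    import math

    def fitts_mt(distance, size, im_ms=100):
        # Movement time per the Card, Moran & Newell formulation.
        return im_ms * math.log2(distance / size + 0.5)

    print(fitts_mt(2.0, 1.0))                       # ~132 msec, 2x2 buttons
    print(fitts_mt(2.0, 0.5) - fitts_mt(2.0, 1.0))  # ~85 msec predicted increase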
Our results showing that users can tap up to 12 buttons accurately
and quickly with the left hand is relevant since
there are a number of applications where having several
buttons on the PDA would be useful. Examples include
scrolling with buttons (Figure 3a and Figure 4a), and panels
created with the Shortcutter tool for controlling a
compiler, playing music on the PC, reading mail, etc. [10].
Homing Times
Our one-handed homing time to move from the mouse to
the keyboard (701 msec - see Figure 9) is longer than the
time to move from the PDA to the keyboard (639 msec).
This may be because the physical distance to the mouse
from the home position on the keyboard is longer (14
inches compared to 7 inches) due to the number-pad and
arrow keys sections of the PC keyboard. In the other direc-
tion, the increased time to acquire the PDA may be due to
the unfamiliarity of homing to this kind of device.
In the classic study of text selection devices [5, p. 237], the
homing time to move from the space bar to the mouse was
measured as 0.36 seconds. This was measured from
videotapes of subjects moving. An average homing time of
0.4 seconds was incorporated into the Keystroke Level
Model [4]. However, we measured one-handed homing
times of around 0.7 seconds, which is substantially longer.
Our time was measured from the time of the mouse click
to the time that the first keystroke was recognized. Our
typing test shows that the average time per keystroke was
seconds, so this might be subtracted from our measured
time to get the predicted 0.4 seconds.
An important observation is that, as predicted, subjects
moved both hands simultaneously, and this did not penalize
the movement time much. The sum of the one-handed
times to move from mouse and PDA to the keyboard is
1340msec (701+639). This is much larger than the time to
move from both mouse and PDA to the keyboard in the
two handed case which is 791 msec (1340msec is 69%
larger). A similar relationship holds for the movement
from the keyboard to the PDA and mouse
(728+745=1473 > 838; 76% larger).
It takes only about 15% longer to acquire both the
mouse and the PDA than just to acquire the mouse, and it
takes only about 13% longer to get back to the keyboard
from both devices than from just the mouse.
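The percentage comparisons above are simple arithmetic on the measured homing times. The check below assumes the 728 msec figure is the keyboard-to-mouse time and 745 msec the keyboard-to-PDA time; the variable names are ours.

# Measured homing times in msec (from the text above).
mouse_to_kb, pda_to_kb, both_to_kb = 701, 639, 791
kb_to_mouse, kb_to_pda, kb_to_both = 728, 745, 838

# Sequential one-handed sums vs. measured simultaneous two-handed times.
print((mouse_to_kb + pda_to_kb) / both_to_kb - 1)   # ~0.69: 69% larger
print((kb_to_mouse + kb_to_pda) / kb_to_both - 1)   # ~0.76: 76% larger

# Cost of acquiring/leaving both devices vs. the mouse alone.
print(kb_to_both / kb_to_mouse - 1)                 # ~0.15: about 15% longer
print(both_to_kb / mouse_to_kb - 1)                 # ~0.13: about 13% longer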
We were not able to find any prior studies of the time to
acquire two devices at the same time. Most studies of two-handed
use of input devices (including our button-size and
scrolling tasks) allow the subjects to stay homed on the de-
vices. We found that moving both hands slowed down each
hand a little, but there was substantial parallel movement.
Realistic tasks are likely to include a mix of keyboard and
other input device use, so homing issues may be important.
Scrolling
Our measured time for scrolling the web pages with the
mouse (about 60 seconds) is a little faster than the time
reported in [15], and in the revised web task, the time for
scrolling with the button scroller is 45.9 sec (average of
trial 2 and trial 3) which is faster than the time for scrolling
with the in-keyboard isometric joystick (around 50
sec). This shows that using the PDA can match or beat the
speed of other non-dominant hand devices.
An interesting comparison is between their joystick, our
Rate Scroller (Figure 3c) and the scroll wheel used in its
most popular manner as a rate-controlled scroller. All provide
the same rate-controlled style of scrolling, but they
have significantly different performances and ratings by
users. Our attempt to improve the rate scroller obviously
did not help, showing that further work is needed to make
this scrolling method effective. We observed that the fast
speed was much too fast, but the medium speed was too
slow. The popularity of the scroll wheel and the success of
the pointing stick give us reason to keep trying. Further-
more, IBM did significant experimentation and
adjustments before the pointing stick had acceptable performance
[13]. Therefore, an important conclusion from
the scrolling experiment is that the specific design and
pragmatics of the input methods has a very important influence
on the performance.
Another interesting result is that our subjects quite liked
the scroll wheel (average rating of 1.7, i.e., very good),
whereas in the earlier study it was rated much worse (-1,
i.e., poor) [15]. This may be due to the increased experience
people have with a scroll wheel (many of our subjects have
a scroll wheel on their own mouse), and because most of
our subjects used it in its rate-controlled joystick mode,
whereas most of the earlier study's subjects used the rolling
mode.
An interesting observation about this Web scrolling task in
general is that it primarily tests scrolling while searching
for information, so the scrolling must go slow enough so
the subjects can see the content go by. This is why the
methods that provided the best control over the speed are
preferred. The low rating of the rate scroller on the PDA is
because the fastest speed was much too fast to see the text
go by, and the medium and slow speeds were rated as too
slow. However, other scrolling tasks, such as those tested
by [3], require the user to go to a known place in the
document, and then a method that can move long distances
very quickly may be desirable.
CONCLUSIONS
Many studies have shown the effectiveness of two-handed
input to computers in certain tasks. One hindrance to two-handed
applications has been that there may be only a few
tasks in which using both hands is beneficial, and the
benefits are relatively minor. Another problem is that although
it is interesting to study custom devices for use by
the non-dominant hand, in order for there to be wide-scale
use, it is better to provide mechanisms that users can easily
get and configure. Since increasing numbers of people
have PDAs that are easy to connect to PCs, it makes sense
to see if PDAs can be used effectively in the non-dominant
hand. The research presented here shows that PDAs can be
used as buttons and scrollers, and that the time to home to
two devices is only slightly longer than for one. Our study
of one application shows that at least for the scrolling task,
a PDA can match or beat other 1-handed and 2-handed
techniques. Because there is no incremental cost for the
PDA since users already own it, and since the PDA is connected
to the PC anyway, even small efficiencies may be
sufficient to motivate its use as a device for the non-dominant
hand. Our studies and many others have emphasized
the importance of the pragmatics and the exact
behavior of controls. Because the PDA can be programmed
with a variety of controls with various properties, further
research is required to determine the most effective ways
that a PDA can be used to control the PC in both the
dominant and non-dominant hand.
ACKNOWLEDGMENTS
For help with this paper, we would like to thank Rob Miller,
Bernita Myers and Shumin Zhai.
The research reported here is supported by grants from DARPA, Microsoft,
IBM and 3Com. This research was performed in part in connection with
Contract number DAAD17-99-C-0061 with the U.S. Army Research Labo-
ratory. The views and conclusions contained in this document are those of the
authors and should not be interpreted as presenting the official policies or
position, either expressed or implied, of the U.S. Army Research Laboratory
or the U.S. Government unless so designated by other authorized documents.
Citation of manufacturer's or trade names does not constitute an official endorsement
or approval of the use thereof.
--R
"Exploring Bimanual Camera Control and Object Manipulation in 3D Graphics Inter- faces,"
"Lexical and Pragmatic Considerations of Input Structures."
Myers B. "A Study in Two-Handed Input,"
"The Keystroke-Level Model for User Performance Time with Interactive Sys- tems."
The Psychology of Human-Computer Interaction
"Asymmetric Divison of Labor in Human Skilled Bimanual Action: The Kinematic Chain as a Model."
"Two-Handed Virtual Manipulation."
"Integrality and Separability of Input De- vices."
"The Design of a GUI Paradigm based on Tablets, Two-hands, and Transparency,"
"Individual Use of Hand-Held and Desktop Computers Simultaneously,"
"Collaboration Using Multiple PDAs Connected to a PC,"
"A Multiple Device Approach for Supporting Whiteboard-based Interactions,"
"In-Keyboard Analog Pointing Device: A Case for the Pointing Stick,"
"Speed Typing Test 1.0. Available from http://hometown.aol.com/tokfiles/typetest.html. Test- <Email>edOK@aol.com</Email>,"
"Improving Browsing Performance: A Study of Four Input Devices for Scrolling and Pointing,"
--TR
A study in two-handed input
Integrality and separability of input devices
The design of a GUI paradigm based on tablets, two-hands, and transparency
The PadMouse
A multiple device approach for supporting whiteboard-based interactions
Collaboration using multiple PDAs connected to a PC
Two-handed virtual manipulation
Exploring bimanual camera control and object manipulation in 3D graphics interfaces
The keystroke-level model for user performance time with interactive systems
The Psychology of Human-Computer Interaction
Improving Browsing Performance
--CTR
Brad A. Myers, The pebbles project: using PCs and hand-held computers together, CHI '00 extended abstracts on Human factors in computing systems, April 01-06, 2000, The Hague, The Netherlands
Masanori Sugimoto , Keiichi Hiroki, HybridTouch: an intuitive manipulation technique for PDAs using their front and rear surfaces, Proceedings of the 8th conference on Human-computer interaction with mobile devices and services, September 12-15, 2006, Helsinki, Finland
Brad A. Myers, Using handhelds and PCs together, Communications of the ACM, v.44 n.11, p.34-41, Nov. 2001
Ka-Ping Yee, Two-handed interaction on a tablet display, CHI '04 extended abstracts on Human factors in computing systems, April 24-29, 2004, Vienna, Austria
Brad A. Myers , Robert C. Miller , Benjamin Bostwick , Carl Evankovich, Extending the windows desktop interface with connected handheld computers, Proceedings of the 4th conference on USENIX Windows Systems Symposium, p.8-8, August 03-04, 2000, Seattle, Washington
Ivan E. Gonzalez , Jacob O. Wobbrock , Duen Horng Chau , Andrew Faulring , Brad A. Myers, Eyes on the road, hands on the wheel: thumb-based interaction techniques for input on steering wheels, Proceedings of Graphics Interface 2007, May 28-30, 2007, Montreal, Canada
Gilles Bailly , Laurence Nigay , David Auber, 2M: un espace de conception pour l'interaction bi-manuelle, Proceedings of the 2nd French-speaking conference on Mobility and ubiquity computing, May 31-June 03, 2005, Grenoble, France
James R. Miller , Serhan Yengulalp , Patrick L. Sterner, A framework for collaborative control of applications, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico | ubiquitous computing;two-handed input;hand-held computers;Windows CE;personal digital assistant PDAs;palm pilot;pebbles;smart environments |
332540 | Reconstructing distances in physical maps of chromosomes with nonoverlapping probes. | We present a new method for reconstructing the distances between probes in physical maps of chromosomes constructed by hybridizing pairs of clones under the so-called sampling-without-replacement protocol. In this protocol, which is simple, inexpensive, and has been used to successfully map several organisms, equal-length clones are hybridized against a clone-subset called the probes. The probes are chosen by a sequential process that is designed to generate a pairwise-nonoverlapping subset of the clones. We derive a likelihood function on probe spacings and orders for this protocol under a natural model of hybridization error, and describe how to reconstruct the most likely spacing for a given order under this objective using continuous optimization. The approach is tested on simulated data and real data from chromosome VI of Aspergillus nidulans. On simulated data we recover the true order and close to the true spacing; on the real data, for which the true order and spacing is unknown, we recover a probe order differing significantly from the published one. To our knowledge this is the first practical approach for computing a globally-optimal maximum-likelihood reconstruction of interprobe distances from clone-probe hybridization data. | Introduction
Physical mapping in molecular biology is the task of reconstructing
the order and location of features of biological
interest along a chromosome. The features may be
Corresponding author. Department of Computer Science, University
of Georgia, Athens, GA 30602-7404. Email: kece@cs.uga.edu.
Research supported by National Science Foundation CAREER Award
DBI-9722339.
† Department of Statistics, University of Georgia, Athens, GA
30602. Email: sanjay@stat.uga.edu
‡ Department of Genetics, University of Georgia, Athens, GA
30602. Email: arnold@genetics.uga.edu
sites at which restriction enzymes cut, so-called sequence-
tagged sites that are identified by short, uniquely-occurring
sequences, or positions of clones that contain fragments of
the chromosome. There is a diverse array of approaches for
constructing maps of such features depending on the type
of data that is collected, including mapping by nonunique
probes [2, 18], mapping by unique probes [1, 11, 12], mapping
by unique endprobes [7], mapping by nonoverlapping
probes [8], mapping from restriction-fragment length
data [10, 13], radiation-hybrid mapping [24, 5], and optical
mapping [21, 14, 16]; there are many probabilistic analyses
of various approaches [15, 4, 28, 27, 26]; and a wide
variety of computational techniques have been employed or
suggested, including greedy algorithms [18], simulated annealing
[20, 25, 2, 1], linear programming [7, 12, 8], and
semidefinite programming [6].
In this paper we develop a maximum-likelihood approach
for a type of physical mapping known as the sampling-
without-replacement protocol. The protocol is inexpen-
sive, simple to carry out in the lab, and uses widely-available
technology. Organisms that have been mapped
with this technique include Schizosaccharomyces pombe [19],
Aspergillus nidulans [22], and Pneumocystis carinii [3];
mapping projects in progress using the technique include
Neurospora crassa and Aspergillus flavus.
In the protocol, a library L consisting of many overlapping
clones that each sample a fragment of the chromosome
is developed. Clones in L are size-selected to have a target
length, and are arrayed on a plate. A subset of the clones
called the probe set P is then obtained by the following sequential
process. Initially, S = L and P is empty. At the ith
iteration of the process, choose a clone P i from S at random,
remove P i from S, and add it to P. Hybridize P i against
all the clones in the library by extracting complementary
DNA from both of its ends and washing the DNA over the
arrayed plate, recording all clones in the library to which the
DNA sticks. Remove from S all clones in the library that
have a positive hybridization result with P i . Then repeat
this process for the next iteration, stopping once S becomes
empty.
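As an illustration, the sequential probe-selection process just described can be simulated in a few lines; the uniform clone placement and the interval-overlap test are modeling assumptions consistent with the description above, not the published protocol software.

import random

def choose_probes(starts, clone_len):
    # Sequentially pick a maximal pairwise-nonoverlapping subset of clones.
    # `starts` maps clone id -> left endpoint; all clones have length clone_len.
    S = set(starts)
    P = []
    while S:
        p = random.choice(sorted(S))
        P.append(p)
        # remove every clone that overlaps the chosen probe (would hybridize),
        # including the probe itself
        S = {c for c in S if abs(starts[c] - starts[p]) >= clone_len}
    return P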
We call the final set P the probe set, and the set C = L - P
the clone set. The results of the experiments are summarized
in a probe-clone hybridization matrix H that records the
outcomes of all hybridizations between the probes in P and
the clones in C.
Notice that if a clone C_i overlaps with a probe
P_j in the chromosome, it must overlap with one of the ends
of P_j, as all probes and clones are of the same length. Such
an overlap corresponds to a portion of DNA that is in common
between the clone and the end of the probe. In the absence
of error, the complementary DNA from the end of P j
will stick to C i , and the hybridization test of P j versus C i
will be a positive result; thus clone C i will be removed from
set S at the jth iteration. This implies that in the absence
of error the probe set P is a maximal nonoverlapping subset
of the library.
Suppose that in hybridization matrix H enough of the
clone-probe overlap structure is represented that we can recover
the order of the probes P across the chromosome.
Then for every consecutive pair of probes P and Q in this or-
der, we can examine H for the presence of a linking clone C
that overlaps with both P and Q. The probe set P together
with a linking clone for every consecutive pair forms a minimal
set of clones that cover the chromosome. A map giving
the order of the probes across the chromosome is then very
useful, since by individually sequencing just the probes and
linking clones and overlapping the sequences in the order
given by the map, we can reconstruct the DNA sequence of
the chromosome.
In reality, hybridization tests do not perfectly record the
overlap structure of probes and clones. Hybridization results
contain random false positives and false negatives. A
probe can also hybridize to a nonoverlapping clone due to
repeated DNA in the chromosome. In general, clones can
be chimeric, which means they sample two or more fragments
of the chromosome, and can contain deletions, which
happens when portions of the DNA get spliced out during
cloning. In the mapping projects using this protocol at the
University of Georgia, however, clones are produced by cos-
mids, which are small enough that chimerism and deletions
are not a significant problem. In our treatment we model
false positives and false negatives, but not chimerism, dele-
tions, or repeats. Hence false hybridizations due to repeats
are treated as a series of isolated false positives.
Related work Prior work on mapping by the sampling-
without-replacement protocol, by Cuticchia, Arnold and
Timberlake [9], Wang, Prade, Griffith, Timberlake and
Arnold [25], and Mott, Grigoriev, Maier, Hoheisel and
Lehrach [20], has largely used local-search heuristics such as
simulated annealing to try to find a probe order that minimizes
the Hamming-distance traveling-salesman objective.
While minimizing this objective is not known to optimize
any natural measure of the goodness of a map, Xiong, Chen,
Prade, Wang, Griffith, Timberlake and Arnold [27] have
shown that under certain assumptions on the distribution
of clones, the Hamming-distance objective is statistically
consistent; this means that as the number of clones goes
to infinity, an exact algorithm for the Hamming-distance
traveling salesman problem would recover the correct probe
order with probability one.
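For reference, the Hamming-distance traveling-salesman objective that these heuristics minimize can be stated compactly; treating probes as columns of the 0-1 hybridization matrix is an assumption for illustration.

import numpy as np

def hamming_tsp_cost(H, order):
    # Sum of Hamming distances between consecutive probe columns of the
    # 0-1 hybridization matrix H under the probe order `order`.
    cols = H[:, order]                       # reorder probe columns
    return int(np.sum(cols[:, 1:] != cols[:, :-1]))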
Christof and Kececioglu [8] recently showed that the
problem of computing a maximum-likehood probe order in
the sampling-without-replacement protocol in the presence
of false-positive and -negative hybridization error can be reduced
to the problem of finding the minimum number of
ones to change to zeroes in hybridization matrix H so that
the resulting matrix H 0 has at most 2 ones per row and
the consecutive-ones property on rows. They then showed
how to formulate this problem as an integer linear program,
and developed a branch-and-cut algorithm for computing
an optimal maximum-likelihood probe order. Using this ap-
proach, they were able to compute optimal probe orders for
realistic-sized instances on simulated data, and probe orders
with significantly fewer false positives on real data than
the best-possible map obtainable by a Hamming-distance
traveling-salesman approach. In this paper we complement
the work in [8] by developing a practical method for computing
a globally-optimal maximum-likelihood reconstruction
of the interprobe distances, given a probe order.
Plan of the paper In the next section we give a maximum
likelihood formulation of the problem of mapping by
the sampling-without-replacement protocol in the presence
of false positive and false negative error, which we call Mapping
by Nonoverlapping Probes. The problem is unique in
that the goal is to reconstruct the most likely order and spacing
of probes along the map from the hybridization data.
Section 3 then derives the likelihood function on probe orders
and spacings for this formulation, which has a remarkably
simple closed form. Section 4 explains how we tackle
the maximization of this function for a fixed probe order
using continuous optimization. Section 5 presents results
of some experiments with a preliminary implementation of
this approach. We then conclude with several directions for
further research.
2 The problem
In our maximum likelihood formulation we do not model
the sequential process of choosing the probes, and hence
we operate under the assumption that the probes form a
nonoverlapping set. We write {P_1, ..., P_n} for the set of
n probes and {C_1, ..., C_m} for the set of m clones, and we
formulate the problem as follows.
The task is to recover the probe order π
and the probe spacing x as illustrated in
Figure 1, given the m × n clone-probe hybridization matrix
H containing false positive and false negative errors.
Permutation π gives the names of the probes in left-to-right
order across the chromosome. Vector x gives the distance
between consecutive probes, where component x_j is the distance
between the left end of P_{π_j} and the right end of P_{π_{j-1}}.
Matrix H = (H_{ij}) is a 0-1 matrix, with
H_{ij} = 1 if probe P_j hybridizes to clone C_i, and
H_{ij} = 0 otherwise.
We assume that all clones are the same length, that the
probes are nonoverlapping, and that we know
• L, the length of the chromosome,
• ℓ, the length of a clone,
• ρ, the probability that an entry of H has been
corrupted into a false positive, and
• η, the probability that an entry of H is a false negative.
As stated, this is not a well-posed problem. In the
presence of false positives and negatives, any permutation
π of {1, ..., n} and any positive vector x for which
the n probes, so spaced, fit within the chromosome
are an explanation of the data. To obtain
a well-defined problem, we invoke the principle of maximum
likelihood, which says that the best reconstructed map
is the π and x that are most likely to have given rise to H.
If we write p(π, x | H) for the probability that π and x are
the true order and spacing given the observed matrix H,
a maximum likelihood reconstruction is a pair (π*, x*) that
maximizes p(π, x | H). We take the following as our definition
of the problem.
Definition (Mapping by Nonoverlapping Probes)
The Mapping by Nonoverlapping Probes Problem is the following.
The input is the clone-probe hybridization matrix
H, the chromosome length L, the clone length ℓ, the
false positive probability ρ, and the false negative probability
η. The output is a probe order and probe spacing
pair (π, x) that maximize p(π, x | H) under the assumption
that the probes are a collection of nonoverlapping clones, all
clones are of equal length, that the left ends of clones are
uniformly distributed across the chromosome, and that the
entries of H have been independently corrupted with false
positive probability ρ and false negative probability η. □
We can derive the function p(π, x | H) using Bayes' theorem:

p(π, x | H) = p(H | π, x) p(π, x) / p(H).

In this equation, p(H | π, x) is the probability of observing
H given that π and x are the true order and spacing,
p(π, x) is the probability that π and x occur in nature,
and p(H) is the probability of observing H.¹ Since
p(H) = Σ_π̃ ∫ p(H | π̃, x̃) p(π̃, x̃) dx̃, the denominator is a
constant independent of π and x and can be ignored. Since
the names given to probes and the spaces between probes
are independent, p(π, x) = p(π) p(x). Since names are assigned
to probes completely randomly, p(π)
is independent of π and can also be ignored. Thus the only
relevant quantities are p(H | π, x) and p(x).
If the probability density function p(x) on probe spacings
is uniform, this factor can be ignored as well. For the
model considered below, we do not yet know the density
function p(x), but it does not appear to be uniform. We
concentrate instead on deriving the function p(H | π, x),
and take maximizing it as our objective. This will differ
from truly maximizing p(π, x | H) according to the bias due
to p(x).
We next derive the function p(H | π, x) under the simplest
process by which H can be generated from π and x
with false positives and negatives. This process has three
stages (a simulation sketch follows the list):
(1) each clone is thrown down uniformly and independently
across the chromosome,
(2) for the row of the hybridization matrix corresponding
to a given clone, the probes that the clone
overlaps get a one in their column, and zeros are
placed everywhere else, and
(3) the ones and zeros are corrupted randomly and
independently with probability η and ρ, respectively.
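The sketch below generates a hybridization matrix according to these three stages. Probe positions are derived from (π, x) using the definitions above; the equal-length interval-overlap test and the 1-indexed permutation convention are added assumptions for illustration.

import numpy as np

def generate_H(pi, x, m, L, ell, rho, eta, seed=None):
    # Stage 1: uniform clone placement; stage 2: exact overlaps;
    # stage 3: independent corruption with false-negative rate eta
    # and false-positive rate rho.  pi is a 1-indexed permutation
    # and x gives the n interprobe spacings defined in Section 2.
    rng = np.random.default_rng(seed)
    n = len(pi)
    # left end of the j-th probe in left-to-right order
    probe_left = np.cumsum(x[:n]) + ell * np.arange(n)
    clone_left = rng.uniform(0, L - ell, size=m)
    H = np.zeros((m, n), dtype=int)
    for j, p in enumerate(probe_left):
        H[np.abs(clone_left - p) < ell, pi[j] - 1] = 1   # overlap -> 1
    flip = rng.uniform(size=H.shape)
    H = np.where(H == 1, (flip >= eta).astype(int),      # 1 -> 0 w.p. eta
                 (flip < rho).astype(int))               # 0 -> 1 w.p. rho
    return H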
3 The objective function
To derive p(H | π, x) under this model, notice that each row
of H is independent of the other rows, since each clone is
thrown down independently and each entry is independently
corrupted. Writing H_i for the ith row of H then, it suffices
to work out p(H_i | π, x), since

p(H | π, x) = Π_{1≤i≤m} p(H_i | π, x).

(Footnote 1: Strictly speaking, π and x are values taken on by underlying random
variables Π and X; when we write p(π, x) this is shorthand
for p(Π = π, X = x). Furthermore, since π is a discrete variable while
x is a continuous variable, p(π, x) is the joint probability
density function of a discrete and a continuous random variable
evaluated at π and x.)
To derive p(H_i | π, x), notice that in the absence of error
there are only three possible types of overlaps that can occur
with a given clone C_i as illustrated in Figure 2:
(1) Clone C_i overlaps with no probe. If the left end
of clone C_i falls between the left ends of probes
P_{π_j} and P_{π_{j+1}} but C_i overlaps with neither P_{π_j}
nor P_{π_{j+1}}, we write C_i ∈ N_j^π. (If C_i falls to the
left of P_{π_1} but does not overlap with it, we write
C_i ∈ N_0^π, and if C_i falls to the right of P_{π_n} but
does not overlap with it, we write C_i ∈ N_n^π.)
(2) Clone C_i overlaps with exactly one probe. If it
overlaps with only probe P_{π_j}, we write C_i ∈ O_j^π.
(3) Clone C_i overlaps with exactly two probes. If
it overlaps with both probe P_{π_j} and P_{π_{j+1}}, we
write C_i ∈ T_j^π.
In Appendix A, we derive p(H_i | π, x) by summing over
the disjoint events C_i ∈ N_j^π, C_i ∈ O_j^π, and C_i ∈ T_j^π. For
here, note that the domain S of the probe order permutation
π is the set of all permutations on {1, ..., n}, and the
domain D ⊆ R^n of the spacing vector x is the set

D = { x : x_i ≥ 0 for 1 ≤ i ≤ n, Σ_{1≤i≤n} x_i ≤ L - nℓ },   (1)

where x_{n+1} denotes the slack L - nℓ - Σ_{1≤i≤n} x_i.
We summarize the derivation in the following theorem.
Theorem 1 (Objective function) For hybridization matrix
H, p(H | π, x) is a product over the clones 1 ≤ i ≤ m of sums
of terms weighted by the coefficients a_{ij}^π,
where the coefficients a_{ij}^π are given by Equations
(2) through (5) in the Appendix, and we define
f(π, x) = -ln p(H | π, x).
Then for a fixed probe order π, maximizing p(H | π, x) over x
is equivalent to minimizing f(π, x) over D,
where D is given by Equation (1). □
In other words, if we can evaluate the following objective
function on permutations,

g(π) = min_{x∈D} f(π, x)

(and recover the minimizing x for a given π), we can reduce
the continuous problem of maximizing p(H | π, x) to a discrete
search for a permutation that minimizes g(π). We
now describe how we tackle the evaluation of g(π).
(Footnote 2: Note that this does not solve the problem of finding a pair (π, x)
that maximizes p(π, x | H): the objective f(π, x) is missing a term
of -ln p(x), as we do not know the density function p(x).)
Figure 1 The problem is to reconstruct the probe order permutation π and the probe spacing vector x
from the clone-probe hybridization matrix H. The probe set {P_1, ..., P_n} is chosen to form a non-overlapping
subset of the clones. Clones are size-selected to have the same length.
Figure 2 The three possible types of clone-probe overlaps: (a) C_i overlaps no probe (C_i ∈ N_j^π);
(b) C_i overlaps exactly one probe; (c) C_i overlaps exactly two probes.
4 Evaluating the objective for a fixed permutation
In this section, for a fixed π let us write f(x) for f(π, x),
and define f_i(x) = -ln p(H_i | π, x). Then

f(x) = Σ_{1≤i≤m} f_i(x).
Below we show that f is convex in certain convex regions
of D, so that a greedy procedure such as gradient descent
will find the global minimum of f in such a region. We
describe how we choose these regions of D, and then explain
how to find the direction of greatest decrease in f in such
a constrained region for the gradient descent procedure. A
very readable summary of the facts from optimization that
we use is given by Lengauer [17].
4.1 Convexity
Recall that a set C ⊆ R^n is a convex set if for all points p
and q in C and all 0 ≤ λ ≤ 1, the point λp + (1-λ)q is in C.
A function f : C → R defined on a convex set C is a convex
function if for all points p and q in C and all 0 ≤ λ ≤ 1,
f(λp + (1-λ)q) ≤ λ f(p) + (1-λ) f(q).
Informally, a convex function is bowl-shaped.
Let us call a region C ⊆ D good if for all points x ∈ C
and all 1 ≤ i ≤ n+1, x_i ≠ ℓ, with f defined as in
Theorem 1. The relevance of good regions is that they are
the regions throughout which f(x) is differentiable.
In a good region C consider all points x = p + sv for s ≥ 0,
which is the ray traced by moving from point p ∈ C in
direction v. Along such a ray the derivative
of f is well-defined and is equal to

d/ds f(x) = Σ_{1≤i≤m} d/ds f_i(x),

where d/ds f_i(x) is a sum over the components 1 ≤ j ≤ n
of terms involving the step values u(x_j),
and where u(·) denotes a unit step function at ℓ:

u(y) = 0 if y < ℓ, and u(y) = 1 if y ≥ ℓ.

Taking a second derivative along the ray yields
d²/ds² f_i(x) ≥ 0,
so that

d²/ds² f(x) = Σ_{1≤i≤m} d²/ds² f_i(x) ≥ 0.

This implies that in every convex region C ⊆ D that is good,
function f is convex.
A key property of convex functions is that a local minimum
of a convex function f in a convex set C is a global
minimum of f on C [17]. Thus if we can divide D into a
small number of good convex regions, it suffices to apply in
each region an algorithm that is only guaranteed to find a
local minimum; the best of these local minima is the global
minimum of f over the regions.
For each choice of signs σ, τ ∈ {+1, -1}, let D_στ be the
subset of D in which every interior distance x_i (2 ≤ i ≤ n)
is at most ℓ, while x_1 and x_{n+1} lie on the side of ℓ indicated
by σ and τ; consider the four regions D_{+1+1}, D_{+1-1}, D_{-1+1},
D_{-1-1}. These regions correspond to constraining all interior
distances between probes to be at most ℓ, and then
forcing the exterior distances x_1 and x_{n+1} to be on one side
of ℓ. Each region is an intersection of halfspaces, and hence
is a convex set. The interior of each is a good region, and
for any ray originating in the interior we can make the appropriate
choice for the derivative at the boundary so that
the derivative along the ray is continuous throughout the
region. Thus we can find the global minimum in each of
these four regions by gradient descent as described below.
This does not necessarily find the global minimum of f
on D. However, notice that for our function f, if a spacing
vector x is modified by trading distance between two components
in such a way that both remain
at least ℓ, the value of f is unchanged. Suppose then that
the global optimum x* over D has x*_1 ≥ ℓ or x*_{n+1} ≥ ℓ,
and x*_i > ℓ in some interior
component. By shrinking x*_i to ℓ
while stretching the larger of x*_1 or x*_{n+1}, we can eventually
transform x* into a point in one of the four regions without
changing its value under f. Thus the best of the minima of
the four regions, call it x̃, is not a global minimum over D
only if for all global minima x* over D, x*_1 < ℓ and x*_{n+1} < ℓ
and x*_i > ℓ in some other component;
stretching x*_1 or x*_{n+1} as before shows that suboptimality
of x̃ is due only to error in x̃_1 or x̃_{n+1}. However, as there
are no linking clones by which to estimate x̃_1 and x̃_{n+1}, the
hybridization data provides no direct information by which
to reconstruct these two exterior distances, and their estimates
should be regarded with suspicion in any reconstruction.
Thus, if the biologist interprets the output x̃ with the
understanding that when x̃_i = ℓ in some component, this
distance may exceed ℓ in the true map, and that x̃_1 and x̃_{n+1}
may be inaccurate, then reporting the global optimum x̃ of
the four regions is reasonable.
4.2 Gradient descent
The gradient of f at point p is the vector

grad f(p) := ( ∂f/∂x_1 (p), ..., ∂f/∂x_n (p) ),

where the kth component of the gradient is the partial
derivative of f with respect to x_k evaluated at p; its
expression again involves the unit step function u(·) defined
before, and p_{n+1} is defined in the same way as x_{n+1}.
A basic fact in multivariable calculus is that the direction
of greatest decrease of f at p is -grad f(p).
The procedure known as gradient descent [23] starts from
a point p, computes the negative gradient direction v at p,
moves to the point p′ that minimizes f along the ray p + sv,
and repeats, stopping once a point is reached at which the
gradient vanishes. In the unconstrained problem of minimizing
f over R^n, such a point is a local minimum, and
since f is convex, when gradient descent halts it has found
a global minimum of the unconstrained problem.
For the constrained problem, however, of minimizing f
over a region C ⊆ R^n, the negative gradient direction v at a
point p on the boundary of C may be directed outside C, in
which case we cannot move along v, yet another direction v′
at p that is directed inside C may exist along which f decreases,
albeit at a slower rate. Let us call a direction v at a
point p feasible if it is possible to move along v from p and
remain in C. In general, the feasible direction v of greatest
decrease in f at a point p can be found as follows.
The boundaries of a region are given by constraints
that are hyperplanes. At point p, compute the
negative gradient direction v = -grad f(p) and determine
which of the bounding hyperplanes are tight. Let the list of
tight hyperplanes for which v points outside the halfspace
given by the hyperplane be H_1, ..., H_k. Take v^(0) = v
and successively project v^(0) onto H_1 to obtain v^(1), then
project v^(1) onto H_2 to obtain v^(2), and so on. The vector
v^(k) resulting from the final projection onto H_k is the
feasible direction of greatest decrease at p. If v^(k) = 0,
then p is a local minimum of f in C.
Given the feasible direction v of greatest decrease, we
compute the largest value t > 0 for which p + tv remains in C.
As f is convex, the one-dimensional problem of minimizing f
along p + sv for s ∈ [0, t] can be solved by a form of binary
search known as bisection [23].
This completes the description of our approach to evaluating
g(π). Over each of the four regions D_{+1+1}, D_{+1-1}, D_{-1+1}, D_{-1-1},
we compute a global minimum by constrained gradient descent
using bisection, and take the best of the four minima.
Computing the gradient at a given point takes time Θ(mn),
which dominates the time to find the best feasible direction
by successive projection, and is also the time to compute
derivatives at each step during bisection. As reaching a local
minimum can involve several gradient descent iterations,
and each iteration can involve several bisection steps, the
entire procedure is expensive. To find a good π we use the
local-search heuristic known as simulated annealing, calling
the above procedure to evaluate g(π) on each candidate
probe order.
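A compact sketch of this constrained gradient descent with bisection is given below. It is an illustration under stated assumptions, not the authors' implementation: the region is taken to be a generic intersection of halfspaces A x ≤ b, and `grad` is assumed to be a gradient oracle for the convex objective f.

import numpy as np

def bisect(dphi, t, tol=1e-9):
    # Minimize a convex 1-D function on [0, t] by bisection on the sign
    # of its derivative dphi (nondecreasing since f is convex).
    if dphi(t) <= 0:
        return t                         # f still decreasing at the boundary
    lo, hi = 0.0, t
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if dphi(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

def feasible_direction(v, A, b, p, eps=1e-12):
    # One pass of successive projection of v onto each tight hyperplane
    # a_i . x = b_i of {x : A x <= b} along which v points outward.
    for a_i, b_i in zip(A, b):
        if abs(a_i @ p - b_i) < eps and a_i @ v > 0:
            v = v - (a_i @ v) / (a_i @ a_i) * a_i
    return v

def constrained_descent(grad, A, b, p, tol=1e-6, max_iter=500):
    # Gradient descent with bisection for a convex f over {x : A x <= b},
    # started from a feasible point p.
    for _ in range(max_iter):
        v = feasible_direction(-grad(p), A, b, p)
        if np.linalg.norm(v) < tol:
            break                        # local minimum = global minimum
        # largest step keeping p + t v inside every halfspace
        steps = [(b_i - a_i @ p) / (a_i @ v)
                 for a_i, b_i in zip(A, b) if a_i @ v > 1e-12]
        t = min(steps) if steps else 1.0
        p = p + bisect(lambda s: grad(p + s * v) @ v, t) * v
    return p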
5 Preliminary results
We now present some very preliminary results with an
implementation of this approach written by the second author
In the first experiments we ran the implementation
on simulated data. For our parameters we picked values
identical to those for chromosome VI of the fungus
Aspergillus nidulans, which has been mapped using the
sampling-without-replacement protocol [22]. This involved
a clone length and a chromosome length corresponding
to a coverage of nearly 13. False positive and false negative
probabilities matching the estimated error rates for
the mapping project were used. Clones
were thrown at random across the chromosome with the uniform
distribution, a probe set of nonoverlapping clones was
chosen, and the corresponding hybridization matrix H with
false positives and false negatives was generated.
We first tested how well the approach recovered the true
spacing, which was known for the simulated data, by running
the constrained gradient descent procedure with the
true probe order π. This is summarized in Table 1 for the
gradient descent started from a completely uniform initial
spacing, and an initial spacing obtained by a linear programming
approximation (which will be described in the full pa-
per). The hope was that a more sophisticated method for
choosing an initial spacing would lead to faster convergence
to a local minimum. As Table 1 shows, this was not the
case. Starting from a uniform spacing took fewer iterations
of gradient descent, and fewer total bisection steps. It is
interesting that both approaches found a final spacing with
better likelihood than the true spacing, which had a value
of 6649.32.
As a measure of the error between the true spacing
and the computed spacings, we used the root-mean-square
error (RMS). Interestingly, the linear programming spacing
had greater initial error because the two exterior distances
x1 and xn+1 were not well-estimated from the hybridization
data, and the uniform spacing happened to give
better estimates for these exterior distances. The computation
time using either initial spacing was around 5 minutes
on a Sun UltraSPARC 1 with a 167 MHz chip. The final
RMS error of 3.7 kb is roughly 9% of the clone length.
Clearly there is a limit to the accuracy to which one
can recover the true spacing from the discrete data of a hybridization
matrix, which is essentially giving counts of linking
clones. We can show that every method of recovering
spacings must in the worst case have a root-mean-square error
of at least a certain bound ε that depends on the clone
length ℓ. For the above data, ε ≈ 6.2 kb.
In comparison, the final error in Table 1 is around 60% of
this worst-case lower bound.
Next we tested how well the simulated annealing approach
combined with this procedure for evaluating f recovered
the true probe order. We started from an initial π
obtained by a greedy heuristic for the Hamming-distance
traveling salesman objective. This initial π had 6 breakpoints
with respect to the true π, and an initial likelihood
of 6728.45. After about 12 hours on the above machine the
simulated annealing procedure halted with a final π equal
to the true order, with a final likelihood of 6470.52.
In the second experiments we ran the implementation on
real mapping data from chromosome VI of Aspergillus nidu-
lans, which took around 12 hours on the above machine.
The computed probe order had 36 breakpoints with respect
to the published order [22], which was obtained using simulated
annealing on the Hamming-distance traveling salesman
objective [25]. While our computed order clearly had
little in common with the published order, for this mapping
data the true order is not known.
6 Conclusion
We have presented a new maximum-likelihood approach for
reconstructing the distances between probes for physical
maps constructed by hybridizing equal-sized clones against
a nonoverlapping clone-subset. This protocol has been used
to successfully map several organisms, and yields a model
whose likelihood function is sufficiently simple to permit a
closed-form expression. The resulting formulation gives to
our knowledge the first practical method for physical mapping
from hybridization data that can reconstruct globally-
optimal maximum-likelihood distances along maps.
Table 1 Recovering the spacing on data simulating chromosome VI of Aspergillus nidulans.

                               LP-based initial spacing   Uniform initial spacing
Bisection steps                185                        177
Gradient descent iterations    149                        101
Initial RMS error              6.65 kb                    5.66 kb
Final RMS error                3.78 kb                    3.69 kb
Final likelihood               6610.41                    6610.53
Further research Finding a provably optimal π under
the objective g(π) = min_{x∈D} f(π, x) appears formidable
given that f is nonlinear, while attempting to find a
good π through simulated annealing started from a random
π appears slow given that g(π) is expensive to evaluate.
The following two-stage approach may be effective,
however:
(1) Use a combinatorial approach with guaranteed
performance to find an initial π̃ that optimizes a
simpler linear combinatorial objective g̃(π).
(2) Polish π̃ under the original nonlinear objective g
by local search to obtain a final π* and spacing
x*.
For example, g̃ could be the combinatorial 2-consecutive-ones
objective of Christof and Kececioglu [8], which corresponds
to the same likelihood model but without probe
spacings. In fact, if g̃ is sufficiently accurate to recover an
acceptable π̃, one might use the original objective f to simply
recover the best spacing for π̃. We suspect that the
full f is not needed to recover the true probe order in practice,
and that the real utility of our likelihood function f
will be to infer probe spacings for probe orders computed
by combinatorial methods.
The numerical techniques we used to compute
x* ∈ argmin_{x∈D} f(π, x), namely gradient descent with bisection,
are elementary, and it would be interesting to investigate
whether convergence to x* can be sped up by more sophisticated
numerical techniques.
In taking f(π, x) as our objective, which is equivalent
to maximizing p(H | π, x), not p(π, x | H), we are implicitly
assuming that the a priori probability density function
on probe spacings, p(x), is uniform. Unfortunately, even
when the distribution of the left ends of clones is uniform,
the density function on probe spacings is not. It would be
interesting to work out the a priori probe spacing density
function under a natural model of clone placement (which
appears to be involved), and investigate whether its inclusion
in the likelihood objective improves recovery of the true
spacing.
Finally, a significant source of error not considered in
our model is repeated DNA. When the chromosome contains
a repeat R that happens to occur at the end of a
probe P , the probe will have a false-positive hybridization
with every clone that does not overlap P but contains the
same repeat R. Examination of the hybridization matrix for
chromosome VI of Aspergillus nidulans shows that the false
positives do not appear to occur completely independently
across the matrix, but appear to occur more frequently in
certain columns. This suggests that repeats may be present.
How to best incorporate repeats into the maximum likelihood
objective is an interesting open problem, as it is not
clear how to appropriately model both the number of repeat
families and the number of copies in a family.
--R
"Physical mapping of chromosomes using unique probes."
"Physical mapping of chromosomes: A combinatorial problem in molecular biology."
"Constructing a physical map of the Pneumocystis genome."
"Genomic mapping by anchoring random clones: A mathematical analysis."
"On constructing radiation hybrid maps."
"A geometric approach to be- tweenness."
"A branch-and-cut approach to physical mapping of chromosomes by unique end-probes."
"Computing physical maps of chromosomes with nonoverlapping probes by branch-and-cut."
"The use of simulated annealing in chromosome reconstruction experiments based on binary scoring."
"An algorithmic approach to multiple complete digest mapping."
"Physical mapping by STS hybridization: Algorithmic strategies and the challenge of software evaluation."
"Algorithms for computing and integrating physical maps using unique probes."
"Mapping clones with a given ordering or interleaving."
"Algorithms for optical mapping."
"Genomic mapping by fingerprinting random clones: A mathematical anal- ysis."
"Estima- tion for restriction sites observed by optical mapping using reversible-jump Markov chain Monte Carlo."
Combinatorial Algorithms for Integrated Circuit Layout.
"Construction of physical maps from oligonucleotide fingerprint data."
"A 13 kb resolution cosmid map of the 14 Mb fission yeast genome by nonrandom sequence-tagged site mapping."
"Algorithms and software tools for ordering clone libraries: Application to the mapping of the genome Schizosaccharomyces pombe."
"Towards constructing physical maps by optical mapping: An effective, simple, combinatorial approach."
"In vitro reconstruction of the Aspergillus nidulans genome."
Numerical Recipes in C.
"Building human genome maps with radiation hy- brids."
"A fast random cost algorithm for physical mapping."
"Be- yond islands: Runs in clone-probe matrices."
"On the consistency of a physical mapping method to reconstruct a chromosome in vitro."
"Genome mapping by nonrandom anchoring: A discrete theoretical analy- sis."
--TR
Combinatorial algorithms for integrated circuit layout
On constructing radiation hybrid maps (extended abstract)
An algorithmic approach to multiple complete digest mapping
Algorithms for computing and integrating physical maps using unique probes
Towards constructing physical maps by optical mapping (extended abstract)
Building human genome maps with radiation hybrids
Beyond islands (extended abstract)
Algorithms for optical mapping
Estimation for restriction sites observed by optical mapping using reversible-jump Markov chain Monte Carlo
Computing physical maps of chromosomes with nonoverlapping probes by branch-and-cut
Construction of physical maps from oligonucleotide fingerprints data
Mapping clones with a given ordering or interleaving
Physical mapping of chromosomes using unique probes
A Geometric Approach to Betweenness | maximum likelihood;physical mapping of chromosomes;sampling without replacement protocol;convex optimization;computational biology |
332877 | Design and Implementation of Efficient Message Scheduling for Controller Area Network. | Abstract: The Controller Area Network (CAN) is being widely used in real-time control applications such as automobiles, aircraft, and automated factories. In this paper, we present the mixed traffic scheduler (MTS) for CAN, which provides higher schedulability than fixed-priority schemes like deadline-monotonic (DM) while incurring less overhead than dynamic earliest-deadline (ED) scheduling. We also describe how MTS can be implemented on existing CAN network adapters such as Motorola's TouCAN. In previous work [1], [2], we had shown MTS to be far superior to DM in schedulability performance. In this paper, we present implementation overhead measurements showing that processing needed to support MTS consumes only about 5 to 6 percent of CPU time. Considering its schedulability advantage, this makes MTS ideal for use in control applications. | Introduction
Distributed real-time systems are being used increasingly in control applications such as in automobiles,
aircraft, robotics, and process control. These systems consist of multiple computational nodes, sensors, and
actuators interconnected by a LAN [3]. Of the multiple LAN protocols available for such use (including
MAP [4], TTP [5], etc.), the Controller Area Network (CAN) [6] has gained wide-spread acceptance in the
industry [7].
Control networks must carry both periodic and sporadic real-time messages, as well as non-real-time
messages. All these messages must be properly scheduled on the network so that real-time messages meet
their deadlines while co-existing with non-real-time messages (we limit the scope of this paper to scheduling
messages whose characteristics like deadline and period are known a priori). Previous work regarding
scheduling such messages on CAN includes [8, 9], but they focused on fixed-priority scheduling. Shin [10]
The work reported in this paper was supported in part by the NSF under Grants MIP-9203895 and DDM-9313222, and by the
ONR under Grant N00014-94-1-0229. Any opinions, findings, and conclusions or recommendations are those of the authors and
do not necessarily reflect the views of the funding agencies.
Figure 1: Various fields in the CAN data frame (SOF, ID, DL, Data, CRC, Ack, EOF; SOF: Start of Frame, CRC: Cyclic Redundancy Code, EOF: End of Frame).
considered earliest-deadline (ED) scheduling, but did not consider its high overhead which makes ED impractical
for CAN. In this paper, we present a scheduling scheme for CAN called the mixed traffic scheduler
(MTS) which increases schedulable utilization and performs better than fixed-priority schemes while incurring
less overhead than ED. This paper goes beyond the work presented in [1, 2] by removing some idealized
assumptions made in that previous work. We also describe how MTS can be implemented on existing CAN
network adapters. We address the problem of how to control priority inversion (low-priority message being
transmitted ahead of a higher-priority one) within CAN network adapters and evaluate different solutions
for this problem.
We measure various execution overheads associated with MTS by implementing it on a Motorola 68040
processor with the EMERALDS real-time operating system [11]. EMERALDS is an OS designed for use
in distributed, embedded control applications. For MTS's implementation, we use EMERALDS to provide
basic OS functionality such as interrupt handling and context switching. Using an emulated CAN network
device (another 68040 acting as a CAN network adapter and connected to the main node through a VME
bus), we present detailed measurements of all execution, interrupt handling, task scheduling, and context
switching overheads associated with MTS to show the feasibility of using MTS for control applications.
In the next section we give an overview of the CAN protocol. Section 3 describes the various types
of messages in our target application workload. They include both real-time and non-real-time messages.
Section 4 gives the MTS algorithm. Section 5 discusses issues related to implementation of MTS, focusing
on the priority inversion problem. Section 6 presents implementation overhead measurements. The paper
concludes with Section 7.
2 Controller Area Network (CAN)
The CAN specification defines the physical and data link layers (layers 1 and 2 in the ISO/OSI reference
model). Each CAN frame has seven fields as shown in Figure 1, but we are concerned only with the data
length (DL) and the identifier (ID) fields. The DL field is 4 bits wide and specifies the number of data bytes
in the data field, from 0 to 8. The ID field can be of two lengths: the standard format is 11-bits, whereas
the extended format is 29-bits. It controls both bus arbitration and message addressing, but we are interested
only in the former which is described next.
CAN makes use of a wired-OR (or wired-AND) bus to connect all the nodes (in the rest of the paper
we assume a wired-OR bus). When a processor has to send a message it first calculates the message ID
which may be based on the priority of the message. The ID for each message must be unique. Processors
pass their messages and associated IDs to their bus interface chips. The chips wait till the bus is idle, then
write the ID on the bus, one bit at a time, starting with the most significant bit. After writing each bit, each
chip waits long enough for signals to propagate along the bus, then it reads the bus. If a chip had written
a 0 but reads a 1, it means that another node has a message with a higher priority. If so, this node drops
out of contention. In the end, there is only one winner and it can use the bus. This can be thought of as a
distributed comparison of the IDs of all the messages on different nodes and the message with the highest
ID is selected for transmission.
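This arbitration amounts to a distributed maximum computation over the contending IDs, which the short simulation below makes concrete; the 11-bit width and the bit-level loop are illustrative assumptions.

def arbitrate(ids, width=11):
    # Simulate CAN bitwise arbitration on a wired-OR bus (1 is dominant):
    # each contender writes its ID most-significant bit first and drops out
    # if it wrote 0 but reads back 1.  The highest ID wins the bus.
    contenders = list(ids)
    for bit in range(width - 1, -1, -1):
        bus = max((i >> bit) & 1 for i in contenders)   # wired-OR of written bits
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    assert len(contenders) == 1                          # message IDs are unique
    return contenders[0]

print(bin(arbitrate([0b10100000001, 0b01000000011, 0b00000100001])))  # highest ID wins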
3 Workload Characteristics
In control applications, some devices exchange periodic messages (such as motors and drives used in industrial
applications) while others are more event-driven (such as smart sensors). Moreover, operators may
need status information from various devices, thus generating messages which do not have timing con-
straints. So, we classify messages into three broad categories, (1) hard-deadline periodic messages, (2)
hard-deadline sporadic messages, and (3) non-real-time (best-effort) aperiodic messages. A periodic message
has multiple invocations, each one period apart (note that whenever we use the term message stream to
refer to a periodic message, we are referring to all invocations of that message). Sporadic messages have a minimum
interarrival time (MIT) between invocations, while non-real-time messages are completely aperiodic, but
they do not have deadline constraints.
Low-Speed vs. High-Speed Real-Time Messages
Messages in a real-time control system can have a wide range of deadlines. For example, messages from a
controller to a high-speed drive may have deadlines of few hundreds of microseconds. On the other hand,
messages from devices such as temperature sensors can have deadlines of a few seconds because the physical
property being measured (temperature) changes very slowly. Thus, we further classify real-time messages
into two classes: high-speed and low-speed, depending on the tightness of their deadlines. As will be clear
in Section 4, the reason for this classification has to do with the number of bits required to represent the
deadlines of messages.
Note that "high-speed" is a relative term - relative to the tightest deadline D 0 in the workload. All
messages with the same order of magnitude deadlines as D 0 (or within one order of magnitude difference
from D 0 ) can be considered high-speed messages. All others will be low-speed.
4 The Mixed Traffic Scheduler
Fixed-priority deadline monotonic (DM) scheduling [12] can be used for CAN by setting each message's
ID to its unique priority as in [8, 9]. However, in general, fixed-priority schemes give lower utilization than
other schemes such as non-preemptive earliest-deadline¹ (ED). This is why several researchers have used
ED for network scheduling [15-17]. This motivates us to use ED to schedule messages on CAN, meaning
that the message ID must contain the message deadline (actually, the logical inverse of the deadline for a
wired-OR bus). But as time progresses, absolute deadline values get larger and larger, and eventually they
will overflow the CAN ID. This problem can be solved by using some type of a wrap-around scheme (which
we present in Section 4.1) but even then, putting the deadline in the ID forces one to use the extended CAN
format with its 29-bit IDs. Compared to the standard CAN format with 11-bit IDs, this wastes 20-30%
bandwidth, negating any benefit obtained by going from fixed-priority to dynamic-priority scheduling. This
makes ED impractical for CAN.
In this section we present the MTS scheduler which combines ED and fixed-priority scheduling to
overcome the problems of ED.
4.1 Time Epochs
As already mentioned, using deadlines in the ID necessitates having some type of a wrap-around scheme.
We use a simple scheme which expresses message deadlines relative to a periodically increasing reference
called the start of epoch (SOE). The time between two consecutive SOEs is called the length of epoch, ℓ.
Then, the deadline field for message i will be the logical inverse of d_i - SOE(t), where d_i is the
absolute deadline of message i and t is the current time (it is assumed that all nodes have synchronized
clocks [18]).
4.2 MTS
The idea behind MTS is to use ED for high-speed messages and DM for low-speed ones. First, we give
high-speed messages priority over low-speed and non-real-time ones by setting the most significant bit to 1
in the ID for high-speed messages (Figure 2a). This protects high-speed messages from all other types of
traffic. If the uniqueness field is to be 5 bits [2] (allowing 32 high-speed messages), and the priority field
is 1 bit, then the remaining 5 bits are still not enough to encode the deadlines (relative to the latest SOE).
Our solution is to quantize time into regions and encode deadlines according to which region they fall in. To
distinguish messages whose deadlines fall in the same region, we use the DM-priority of a message as its
uniqueness code. This makes MTS a hierarchical scheduler. At the top level is ED: if the deadlines of two
messages can be distinguished after quantization, then the one with the earlier deadline has higher priority.
(Footnote 1: Non-preemptive scheduling under release time constraints is NP-hard in the strong sense [13]. However, Zhao and Ramamritham
[14] showed that ED performs better than other simple heuristics.)
Figure 2: Structure of the ID for MTS. Parts (a) through (c) show the IDs for high-speed, low-speed, and
non-real-time messages, respectively (the fields shown are the fixed-priority bits, the deadline field, and the DM-priority field).
Figure 3: Quantization of deadlines (relative to start of epoch) for m = 3.
At the lower level is DM: if messages have deadlines in the same region, they will be scheduled by their
DM priority.
We can calculate the length of a region (l_r) as l_r = (ℓ + D_max)/2^m, where D_max is the longest relative deadline of any
high-speed message and m is the width of the deadline field (5 bits in this case). This is clear from Figure 3
(shown for m = 3). The worst-case situation occurs if a message with deadline D_max is released just before
the end of epoch so that its absolute deadline lies ℓ + D_max beyond the current SOE. The deadline field must
encode this time span using m bits, leading to the above expression for l_r.
We use DM scheduling for low-speed messages and fixed-priority scheduling for non-real-time ones,
with the latter being assigned priorities arbitrarily. The IDs for these messages are shown in Figures 2 (b)
and (c) respectively. The second-most significant bit gives low-speed messages higher priority than non-
real-time ones.
This scheme allows up to 32 different high-speed messages (periodic or sporadic), 512 low-speed messages
(periodic or sporadic), and 480 non-real-time messages (see footnote 2), which should be sufficient for most
applications. (Footnote 2: IDs whose six most significant bits are consecutive zeros are disallowed; this means that 32 codes for non-real-time messages are illegal, which leaves 512 - 32 = 480 codes.) A sketch of the ID construction is given below.
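Putting the three ID formats of Figure 2 together, the sketch below assembles 11-bit IDs; the exact bit positions and the rule that larger field values denote higher priority are illustrative assumptions consistent with the field widths described above.

def region(rel_deadline, l_r, m=5):
    # Quantize a deadline, measured relative to the current SOE,
    # into one of 2^m regions of length l_r.
    return min(int(rel_deadline // l_r), (1 << m) - 1)

def high_speed_id(rel_deadline, dm_priority, l_r, m=5):
    # [ 1 | m-bit logically-inverted region | 5-bit DM uniqueness ]
    inv = (1 << m) - 1 - region(rel_deadline, l_r, m)  # earlier deadline -> larger ID
    return (1 << 10) | (inv << 5) | dm_priority

def low_speed_id(dm_priority):
    # [ 0 1 | 9-bit DM priority, larger value = higher priority ]
    return (0b01 << 9) | dm_priority

def non_real_time_id(fixed_priority):
    # [ 0 0 | 9-bit arbitrarily assigned fixed priority ]
    return fixed_priority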
4.3 ID Update Protocol
The IDs of all high-speed messages have to be updated at every SOE. Note that if ID updates on different
nodes do not coincide (almost) exactly, priority inversion can occur if the ID of a low-priority message is
updated before that of a high-priority one. Then, for a small window of time, the low-priority message will
have a higher-priority ID than the high-priority message. To avoid this problem, we must use an agreement
protocol to trigger the ID update on all nodes. The CAN clock synchronization algorithm [18] synchronizes
clocks to within 20 μs. A simple agreement protocol can be that one node is designated to broadcast a
message on the CAN bus. This message will be received by all nodes at the same time (because of the
nature of the CAN bus) and upon receiving this special message, all nodes will update the IDs of their
local messages. But this protocol has two disadvantages. First of all, too much CAN bandwidth is wasted
transmitting the extra message every ℓ seconds. Moreover, a separate protocol must be run to elect a new
leader in case the old leader fails. Instead, we use the following protocol which is not only robust but also
consumes less bandwidth. Each node has a periodic timer which fires every ' seconds at which time the
node takes the following actions:
1. Set a flag to inform the CAN device driver that the ID update protocol has begun.
2. Configure the CAN network adapter to receive all messages (i.e., enter promiscuous mode by adjusting
the receive filter).
3. Increment the data length (DL) field of the highest-priority ready message on that node.
The first incremented-DL message to be sent on the CAN bus will serve as a signal to all nodes to update
the IDs of their messages. If the original DL of the message is less than 8, then incrementing the DL will
result in transmission of one extra data byte (device drivers on receiving nodes strip this extra byte before
forwarding the message to the application as described later). If the DL is already 8, CAN adapters allow
the 4-bit DL field to be set to 9 (or higher) but only 8 data bytes are transmitted.
Now, each node starts receiving all messages transmitted on the CAN bus. The device driver on each
node has a table listing the IDs of all message streams in the system along with their data lengths. As
messages arrive, the device driver compares their DL field to the values in this table until it finds a message
with an incremented DL field. All nodes receive this message at the same time and they all take the following
actions:
1. Restore the receive filter to re-enable message filtering in the NA.
2. If the local message whose DL field was incremented by the periodic timer has not been transmitted
yet, then decrement the DL field back to its original value.
3. Update message IDs to reflect the new SOE.
Each node receives the incremented-DL message at the same time, so the ID update on each node starts at
the same time. After the first incremented-DL message completes, the next-highest-priority message begins
transmission. As long as all nodes complete their ID updates before this message completes (a window of
at least 55 μs, since this message contains at least one data byte), all messages will have updated IDs by the
time the next bus arbitration round begins and no priority inversion will occur. In case one or more nodes are
slow and cannot complete the ID update within this window of time, all nodes can be configured to do the
update while the nth message after the first incremented-DL message is in transmission, where n is a small
number large enough to allow the slowest node to calculate all new IDs and then just write these to the NA
while the nth message is in transmission.
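The control flow of the protocol can be sketched in C as follows. All helper names (na_*, expected_dl, and
so on) are hypothetical stand-ins for the driver's own primitives, so this illustrates the sequence of actions
rather than the actual EMERALDS code.

    struct can_msg { unsigned id; int dl; /* data bytes omitted */ };

    extern void na_enter_promiscuous(void);     /* hypothetical helpers  */
    extern void na_restore_filter(void);
    extern void na_adjust_dl(int buf, int delta);
    extern int  highest_priority_buf(void);
    extern int  marker_still_buffered(void);    /* our marker not sent?  */
    extern int  expected_dl(unsigned id);       /* from the stream table */
    extern void update_all_ids(void);           /* new SOE-relative IDs  */

    static int updating;

    void soe_timer_handler(void)                /* fires once per epoch  */
    {
        updating = 1;
        na_enter_promiscuous();                 /* receive all messages  */
        na_adjust_dl(highest_priority_buf(), +1);   /* the signal        */
    }

    void rx_handler(const struct can_msg *m)
    {
        if (updating && m->dl > expected_dl(m->id)) {
            na_restore_filter();
            if (marker_still_buffered())
                na_adjust_dl(highest_priority_buf(), -1);
            update_all_ids();
            updating = 0;
        }
    }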
This protocol incurs a network overhead of 16 bits every ℓ seconds (compared to 47 bits per epoch
for the simple leader-based agreement protocol). Reception of the first incremented-DL message causes
the device drivers to set the DL fields of their local messages back to their original values, but before this
can complete, the next transmission (also with an incremented DL field) has already started. These two
messages have 8 extra data bits each (worst-case) which leads to the 16-bit overhead. On the CPU side, the
periodic process incurs some overhead. Moreover, while the network adapter's filter is disabled, the device
drivers must process two messages which may or may not be meant for that node. The device drivers must
perform filtering in software and discard messages not meant for their node. Measurements of these various
CPU overheads are in Section 6.
5 Implementation
In this section, we present schemes to implement MTS on Motorola's TouCAN module [19] which features
message buffers and internal arbitration between transmission buffers based on message ID. As such,
TouCAN is representative of modern CAN NAs.
In the following, we present a brief description of TouCAN, the problems faced when implementing
real-time scheduling on CAN, and our solution to these problems for MTS.
5.1 Motorola TouCAN
TouCAN is a module developed by Motorola for on-chip inclusion in various microcontrollers. TouCAN
lies on the same chip as the CPU and is interconnected to the CPU (and other on-chip modules) through
Motorola's intermodule bus. Motorola is currently marketing the MC68376 [19] microcontroller which
incorporates TouCAN with a CPU32 core.
TouCAN has 16 message buffers. Each buffer can be configured to either transmit or receive messages.
When more than one buffer has valid messages waiting for transmission, TouCAN picks the buffer with
the highest-priority ID and contends for the bus with this ID. In this respect TouCAN differs from older
CAN network adapters such as the Intel 82527 [20] which arbitrate between buffers using a fixed-priority,
daisy-chain scheme which forces the host CPU to sort messages according to priority before placing them in
the network adapter buffers. This was one of the main reasons we picked TouCAN for implementing MTS.
At this time, TouCAN is available only with the MC68376 microcontroller. To implement MTS within
EMERALDS on TouCAN, we would first have to port EMERALDS to the MC68376 microcontroller. To
avoid this, we instead used device emulation [21] under which a general-purpose microcontroller is made
to emulate a network adapter. This emulator interfaces to the host CPU through an I/O bus. The emulator
presents the host CPU the same interface that the actual network adapter would. The emulator receives
commands from the host CPU, performs the corresponding actions, and produces the same results that the
actual network adapter would, thus providing accurate measurements of various overheads such as interrupt
handling and message queuing on host CPU. We use a 68040 board to emulate the TouCAN module and
connect it to the host CPU (another 68040) through a VME bus.
5.2 MTS on CAN
In implementing MTS on CAN, our goal is to minimize the average overhead suffered by the host node for
transmitting a message. This overhead has the following components:
1. Queuing/buffering messages in software if network adapter buffers are unavailable.
2. Transferring messages to network adapter.
3. Handling interrupts related to message transmission.
In CAN, priority inversion can be unbounded. If the adapter buffers contain low-priority messages, these
messages will not be sent as long as there are higher-priority messages anywhere else in the network. Con-
sequently, a high-priority message can stay blocked in software for an indeterminate period of time, causing
it to miss its deadline. Because of this priority inversion problem, any network scheduling implementation
for CAN (regardless of which scheduling policy - DM or MTS - is being implemented) has to ensure that
adapter buffers always contain the highest-priority messages and only lower-priority messages are queued
in software.
Suppose B buffers are allocated for message transmission (usually B is about two-thirds of the total
number of buffers; see Section 6). If the total number of outgoing message streams is B or less, then MTS's
implementation is straightforward: assign one buffer to each stream. Whenever the CAN device driver
receives a message for transmission, it simply copies that message to the buffer reserved for that stream.
In this case, no buffering is needed within the device driver which also means that there is no need for the
CAN adapter to generate any interrupts upon completion of message transmission 3 , and this leads to the
lowest-possible host CPU overhead.

3 The CAN adapter must be programmed to generate interrupts if messages are queued in software waiting
for adapter buffers to become available, which is not the case here.
When the number of message streams exceeds B, some messages have to be buffered in software. To
reduce host CPU overhead, we want to buffer the fewest possible messages while avoiding priority inversion.
Just as MTS treats low-speed and high-speed messages differently for scheduling purposes, we treat these
messages differently for implementation purposes as well. Our goal is to keep the overhead for frequent
messages (those belonging to high-speed periodic streams) as low as possible to get a low average per-message
overhead. In our implementation, if the number of periodic high-speed message streams NHp is
less than B, then we reserve NHp buffers for high-speed periodic streams and treat them the same as before
(no buffering in software).
The remaining L = B − NHp buffers are used for high-speed sporadic, low-speed, and non-real-time
messages. As these messages arrive at the device driver for transmission, they are inserted into a priority-
sorted queue. To avoid priority inversion, the device driver must ensure that the L buffers always contain
the L messages at the head of the queue. So, if a newly-arrived message has priority higher than the lowest-priority
message in the buffer, it "preempts" that message by overwriting it. This preemption increases CPU
overhead but is necessary to avoid priority inversion. The preempted message stays in the device driver
queue and is eventually transmitted according to its priority.
Among these L buffers, the buffer containing the (I+1)-th lowest-priority message is configured to trigger
an interrupt upon message transmission (I is defined later). This interrupt is used to refill the buffers with
queued messages. I must be large enough to ensure that the bus does not become idle while the interrupt is
handled and buffers are refilled. Usually an I of 1 or 2 is enough (which can keep the bus busy for 47-94 μs
minimum). Note that this puts a restriction on L: it must be greater than I. Making L less than or equal
to I can lead to the CAN bus becoming idle while the ISR executes, but makes more buffers available for
high-speed periodic messages. This can be useful if low-speed messages make up only a small portion of
the workload and high-speed sporadic messages are either non-existent or very few.
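The preemption logic just described can be sketched as follows; the helper names are again hypothetical.

    struct can_msg;

    extern void     queue_insert_sorted(struct can_msg *m); /* by 11-bit ID */
    extern unsigned lowest_buffered_id(void);
    extern struct can_msg *evict_lowest_buffer(void);
    extern void     copy_to_buffer(struct can_msg *m);
    extern unsigned msg_id(const struct can_msg *m);

    void send_queued(struct can_msg *m)
    {
        /* lower numeric ID = higher priority on CAN */
        if (msg_id(m) < lowest_buffered_id()) {
            /* preempt: the displaced message returns to the software
               queue and will be copied to the adapter a second time */
            queue_insert_sorted(evict_lowest_buffer());
            copy_to_buffer(m);
        } else {
            queue_insert_sorted(m);
        }
    }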
If NHp ≥ B, then we must queue even high-speed periodic messages in software. Then we have a single
priority-sorted queue for all outgoing messages and all B buffers are filled from this queue.
For streams with dedicated buffers, the CPU overhead is just the calculation of the message ID and transferring
the message data and ID to the network adapter. Note that message data can be copied directly from
user space to the network adapter to keep overhead to a minimum.
For messages which are queued in software, there is an extra overhead of inserting the message in the
queue (including copying the 8 or fewer bytes of message data from user space to device driver space before
inserting in the queue), plus the overhead for handling interrupts generated upon message transmission.
This interrupt overhead is incurred once every Q − I message transmissions, where Q is the number of
buffers being filled from the queue (Q can be B or L depending on whether high-speed periodic messages
are buffered or not). Also, each message will potentially have to preempt one other message. The preempted
message had already been copied to the network adapter once and now it will have to be copied again, so
the preemption overhead is equivalent to the overhead for transferring the message to the network adapter.
Table 1 summarizes the overheads for various types of messages. Measurements of these overheads are in
Section 6.
Note that DM scheduling also incurs similar overheads. The only difference is that the ID of message
streams under DM is fixed, so a new ID does not have to be calculated each time. Other than that,
implementing DM on TouCAN is no different than implementing MTS.
Message type Overhead
Not queued Calculate ID + copy to NA
Queued Calculate ID + insert in priority queue + copy to NA
Table
1: Summary of overheads for MTS's implementation on TouCAN.
6 Results
Schedulability of MTS as compared to DM and ED has been evaluated and published in [1, 2]. Here, we
present a measurement of various MTS implementation overheads and their impact on MTS schedulability.
The overhead measurements for implementation of MTS on a 25MHz Motorola 68040 (no cache) with
the EMERALDS RTOS are in Table 2. From this data, we see that high-speed messages with dedicated
network adapter buffers incur an overhead of 3.0 (ID calculation) + 7.8 (transfer to NA) + 6.0
(miscellaneous) = 16.8 μs per message.
Operation Overhead (μs)
Calculate ID (high-speed messages) 3.0
Insert in priority queue (including copying to device driver memory) 6.3
Transfer message to NA (8 data bytes) 7.8
Preempt message 7.8
Interrupt handling and dequeuing of transmitted messages 42.4
Miscellaneous (parameter passing, etc.) 6.0
Table
2: CPU overheads for various operations involved in implementing MTS.
If high-speed periodic messages are queued, then the average per-message overhead depends on the number
of buffers used for transmission (Q). TouCAN has 16 buffers. Of these, 5-6 are usually used for message
reception and their IDs are configured to receive the various message streams needed by the node. This
leaves about 10 buffers for message transmission. Then, under a worst-case scenario, message transmission
incurs an average overhead (assuming Q = 10 and I = 2) of

3.0 (ID calculation) + 6.3 + 1.55 l_Q (queuing) + 7.8 (copy to NA) + 7.8 (preemption)
+ 42.4/(Q − I) (interrupt) + 6.0 (miscellaneous) = 36.2 + 1.55 l_Q μs/msg,

where in the worst case l_Q is the total number of message streams using that queue. Low-speed and non-
real-time messages have fixed IDs, so they incur an overhead of 33.2 + 1.55 l_Q μs/msg if low-speed and
high-speed messages share the same queue.
If high-speed messages are using dedicated buffers, then Q is smaller for low-speed messages. Assuming
only 3 buffers are available (Q = 3) and I = 2, then low-speed and non-real-time messages incur overheads
of 70.3 + 1.55 l_Q μs/msg, while high-speed sporadic messages have overheads of 73.3 + 1.55 l_Q μs/msg.
From these numbers we see that if a certain node has 7 high-speed periodic streams, 1 high-speed
sporadic stream, and a few streams of low-speed and non-real-time messages, and if the high-speed periodic
messages make up 90% of the outgoing traffic while high-speed sporadic/low-speed/non-real-time messages
make up the rest, then the average per-message overhead (a weighted average of the dedicated-buffer and
queued overheads above) falls in the 20-25 μs range.
Overhead is significantly higher if the number of high-speed periodic streams is large enough that high-speed
messages have to be queued. In that case, per-message overhead can be twice as much as the overhead when
high-speed periodic streams have dedicated buffers. Fortunately, real-time control applications do not have
more than 10-15 tasks per node (the well-known avionics task workload [22, 23] - which is accepted as
typifying real-time control applications - is an example). Not all tasks send inter-node messages and those
that do typically do not send more than 1-2 messages per task. This indicates that for most applications, dedicated
buffers should be available for high-speed message streams, resulting in a low per-message overhead
in the 20-25 μs range.
We used a simple linked list to sort messages in the priority queue. This works well for a small number
of messages (5-10) that typically need to be in the queue. For larger numbers of messages, a sorted heap will
give lower overhead.
Note that these overheads are applicable to DM as well. The only difference is that under DM, the ID does
not have to be calculated, so the per-message overhead will be 3 μs less than for MTS.
ID Re-adjustment at End of Epoch
Table
3 lists the CPU overheads incurred during the ID update protocol. Overhead for the periodic task
includes all context switching and CPU scheduling overheads. One context switch occurs when the task
wakes up and another when the task blocks. Both of these are included in the overhead measurements.
Operation Overhead (μs)
Periodic task 68.0
Device driver interrupt (message arrival) 40.4
Read message from NA (8 data bytes) 7.8
Software filtering and DL lookup 3.0
ID update 2.8 per message
Table
3: CPU overheads for various operations involved in updating message IDs.
During each ID update, the device driver receives two messages (each incurring an overhead of 40.4 + 7.8
+ 3.0 = 51.2 μs, including all context switching overheads). After receiving the first message, the IDs of
high-speed messages are updated. Assuming the IDs of 5 messages need to be updated, the total overhead per
epoch becomes 68.0 + 2(51.2) + 5(2.8) = 184.4 μs. If ℓ = 2ms, the ID update takes up about 9% of CPU time. This motivates us to
increase ℓ.
Increasing ℓ increases the level of quantization of deadlines, which results in reduced schedulability for
high-speed messages. But on the other hand, the network overhead associated with ID updates (16 bits
per epoch) decreases, leading to increased schedulability. For ℓ = 2ms, these 16 bits per epoch consume
only 0.8% of the network bandwidth for a 1Mb/s bus, but their impact on network schedulability (due to
their blocking effect) is much higher. Our measurements showed that with this extra overhead, about 2-3
percentage points fewer workloads are feasible under MTS (for the same workload utilization) than without
this overhead. As such, increasing ℓ can result in a sizeable improvement in schedulability due to reduced
ID update overhead, which can offset the loss in schedulability due to coarser quantization.
Figure 4 shows the effect of increasing ℓ on schedulability. For each data point, we generate 1000
workloads and measure the percentage found feasible under MTS using the schedulability conditions in [2].
Each workload has 8-15 high-speed periodic streams, 2 or 6 high-speed sporadic streams, 25 low-speed
periodic streams, and 4 low-speed sporadic streams. Deadlines of high-speed messages are set randomly
in the 0.5-2ms range while those for low-speed messages are set randomly between 2-100ms. Periods of
periodic messages are calculated by adding a small random value to the deadline, while MIT of sporadic
streams is set to 2s (for both low-speed and high-speed sporadic streams). Different data points are obtained
by varying the number of high-speed periodic streams from 8 to 15 which leads to a variation in workload
utilization roughly in the 50-100% range. All results include the overhead resulting from 16 extra bits per
epoch for ID updates.
This figure shows that when ℓ is doubled from 2ms to 4ms, network schedulability is actually improved
slightly when two high-speed sporadic streams are in the workload. But when six sporadic streams are used,
loss in schedulability from coarser quantization is more than the gain from reduced ID update overhead,
so that 1-2 percentage points fewer workloads are feasible. These results show that for light-to-moderate
high-speed sporadic loads, increasing ℓ to 4ms continues to give good performance, and even for heavy
high-speed sporadic loads, 4ms results in only a slight degradation in performance.
If ℓ is increased to 3ms, then the ID update CPU overhead reduces to about 6% of CPU time, whereas
with ℓ = 4ms it becomes 4.6% of CPU time.
7 Conclusion
The CAN standard message frame format has an 11-bit ID field. If fixed-priority scheduling (such as DM)
is used for CAN, some of these bits go unused. The idea behind MTS is to use these extra bits to enhance
network schedulability. MTS places a quantized form of the message deadline in these extra bits while
using the DM-priority of messages in the remaining bits. This enhances schedulability of the most frequent
messages in the system (high-speed messages) so that MTS is able to feasibly schedule more workloads
than DM.
Figure 4: Impact of changing ℓ on MTS schedulability (percent feasible workloads vs. utilization (%), for
ℓ = 2ms and ℓ = 4ms).
Since message IDs are based on deadlines, they must be periodically updated. We presented a protocol
to perform this update without any priority inversion. This protocol consumes about 5-6% of CPU time,
but considering the large improvements in network schedulability that MTS displays over DM, this extra
overhead is justified.
We also presented a scheme to implement MTS on the TouCAN network adapter which is representative
of modern CAN network adapters. The biggest challenge in implementing CAN scheduling (be it MTS
or DM) is controlling priority inversion within the network adapter. We showed that because of CAN's
characteristics (short message size), preemption of a message in the adapter by a newly-arrived, higher-priority
outgoing message is an effective method for avoiding priority inversion.
A future avenue of research can be to study message reception issues for CAN to try to reduce the
average per-message reception overhead. Unlike message transmission, message reception does not depend
on which network scheduling policy (DM or MTS) is used. Message reception overheads can be reduced
by optimizing interrupt handling, using polling (instead of interrupts) to detect message arrival, or using a
combination of interrupts and polling.
--R
"Non-preemptive scheduling of messages on Controller Area Network for real-time control applications,"
"Scheduling messages on Controller Area Network for real-time CIM applications,"
"Smart networks for control,"
"TTP - a protocol for fault-tolerant real-time systems,"
Road vehicles - Interchange of digital information - Controller area network (CAN) for high-speed communication
"An inside look at the fundamentals of CAN,"
"Analyzing real-time communications: Controller Area Network (CAN),"
"Calculating Controller Area Network (CAN) message response times,"
"Real-time communications in a computer-controlled workcell,"
"EMERALDS: A microkernel for embedded real-time systems,"
"On the complexity of fixed-priority scheduling of periodic, real-time tasks,"
"On non-preemptive scheduling of periodic and sporadic tasks,"
"Simple and integrated heuristic algorithms for scheduling tasks with time and resource constraints,"
"A scheme for real-time channel establishment in wide-area networks,"
"Real-time communication in multi-hop networks,"
"On the ability of establishing real-time channels in point-to-point packet-switched networks,"
"Implementing a distributed high-resolution real-time clock using the CAN-bus,"
MC68336/376 User's Manual
Communications Controller Architectural Overview
"The END: An emulated network device for evaluating adapter design,"
"Building a predictable avionics plarform in Ada: A case study,"
"Generic avionics software specification,"
--TR | priority inversion;Controller Area Network CAN;message scheduling;distributed real-time systems;network scheduling implementation |
332948 | Analysis of ISP IP/ATM network traffic measurements. | This paper presents network traffic measurements collected from a commercial Internet Service Provider (ISP) whose traffic is being carried over an ATM backbone network. Much of the aggregate traffic is Web-related, and thus represents a Web/TCP/IP/AAL-5/ATM protocol stack. Four traces have been collected at the AAL-5 frame level, using a NavTel IW95000 ATM test set. These traces provide a detailed look at protocol-level behaviours, but only for rather short time durations (e.g., 30-40 seconds per trace). Analyses focus on aggregate-level traffic characteristics, as well as host-level and TCP connection-level traffic characteristics.The main workload characteristics observed from the measurements are the burstiness of the aggregate and individual traffic flows, a non-uniform distribution of traffic amongst hosts, a trimodal packet size distribution, and the strong presence of network-level effects (e.g., client modem speed, network Maximum Transmission Unit (MTU), network round trip time, TCP slow start) in the traffic structures seen. The workload characteristics are quite consistent across the four traces studied. | Introduction
This paper presents network traffic measurements
from a commercial Internet Service Provider
(ISP) environment in western Canada. This ISP
offers Internet service (primarily Web access) to
its customers, and is using an OC-3 (155 Mbps)
ATM backbone network to carry the traffic. The
TCP/IP packet traffic is converted to ATM cells
using AAL-5 as the adaptation layer protocol,
resulting in aggregate traffic that represents a
Web/TCP/IP/AAL-5/ATM protocol stack.
The network traffic measurements have been
collected using a GN NetTest NavTel IW95000
ATM measurement device. These measurements
provide fine-grain traffic information (i.e., complete
packet payloads) over short time periods (e.g., 30-40
seconds per trace). A total of four traces have been collected to
date. These measurements provide enough information
for off-line protocol decoding at the AAL-5,
IP, and TCP layers, making possible the identification
and characterization of individual traffic flows
(i.e., TCP connections), as well as aggregate traffic
characteristics, in our analyses.
The primary purpose of our measurements is
workload characterization: trying to understand
the characteristics of end-user Web/TCP/IP/ATM
network traffic. We are especially interested in characterizing
the aggregate traffic stream as a function
of its constituent parts, since understanding statistical
multiplexing behaviours is important for net-work
traffic management.
The main highlights from our measurements
are the following:
- the burstiness of the aggregate and individual traffic flows;
- the non-uniform distribution of the generated traffic amongst hosts;
- the asymmetric nature of client-server traffic patterns;
- the presence of network-level effects (e.g., client modem speed, network Maximum Transmission
Unit (MTU), network round trip time, TCP slow start) in the traffic structures observed.
We also find that a log-normal distribution provides
a reasonable fit for connection duration and
bytes per connection, similar to observations made
by other researchers [1, 5]. The short duration of
our traces unfortunately precludes the analysis of
heavy-tailed behaviours in the Web traffic, which
have an important impact on traffic characteristics
(e.g., network traffic self-similarity [4]).
The remainder of this paper is organized as
follows: Section 2 describes the measurement en-
vironment, and the tools used for data collection
and analysis. Section 3 presents the aggregate-level
traffic measurement results, and Section 4 presents
the TCP connection-level traffic measurement results
for the four traces. Finally, Section 5 concludes
the paper.
2 Measurement Methodology
The measurements reported in this paper were
collected using a NavTel IW95000 ATM test set.
This test set provides for the non-intrusive capture
of complete ATM cell-level or AAL-5 frame-level
traffic streams, including packet headers and payloads.
The test set timestamps each ATM cell or
frame with a one microsecond timestamp resolu-
tion, and records the captured traffic into memory
in a compressed proprietary binary data format.
The size of the memory capture buffer on the ATM
test set (e.g., 24 Megabytes RAM on the test set
that we used) determines the maximum amount of
data that can be captured at a time. This capture
buffer size, along with the volume of the network
traffic to be measured, determines the maximum
interval of time for which traces can be collected
(e.g., several seconds at full OC-3 rates, and several
minutes at lower rates). For our traces, we were able
to record about 35 seconds worth of traffic data at
a time.
Once the capture buffer is full, traces can be
saved to disk or copied to another machine using
ftp, for off-line trace analysis. The analyses reported
in this paper used a C program specially
written to decode the uncompressed proprietary
data format used by the NavTel IW95000. This
program converts the binary data file into an ASCII
format with TCP/IP protocol information.
An example of the human-readable trace format
is shown 1 in Figure 1. This format includes a
timestamp (in microseconds, relative to the start of
the trace), the protocol type(s) recognized (if any),
and then selected fields from the IP and TCP packet
header (if applicable), such as IP source and destination
address, IP packet size (including TCP and
IP headers), TCP source and destination port num-
bers, and TCP sequence number information, both
for data and for acknowledgments. Once available
in this latter form, it is straightforward to construct
customized scripts to process a trace file and extract
the desired information, such as timestamp, packet
size, as well as IP and TCP protocol information.
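As an illustration, a minimal C sketch of parsing one such line; the field order is inferred from the
example in Figure 1 (timestamp, protocol tags, source IP and port, destination IP and port, IP packet
size, sequence and acknowledgement numbers), so treat the format string and struct as assumptions.

    #include <stdio.h>

    struct pkt {
        long ts_us;                /* microseconds from start of trace */
        char src[20], dst[20];     /* sanitized dotted-quad addresses  */
        int  sport, dport, iplen;
        unsigned long seq, ack;
    };

    int parse_line(const char *line, struct pkt *p)
    {
        return sscanf(line, "%ld IP TCP %19s %d %19s %d %d %lu %lu",
                      &p->ts_us, p->src, &p->sport, p->dst, &p->dport,
                      &p->iplen, &p->seq, &p->ack) == 8;
    }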
A total of four traces were captured with the
NavTel IW95000 test set. The first trace was collected
on May 3, 1998. This trace served as the
"guinea pig" for the design and debugging of the
programs for trace decoding and trace analysis. The
next three traces were collected on May 13, 1998.
The traces were taken one at a time, approximately 30
minutes apart, in the evening hours of a single
working day, and are deemed to be representative
samples of ISP network traffic.

1 Note that the IP addresses in Figure 1 (and throughout the paper)
have been "sanitized" to conceal their true identities.
Table 1 provides a statistical summary of the
four traces collected. The remaining sections report
on the analysis of these traces.
3 Aggregate-Level Traffic Characteristics
This section discusses aggregate traffic characteristics
for the four traces. The analyses focus
on traffic profile, packet size distribution, and host-level
traffic characteristics.
3.1 Traffic Profile
Figure 2 shows the traffic profile for one of the
four traces of ISP network packet traffic (Trace 2).
This trace contains 45,434 packets transmitted in
36.48 seconds, for an overall mean bit rate of 4.15
Mbps. The graphs in Figure 2 show the volume of
traffic (in bits, including all protocol headers) transmitted
per time interval, for three different time interval
sizes: 1.0 seconds in Figure 2(a), 0.1 seconds in Figure 2(b), and 0.01 seconds in Figure 2(c).
The graphs in Figure 2 show that the network
traffic is bursty, across several time scales. The aggregate
bit rate hovers around the overall mean of
4.15 Mbps, with a low of 3.0 Mbps and a high of
6.1 Mbps during the interval measured. The burstiness
is even more pronounced at the finer-grain time
scales illustrated, for which more data points are
plotted. This burstiness across time scales is indicative
of network traffic self-similarity [11, 18], though
this trace is far too short in duration to make any
rigorous claim to this effect.
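Profiles such as those in Figure 2 reduce to a simple binning computation over the decoded (timestamp,
size) records; a sketch, with the array bound and names being illustrative:

    #include <stdio.h>
    #define MAX_BINS 4000

    void traffic_profile(const long *ts_us, const int *len, int n,
                         long bin_us)
    {
        long bits[MAX_BINS] = {0};
        int i, nbins = 0;
        for (i = 0; i < n; i++) {
            int b = (int)(ts_us[i] / bin_us);   /* interval index */
            if (b >= MAX_BINS) break;
            bits[b] += 8L * len[i];             /* bytes -> bits  */
            if (b + 1 > nbins) nbins = b + 1;
        }
        for (i = 0; i < nbins; i++)             /* time (s), bits */
            printf("%.2f %ld\n", i * (bin_us / 1e6), bits[i]);
    }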
3.2 Packet Size Distribution
Further understanding of the ISP network traffic
characteristics can be gained by studying the IP
packet size distribution. Figure 3 shows the frequency
distribution of the observed IP packet sizes
(including headers), using a linear scale on the vertical
axis in Figure 3(a), and a logarithmic scale on
the vertical axis in Figure 3(b). These results are
cumulative for all four traces.
Figure 1: An Example of a TCP/IP Trace

14966 IP TCP 561.877.104.57 7410 427.86.12.704 80 508 410104 32779
22126 IP TCP 582.127.755.91 1291 419.74.87.6

Figure 2: Traffic Profile for ISP IP/ATM Trace 2: (a) Bits Transmitted per 1.0 Second Interval; (b) Bits
Transmitted per 0.1 Second Interval; (c) Bits Transmitted per 0.01 Second Interval

Figure 3(a) shows that the IP packet size distribution
is trimodal. The distribution is dominated
(30%) by 40-byte packets, which carry TCP acknowledgements
(and no TCP data). There is a
significant spike (12%) at 1500 bytes, which is the
Maximum Transmission Unit (MTU) size used on 10
Mbps Ethernet local area networks. 2 There are
also many TCP/IP packets in the range of 300-600
bytes, which are typical sizes for packets traversing
the wide-area Internet backbone.
There are many other PDU sizes observed (see Figure 3(b)), though the distribution is clearly dominated
by the main peaks described above.

2 The primary Web site hosted by the ISP is actually on a 10 Mbps
Ethernet LAN, while the collection point for our measurements is an
OC-3 backbone ATM link used by the telecommunications provider
to carry the ISP's Internet traffic.
3.3 Host-Level Analysis
One advantage of fine-grain traffic measurements
is that it is possible to extract IP (and higher
layer) protocol information from the traces cap-
tured. In fact, we have used this capability to identify
individual IP traffic streams in our traces, in
an attempt to characterize their behaviour in the
context of the overall traffic.
We identified over 1000 different IP source and
destination addresses in our traces (see Table 1),
indicating that there is a large user community using
this ISP's services. We have also noted that (at
least in our relatively short traces) the traffic generated
by these IP addresses is highly non-uniform.
For example, it is not uncommon for 10% of the active
IP source addresses in a trace to generate over
80% of the IP packet traffic volume. The destination
address distribution is also highly non-uniform.
Table 1: Summary of ISP IP/ATM Traffic Traces

Trace 0 Trace 1 Trace 2 Trace 3
Trace Date May 3/98 May 13/98 May 13/98 May 13/98
Trace Time 7:00pm 7:00pm 7:30pm 8:00pm
Trace Size (bytes) 16,876,032 22,129,152 22,129,152 22,129,152
Trace Duration (sec) 34.30 36.42 36.48 36.79
Mean packets/sec 977 1,292 1,245 1,304
Mean bit rate (Mbps) 3.38 4.11 4.15 4.05
AAL-5 PDUs 33,524 47,057 45,434 47,977
IP Packets 33,349 46,764 45,172 47,684
TCP/IP Packets 24,257 34,072 31,914 31,417
Web/TCP/IP Packets (port 80) 17,054 24,795 24,407 22,175
IP Source Addresses 616 864 953 1019
IP Destination Addresses 662 915 1002 1064
Total IP Addresses 696 954 1054 1123
Mean AAL-5 PDU size (bytes) 435 400 418 392
Mean IP packet size (bytes) 405 370 388 363

Figure 3: IP Packet Size Distribution for ISP IP/ATM Traffic Measurements: (a) Linear Scale on the
Vertical Axis; (b) Logarithmic Scale on the Vertical Axis

Detailed statistics regarding the 10 busiest IP
source and destination addresses in one of the traces
(Trace 1) are shown in Table 2. These 10 IP addresses
in combination account for over 50% of the
packet traffic in this trace. This observation is not
surprising, because of the client-server architecture
used for many Internet services. However, it is
worth noting that the traffic to and from a specific
IP address can be just as bursty, or even more
bursty, than the overall aggregate traffic flow [18].
For example, Figure 4 shows the traffic volume to
and from a selected IP address from Table 2, representing
a Web server. The traffic profile is plotted
at the 0.1 second time granularity, similar to Figure 2(b). Note that the vertical scale is different on
each of the graphs.
3.4 Summary
This section has looked at aggregate-level traffic
characteristics in the ISP network traffic mea-
surements. The analyses have identified the bursty
nature of the traffic flow, a trimodal packet size
distribution, and a non-uniform distribution of the
generated traffic amongst the hosts using the net-
work. The next section proceeds to analyze TCP
connection-level traffic characteristics in the same
four traces.
4 Connection-Level Traffic Characteristics
This section zooms in on the measurement
data to examine finer-grain connection-level and
protocol-level behaviours.
4.1 Connection-Level Characteristics
Table 2: Traffic for Selected IP Addresses (Trace 1)

IP Packets % of Total Packets % of Total Total % of Total
Address Sent Packets Sent Recd Packets Recd Packets Packets
Total 17,241 36.87 8,403 17.97 25,644 54.84

Figure 4: Traffic Profile for a Selected IP Address: (a) Traffic to the Web Server; (b) Traffic from the
Web Server

Table 3 summarizes the number of TCP connections
observed in each of the four traces, as well
as some statistical characteristics for these TCP
connections. A TCP connection refers to the bidirectional
exchange of TCP/IP packets between a
pair of IP host addresses, using a specific source and
destination port for the connection, and (in normal
cases) a monotonically increasing sequence number
(and acknowledgement number) for the data bytes
transferred. The establishment of a new TCP connection
is done using a three-way handshake, using
the "SYN" (Synchronize) flag in the TCP packet
header. The termination of a TCP connection is
done with a similar three-way handshake using a
"FIN" flag [17].
Table 3 shows that there are about 500 completed
connections per trace. In this table,
a "complete" TCP connection is defined as one for
which the opening "SYN" and the closing "FIN"
were seen during the trace, while an "incomplete"
TCP connection is one in which the "SYN" or the
"FIN" (or both) were not present in the trace. Note
that this definition biases our results, since only
"short-lived" TCP connections will qualify as complete
TCP connections in our analysis, and these
connections account for less than half of the total
packet volume in the traces. Nevertheless, this approach
does identify over 2000 complete connections
from the four traces, which can be used for work-load
characterization. Over 95% of these completed
connections are Web-related (i.e., TCP port 80).
Figure 5 provides a closer look at the distribution
of packets, bytes, and time per (complete) TCP
connection. These graphs show the empirically observed
PDF for each distribution (as a histogram),
as well as an approximate log-normal fit to the distribution
(as a dashed line), using the same mean
Table 3: Summary of TCP Connection-Level Characteristics
Trace 0 Trace 1 Trace 2 Trace 3
Complete TCP Connections 425 586 642 458
Total IP Packets 10,352 10,445 12,065 9,715
Min packets per connection 6 3 6 5
Mean packets per connection 24.4 17.8 18.8 21.2
Max packets per connection 2,645 380 222 288
Min data bytes per connection
Mean data bytes per connection 10,519 5,474 5,595 6,992
Max data bytes per connection 1,976,090 229,673 81,027 221,935
Min duration per connection 0.131 0.177 0.194 0.158
Mean duration per connection 4.532 5.091 5.159 5.533
Max duration per connection 29.588 33.832 33.253 33.006
Incomplete TCP Connections 579 888 919 949
Total TCP Connections 1,004 1,474 1,561 1,407
and standard deviation. Note that the horizontal
axes on all three graphs represent a logarithmic
scale (base 2). While the fit is by no means tight,
the general trend in the data suggests that a log-normal
fit for the body of the distribution is a reasonable
hypothesis.
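Such a fit reduces to estimating the sample mean and standard deviation of the data in the log2 domain;
the dashed curves are then normal densities with these parameters. A sketch:

    #include <math.h>

    void lognormal_fit(const double *x, int n, double *mu, double *sigma)
    {
        double s = 0.0, s2 = 0.0;
        int i;
        for (i = 0; i < n; i++) {
            double l = log2(x[i]);      /* work in the log2 domain */
            s  += l;
            s2 += l * l;
        }
        *mu    = s / n;
        *sigma = sqrt(s2 / n - (*mu) * (*mu));
    }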
The graphs in Figure 5 support the general
observation that most connections are short: the
average connection exchanges about 6 kilobytes of
application-level data using about 20 packets, and lasts
about 5 seconds. These observations are consistent
with other measurements of Web traffic workloads
[1, 2, 4, 5] and TCP/IP traffic characteristics
[3, 15, 16] in a variety of environments [6, 8, 10].
4.2 Asymmetry of Client-Server Traffic
Because much of the traffic in our collected
traces is Web-related, there is an asymmetric client-server
structure to the traffic. That is, the client
typically sends few data bytes (e.g., the URL for a
requested document), while the server tends to send
more data bytes (e.g., the returned document).
This asymmetric behaviour is illustrated
graphically in Figure 6. These results are reported
for Trace 1, in which a total of 586 complete TCP
connections were recognized. Figure 6(a) shows the
number of packets transmitted in each direction on
these connections, using the downward vertical axis
(i.e., negative values) to represent the number of
client packets transmitted, and the upward vertical
axis (i.e., positive values) to represent the number
of server packets transmitted. As can be seen, there
is a direct correlation between the number of packets
sent in each direction on a given connection, though the client in general tends to send fewer
packets. 3 One of the reasons for this is the "delayed acknowledgement" strategy of TCP; in many cases,
the receiver of a data transfer generates one TCP acknowledgement for every two TCP data segments
that arrive [17].

Figure 5: Distributional Characteristics of TCP Connections (All Traces): (a) Number of Packets
Exchanged Per Connection; (b) Number of Bytes Exchanged Per Connection; (c) Time Duration of
Connection

Figure 6: Illustration of Asymmetric Nature of Client/Server Traffic (Trace 1): (a) Packets per
Connection; (b) Bytes per Connection
The asymmetry of traffic flows is more clearly
evident in Figure 6(b). This figure shows the number
of data bytes transmitted in each direction on
the 586 connections, using the downward vertical
axis (i.e., negative values) to represent the number
of client bytes transmitted, and the upward vertical
axis (i.e., positive values) to represent the number
of server bytes transmitted. The volume of server
bytes per connection ranges widely, from several
hundred bytes to several hundred kilobytes in this
trace. The volume of client bytes per connection
is generally low (e.g., tens of bytes to hundreds of
bytes). The most bytes transmitted on a TCP connection
by any client in this trace was 3,649 bytes.
3 There is an anomaly, however, for connection 290. Detailed analysis
of this connection shows that the client generated 340 TCP acknowledgement
packets during the 38-packet transfer of 16,253 bytes
from the server. Many of these packets were closely-spaced duplicate
acknowledgements. We attribute this anomaly to a bug in the client's
implementation, perhaps with respect to timer settings.
4.3 Detailed Connection-Level Analysis
Figure
7 presents a detailed illustration of one
particular IP host's connection activity. In partic-
ular, this graph illustrates communication between
client IP address 274.193.285.435 and server IP address
812.91.635.174 (from Table 2). The latter IP
address represents a Web server, and the packets
illustrated in Figure 7 represent Web-browsing activity
by the client.
The topmost graph, Figure 7(a), is a two-dimensional
portrayal of the packet traffic between
the client and the server. The horizontal axis represents
the time at which the packet was seen in the
trace. This time ranges from 0 to about 36 seconds. The
vertical axis represents the size of the IP packet.
With this approach, the "full" TCP data packets
generated by the server are easily distinguishable
from the TCP acknowledgement packets generated
by the client. The size of the IP packets used
(552 bytes) suggests that a TCP Maximum Segment
Size (MSS) of 512 bytes is being used, perhaps
because of a network-level constraint on the
Maximum Transmission Unit (MTU) size.
The sequence of dark vertical "bands" in Figure
7(a) represents the TCP connections used to
transfer individual documents using HTTP 1.0.
There are 11 such complete connections in this trace
(the first TCP connection is actually "joined in
progress" at time 0 in the trace). The horizontal
"white space" in the plot represents either user
"think time" between URL requests, or the processing
time between the generation of each TCP connection
to download embedded documents within a
Web page. 4

4 Note that the usage of TCP connections for Web document transfers
will change significantly when the "persistent connection" feature
of HTTP 1.1 is in widespread use [7, 12]. A persistent connection allows
multiple URL requests to be "pipelined" on the same TCP connection.
This connection is then used to return each of the requested
documents in turn. We found no evidence of persistent connections
used in our traces from May 1998.
Figure 7(b) provides a more detailed look at
one particular complete TCP connection, namely
the one that starts with a three-way SYN handshake
(initiated by the client) at time 2.96 seconds.
Once the connection is open, the client sends a 465-
byte IP packet carrying the information about the
requested URL. Upon receipt of this packet, the
server generates the first 1 KB of the response (im-
plying an initial TCP congestion control window
size of 1024 bytes) to the URL requested (requiring
two packets in this case, because of the MTU used,
though both packets have the same timestamp in
our trace). The connection then proceeds through
the TCP slow-start algorithm [9].
Figure 7(c) provides a TCP sequence number
plot illustrating this behaviour more clearly. In
this plot, the horizontal axis represents time (us-
ing the same time scale as Figure 7(b)), and the
vertical axis represents the TCP sequence number
(in bytes). A '+' denotes a data packet transmitted
by the server, while an 'X' denotes an acknowledgement
packet transmitted by the client. The vertical
spacing between each '+' indicates the relative
size of each packet (i.e., the number of data bytes
contained within each TCP/IP packet), while the
horizontal spacing between a packet and the corresponding
acknowledgement reflects the round-trip
time (RTT) between the client and server, as observed
at the fixed measurement point. The cumulative
(and delayed) acknowledgement strategy
of TCP is clearly evident, as is the evolutionary
growth of the TCP slow-start congestion window
size. The connection terminates (shortly after time
4.0 seconds) as the final data packets are acknowl-
edged, and the FIN handshake completes. This
connection exchanges a total of 16,951 bytes using
47 packets (16 sent by the client, and 31 by
the server). The Web document itself accounts for
14,632 of these bytes.
The (almost constant) horizontal spacing between
the clumps of data packet transmissions in
Figure 7(c) is indicative of the round-trip time
(RTT) between the client and the server, which
is approximately 150 milliseconds in this example.
This time is a function of client modem speed (e.g.,
the transmission of two 512-byte IP packets over a
64,000 bits per second narrowband-ISDN link would
take 128 milliseconds), TCP/IP packet processing
overhead, physical propagation delay, and other
bottlenecks (e.g., queueing points at routers) in the
network between the client and the server. The fact
that the acknowledgement packet transmissions are
very close (in time) to the next data packet transmissions
merely indicates that the ATM test set was
measuring traffic at a point very close to the server
(i.e., on the server's network).
Similar analyses of other IP source addresses
and their TCP-level connection behaviours are possible
using this technique. An analysis of the inter-arrival
times between TCP connections is planned
for the near future.

Figure 7: Selected IP Traffic Flow: (a) Many TCP Connections; (b) One TCP Connection over 1.4
Seconds; (c) TCP Sequence Number Plot for Selected TCP Connection
4.4 Summary
This section has presented TCP connection-level
analyses of four ISP IP/ATM traffic traces.
Because of the short duration of the traces, the completed
connections observed are "short-lived".
Many of these TCP connections represent Web
transfers, for which the numbers of bytes, packets,
and time duration per connection can be reasonably
modeled with log-normal distributions. Fine-grain
analysis of the traffic identifies protocol-specific be-
haviours, such as TCP window-based flow control,
and multiple TCP connections between client and
server for Web document transfers.
5 Conclusions
This paper has presented low-level measurements
of Internet traffic flowing to and from an
Internet Service Provider over an ATM backbone
network. The measurements are useful in terms
of characterizing the individual IP traffic streams
generated by Internet users, as well as for characterizing
the aggregate traffic. Analyses show that
much of the traffic in the traces is Web-related, and
that traffic is bursty both at the aggregate-level and
at the individual host-level. Furthermore, there is
significant traffic structure at fine-grain time scales
due to TCP/IP and HTTP protocol behaviours.
Overall, our measurements confirm that Internet
users generate very demanding traffic for
telecommunications networks to handle. It is reassuring
to note, however, that the burstiness of
the aggregate traffic is less than the burstiness of
a single source, indicating that there is significant
statistical multiplexing gain across sources. This
observation is consistent with earlier work assessing
the effective bandwidth requirements of multiplexed
self-similar traffic streams [6, 13, 14]. We hope that
this multiplexing gain continues as the number of
Internet subscribers increases.
Measurements of Web and IP/ATM traffic over
a longer time duration are clearly needed in order
to assess (with any statistical rigour) the presence
of heavy-tailed distributions in the traffic, as well as
the presence of long-range dependence in the traf-
fic, if any. These measurements are being pursued
in cooperation with the TeleSim project, and with
CANARIE.
Acknowledgements
The measurements presented in this paper
were collected by Telus Advanced Communications
and made available to researchers in the ATM-TN
TeleSim project. Financial support for this research
was provided by Newbridge Networks, as
well as an NSERC Collaborative Research and Development
(CRD) grant to the ATM-TN TeleSim
project (CRD183839), and NSERC research grant
OGP0121969.
--R
"Workload Characterization of a Web Proxy in a Cable Modem Environment"
"Internet Web Servers: Workload Characterization and Performance Implications"
"Characteristics of Wide-Area TCP Con- versations"
"Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes"
"Ef- ficient Policies for Carrying Web Traffic over Flow-Switched Networks"
"Network Traffic Measurements of IP/FrameRelay/ATM"
"Performance Interactions Between P-HTTP and TCP Im- plementations"
"Modeling the Performance of HTTP over Several Transport Protocols"
"Congestion Avoidance and Con- trol"
"A Measurement Analysis of Internet Traffic over Frame Relay"
"On the Self-Similar Nature of Ethernet Traffic (Extended Version)"
"The Case for Persistent Connection HTTP"
"On the Use of Fractional Brownian Motion in the Theory of Connectionless Networks"
"Effective Band-width of Self-Similar Traffic Sources: Theoretical and Simulation Results"
"Empir- ically Derived Analytic Models of Wide-Area TCP Connections"
"Wide Area Traffic: The Failure of Poisson Modeling"
TCP/IP Illustrated
"Self-Similarity Through High- Variability: Statistical Analysis of Ethernet LAN Traffic at the Source Level"
--TR
--CTR
Rachid Ait Yaiz, Geert Heijenk, Polling Best Effort Traffic in Bluetooth, Wireless Personal Communications: An International Journal, v.23 n.1, p.195-206, October 2002 | network traffic measurement;ATM networks;TCP/IP;workload characterization
333131 | Improved Approximation Guarantees for Packing and Covering Integer Programs. | Several important NP-hard combinatorial optimization problems can be posed as packing/covering integer programs; the randomized rounding technique of Raghavan and Thompson is a powerful tool with which to approximate them well. We present one elementary unifying property of all these integer linear programs and use the FKG correlation inequality to derive an improved analysis of randomized rounding on them. This yields a pessimistic estimator, thus presenting deterministic polynomial-time algorithms for them with approximation guarantees that are significantly better than those known. | Introduction
Several important NP-hard combinatorial optimization problems, such as basic
problems on graphs and hypergraphs, can be posed as packing/covering integer
programs; the randomized rounding technique of Raghavan & Thompson is a
powerful tool to approximate them well [21]. We present an elementary property
of all these IPs-positive correlation-and use the FKG inequality (Fortuin,
Kasteleyn & Ginibre [10], Sarkar [22]) to derive an improved analysis of randomized
rounding on them. Interestingly, this yields a pessimistic estimator, thus
presenting deterministic polynomial algorithms for them with approximation
guarantees significantly better than those known, in a unified way.
1.1 Previous work
Let Z+ and R+ denote the non-negative integers and the non-negative reals
respectively. For a (column) vector v, let v T denote its transpose, and v i stand
for its ith component. We first define the packing and covering integer programs.
Definition 1. Given A ∈ [0,1]^{n×m}, b ∈ [1,∞)^n and c ∈ [0,1]^m with max_j c_j = 1, a packing (resp.
covering) integer program PIP (resp. CIP) seeks to maximize c^T · x subject to x ∈ Z_+^m and Ax ≤ b
(resp. minimize c^T · x subject to x ∈ Z_+^m and Ax ≥ b).
Furthermore, if A ∈ {0,1}^{n×m}, we assume that each entry of b is integral. We define B = min_i b_i.
Though there are usually no restrictions on the entries of A, b and c aside
from non-negativity, it is easily seen that the above restrictions are without loss
of generality (w.l.o.g.), because of the following. First, we may assume that
A_ij is at most b_i for all i and j. If this is not true for a PIP, then we may as well set
x_j := 0; if this is not true for a CIP, we can just reset A_ij := b_i. Next, by scaling
each row of A such that max_j A_ij = 1 for each row i, and by scaling c so that
max_j c_j = 1, we get the above form for A, b and c. Finally, if A ∈ {0,1}^{n×m},
then for a PIP, we can always reset b_i := ⌊b_i⌋ for each i and, for a CIP, reset
b_i := ⌈b_i⌉; hence the assumption on the integrality of each b_i, in this case.
Remark. The reader is requested to take note of the parameter B; it will
occur frequently in the rest of the paper. Whenever we use the symbol B as a
parameter for any given problem, it will be so since, in the "natural" PIP/CIP
formulation of the problem, B will play the same role as it does in Definition 1.
As mentioned above, PIPs and CIPs model some basic problems in combinatorial
optimization, but most of these problems are NP-hard; hence we
are interested in efficient approximation algorithms for PIPs and CIPs, with
a good performance guarantee. We now turn to an important technique for
approximating integer linear programs-"relaxing" their integrality constraints,
and considering the resulting linear program.
Definition 2. The standard LP relaxation of PIPs/CIPs lets x range over R_+^m rather than Z_+^m. Given a
PIP/CIP, x* and y* denote, resp., an optimal solution to, and the optimum
value of, this relaxation. (For packing, we also allow constraints of the form x_i ≤ d_i, for a given
set of positive integers {d_i}; the LP relaxation sets x_i ∈ [0, d_i].)
Given a PIP or a CIP, we can solve its LP relaxation efficiently. However,
how do we handle the possibility of fractional entries in x*? We need
some mechanism to "round" fractional entries in x* to integers, suitably. One
possibility is to round every fractional value x*_i to the closest integer, with some
tie-breaking rule if x*_i is half of an integer. However, it is known that such
"thresholding" methods are of limited applicability.
A key technique to approximate a class of integer programming problems
via a new rounding method-randomized rounding-was proposed in [21]. Given
a positive real v, the idea is to look at its fractional part as a probability:
round v to ⌊v⌋ + 1 with probability v − ⌊v⌋, and round v to ⌊v⌋ with probability
1 − (v − ⌊v⌋). This has the nice property that the expected value of the result is
v. How can we use this for packing and covering problems? Consider a PIP, for
instance. Solve its LP relaxation and set x'_i = x*_i/α for each i, where α > 1 is a parameter
to be fixed later; this scaling down by α is done to boost the chance that the
constraints in the PIP are all satisfied-recall that they are all ≤-constraints.
Now define a random z ∈ Z_+^m, the outcome of randomized rounding, as follows.
Independently for each i, set z_i to be ⌊x'_i⌋ + 1 with probability x'_i − ⌊x'_i⌋, and ⌊x'_i⌋ otherwise.
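A minimal C sketch of this rounding step (drand48() supplies the uniform variates, and xstar holds the
LP optimum x*):

    #include <stdlib.h>
    #include <math.h>

    void randomized_round(const double *xstar, int m, double alpha, long *z)
    {
        int i;
        for (i = 0; i < m; i++) {
            double xp = xstar[i] / alpha;      /* x'_i = x*_i / alpha */
            double fl = floor(xp);
            /* round up with probability xp - fl, down otherwise,
               so that E[z_i] = xp */
            z[i] = (long)fl + (drand48() < xp - fl ? 1 : 0);
        }
    }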
We now need to show that all the constraints in the PIP are satisfied and
that c^T · z is not "much below" y*, with reasonable probability; we also need
to choose α suitably. This is formalized in [21] as follows. As seen above, an
important observation is that E[z_i] = x′_i = x*_i/α. Hence, E[(Az)_i] ≤ b_i/α
for each i ∈ [n], and E[c^T · z] = y*/α.
For some β > 1 to be fixed later, define the events E_i ≡ ((Az)_i > b_i) for
i ∈ [n], and E_{n+1} ≡ (c^T · z < y*/(αβ)); z is an (αβ)-approximate solution
to the PIP if none of E_1, …, E_{n+1} holds. How small a value for (αβ) can we
achieve? Bounding

  Pr(E_1 ∨ ⋯ ∨ E_{n+1}) by Σ_{i=1}^{n+1} Pr(E_i),   (1)

we can pick α, β > 1 such that Σ_{i=1}^{n+1} Pr(E_i) < 1, using the Chernoff-
Hoeffding (CH) bounds. This gives us an (αβ)-approximation z with nonzero
probability, which is also made deterministic by Raghavan, using pessimistic
estimators [19]. Similar ideas hold for CIPs-the fractions {x*_i} are scaled up by
some α > 1 there. Similar approximation bounds are derived through different
methods by Plotkin, Shmoys & Tardos [18].
methods by Plotkin, Shmoys & Tardos [18]. See Raghavan [20] for a survey of
randomized rounding, and Crescenzi & Kann [7] for a comprehensive collection
of results on NP-optimization problems.
Though randomized rounding is a unifying idea to derive good approximation
algorithms, there are better approximation bounds for specific key problems
such as set cover (Johnson [13], Lovász [14], Chvátal [6]), hypergraph matching
(Aharoni, Erdős & Linial [1]) and file-sharing in distributed networks (Naor &
Roth [17]), each derived through different means. One reason for this slack
stems from bounding Pr(E_1 ∨ ⋯ ∨ E_{n+1}) by the sum Σ_i Pr(E_i), as in (1);
to quote Raghavan [19],
Throughout, we naively (?) sum the probabilities of all bad events-
although these bad events are surely correlated. Can we prove a
stronger result using algebraic properties (e.g., the rank) of the coefficient
matrix? A tighter bound for the probabilistic existence
proofs should lead to tighter approximation algorithms.
1.2 Proposed new method
We make progress in the above-suggested direction by exploiting an elementary
property-positive correlation-of CIPs and PIPs. To motivate this idea, let us
just take two constraints of a PIP, and let E_1 and E_2 be the corresponding bad
events, as defined before. For instance, suppose E_1 is the event that some
non-negative combination of z_1, z_3, z_4 and z_6 (say 0.1z_1 + 0.5z_3 + ⋯) exceeds
its bound, and E_2 stands for the event that 0.4z_1 + 0.3z_2 + z_5 + 0.1z_6 >
1.2, where the z_i are all independent 0-1 random variables. Now suppose we
are given that ¬E_1 holds. Very roughly speaking, this seems to suggest that
"many" among z_1, z_3, z_4 and z_6 were "small" (i.e., zero), which seems to boost
the chance that ¬E_2 holds, too. Formally, the claim is that Pr(¬E_2 | ¬E_1) ≥
Pr(¬E_2), i.e., that Pr(¬E_1 ∧ ¬E_2) ≥ Pr(¬E_1) · Pr(¬E_2).
This "intuitively clear" fact can then be easily generalized for us to guess that

  Pr(⋀_{i∈S} ¬E_i) ≥ ∏_{i∈S} Pr(¬E_i)  for every S ⊆ [n].   (2)

In other words, (2) claims that the constraints are positively correlated-given
that all of any given subset of them are satisfied, the conditional probability
that any other constraint is also satisfied cannot go below its unconditional
probability.
We prove (2), which seems plausible, using the FKG inequality. Thus,

  Pr(⋀_{i=1}^{n} ¬E_i) ≥ ∏_{i=1}^{n} Pr(¬E_i),   (3)

which is always as good as, and most often much better than, (1). (For a
detailed study of the FKG inequality, see, e.g., Graham [11] and Chapter 6 of
Alon, Spencer & Erdős [2].)
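This positive correlation is easy to observe empirically. The following Monte
Carlo check, our own toy illustration with made-up coefficients in the spirit of
the example above, estimates the joint probability and the product of the
individual probabilities for two such constraints over independent 0-1 variables:

    import random

    def trial():
        z = [1 if random.random() < 0.3 else 0 for _ in range(6)]
        ok1 = 0.1*z[0] + 0.5*z[2] + z[3] + 0.1*z[5] <= 1.0   # constraint 1 holds
        ok2 = 0.4*z[0] + 0.3*z[1] + z[4] + 0.1*z[5] <= 1.2   # constraint 2 holds
        return ok1, ok2

    T = 200000
    res = [trial() for _ in range(T)]
    p1 = sum(a for a, _ in res) / T
    p2 = sum(b for _, b in res) / T
    p12 = sum(1 for a, b in res if a and b) / T
    print(p12, ">=", p1 * p2)   # observed: the joint probability dominates the product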
It is not hard to verify such a property for CIPs also. Why have we been so
lucky as to have positive correlation among the constraints of PIPs and CIPs
(a very desirable form of correlation)? The features of PIPs and CIPs which
guarantee this are:
• all the entries of the matrix A are non-negative, and
• all the constraints "point" in the same direction.
Of course, it can also be shown that given that all of any given subset of the
constraints are violated, the conditional probability that any other constraint
is also violated, cannot go below its unconditional probability; but we will not
have to deal with this situation! Also, such a nice correlation as given by (2)
may not necessarily hold if the z i s are not independent.
More surprisingly, though this new approach usually only guarantees that z
is a "good" approximation with very low (albeit positive) probability-in fact, it
does not even seem to provide a randomized algorithm with any good success
probability-the structure of PIPs and CIPs implies a sub-additivity property
which yields a pessimistic estimator (a notion to be introduced in Section 2);
we thus get deterministic polynomial-time algorithms achieving these improved
approximation bounds. The problem in arriving at a good pessimistic estimator
is that, while the previous estimator (i.e., the one used in [19] and
in related papers) is upper-bounded by E[Z] (for some random variable Z) on
applying the CH bounds, such a fact does not seem to hold here. Nevertheless,
the structure of CIPs/PIPs-in particular, the two simple properties itemized
above-help in providing a good pessimistic estimator. This is a point that we
would like to stress.
Thus we get, in a unified way, improved bounds on the integrality gap
and hence, improved approximation algorithms for all PIPs and CIPs. In par-
ticular, we improve on the above-mentioned results of [13, 14, 1, 17]; our bound
is incomparable with that of [6].
1.3 Approximation bounds achieved
Our best improvements are for PIPs. For PIPs, the standard analysis of randomized
rounding guarantees integral solutions of value Ω(y*/n^{1/B}) and
Ω(y*/n^{1/(B+1)}), respectively, for A ∈ [0,1]^{n×m} and A ∈ {0,1}^{n×m}. Our
method produces solutions of value Ω(y* · (y*/n)^{1/(B−1)}) and
Ω(y* · (y*/n)^{1/B}) respectively, improving
well on the previous ones-e.g., in the latter case, if y* = Θ(n) then we get
an integral solution of value Θ(n), as opposed to the earlier o(n) bound.
This method also gives Tur'an's classical theorem on independent sets in graphs
[25] to within a constant factor.
An important packing problem where A ∈ {0,1}^{n×m} is simple B-matching
in hypergraphs [14]: given a hypergraph with non-negative edge weights, finding
a maximum weight collection of edges such that no vertex occurs in more than B
of them. Usual hypergraph matching has B = 1, and is a well-known NP-hard
problem. To our knowledge, the only known good bound for this problem, apart
from the standard analysis of randomized rounding, was provided by the work
of [1], which focused on the special case of unweighted edges. The methods of [1]
can be used to show that if f is the minimum size of an edge in the hypergraph,
then there exists an integral matching whose value is lower-bounded in terms of
y*, f and B.
While this matches our result to within a constant factor for B = 1, note that
this bound worsens as B increases, while the standard analysis, as well as our
present analysis, of randomized rounding in fact show that the integrality gap
gets better (decreases) as B increases.
For covering, we prove a

  1 + O(max{ln(nB/y*)/B, √(ln(nB/y*)/B)})   (4)

integrality gap, and derive the corresponding deterministic polynomial-time approximation
algorithm. This improves on the 1 + O(max{(ln n)/B, √((ln n)/B)})
bound given by the standard analysis of randomized rounding. Also, Dobson [8]
and Fisher & Wolsey [9] bound the performance of a natural greedy algorithm
for CIPs in terms of the optimal integral solution. Our bound is incomparable
with theirs, but for any given A, c, and the unit vector b/||b||₂ pointing in the
direction of b, our bound is always better if B is more than a certain threshold
thresh(A, b, c). See Bertsimas & Vohra [4] for a detailed study of approximating
CIPs; our work improves on all of their randomized rounding bounds except for
their weighted CIPs (wherein it is not the case that c = (1, …, 1)), for which
our bounds are incomparable with theirs.
An important subclass of the CIPs models the unweighted set cover problem:
A ∈ {0,1}^{n×m}, b = (1, …, 1) and c = (1, …, 1) here. The combinatorial interpretation is
that we have a hypergraph G = (V, E), and wish to pick a minimum cardinality
collection of the edges so that every vertex is covered. (When viewed as an LP,
this is the "dual" of the hypergraph matching problem.) The rows correspond
to V and the columns, to E. Clearly, this problem requires that x ∈ {0,1}^m,
which is not guaranteed by Definition 1; however, note that for this problem,
any feasible x ∈ Z₊^m
trivially yields a y ∈ {0,1}^m with Ay ≥ b and c^T · y ≤ c^T · x.
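In code, this capping observation is one line (a sketch of ours):

    def cap_to_binary(x):
        # For 0-1 rows with b = (1,...,1), capping each entry at 1 keeps
        # A y >= b and can only decrease the objective c.y.
        return [min(xj, 1) for xj in x]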
For set cover, we tighten the constants in (4) to derive a
ln(n/y*) + O(ln ln(n/y*)) + O(1)
approximation bound. The work of Lund & Yannakakis [15] and Bellare, Goldwasser,
Lund & Russell [3] shows a constant a > 0 such that approximating this
problem to within a ln n is likely to take super-polynomial time. However, this
problem is important enough to study approximations parametrized by other
parameters of A, b and c, that are always as good as and often much better than
Θ(log n); for instance, the work of [13, 14, 6] shows a ln d + O(1) approximation
bound, where d is the maximum column sum in A-note that d ≤ n. Also, since
there is a trivial solution of size n for any set cover instance, n/y* is a simple
upper bound on the approximation ratio. Our bound is a further improvement-
it is easily seen that n/y* ≤ d always, and there exist families of (A, b, c) for
which n/y* is smaller than d by essentially any prescribed non-decreasing
function of n.
Thus our bound is never more than a multiplicative (1 + o(1)) or an additive
O(1) factor above the classical bound, and is usually much better; in the best
case, our improvement is by Θ(log n/log log n). (For instance, we can construct
instances with ln d = Θ(log n) while ln(n/y*) = Θ(log log n), which gives this
improvement.)
Another noteworthy class of CIPs is related to the B-domination problem:
given a (directed) graph G with n vertices, we want to place a minimum number
of facilities on the nodes such that every node has at least B facilities in its out-
neighborhood. This is also a key subproblem in sharing files in a distributed
system [17]; under the assumption that G is undirected and letting Δ be its
maximum degree, an approximation bound parametrized by Δ and B is
presented in [17], improving on the standard analysis
of randomized rounding. Bound (4) improves further on this; in particular,
even if G is directed with maximum in-degree Δ, (4) shows that the Naor-
Roth bound holds. Furthermore, the comments regarding the Θ(log n/log log n)
improvement for set cover hold even in the undirected case. All of this, in turn,
provides better bounds for the file-sharing problem.
Thus, the two main contributions of this work are as follows. The first is
the identification of a very desirable "correlation" property of all packing and
covering integer programs, which enables one to prove, quite easily, improved
bounds on the integrality gap for the linear relaxations of these problems. How-
ever, as shown in Section 4, this is often not constructive, since the probability
of randomized rounding resulting in such good approximations can be (and usually
negligibly small; Section 4 shows a simple family of instances where this
"success probability" is as small as exp(\Gamma\Omega\Gamma n+m)). The second idea, then, is to
show that the structure of PIPs and CIPs in fact presents a suitable pessimistic
estimator (see Section 2 for the definition), which, pleasingly, actually lets us
come up with such approximations efficiently.
In Section 2, we present some basic notions such as large-deviation inequal-
ities, the FKG inequality, and the notion of pessimistic estimators. Section 3
then handles PIPs. We devote Section 4 to the important problem of finding
a maximum independent set in graphs by looking at it in the natural
(and well-known) way as a PIP, and make some observations about this problem;
these shed light on the strengths and weaknesses of our approach (and of
related approaches). Section 5 handles CIPs; a good understanding of Section 3
is essential to read this section. Section 6 concludes.
2 Preliminaries
Let "r.v." abbreviate "random variable" and for any positive integer k, let [k]
denote the set {1, 2, …, k}. If a universe N = {a_1, …, a_ℓ} is understood,
then for any S ⊆ N, χ(S) denotes its characteristic vector: χ(S) ∈ {0,1}^ℓ with
χ(S)_i = 1 iff a_i ∈ S. For a sequence s = (s_1, s_2, …), s^{(j)} denotes
the vector (s_1, …, s_j). In our usage, s could be a sequence of reals
or of random variables. As usual, e denotes the base of the natural logarithm.
Remark. Though the following pages seem filled with formulae and calcula-
tions, many of them are routine. The real ideas of this work are contained in
Lemmas 1, 5, and 6. The reader might even consider skipping the proofs of
most of the rest of the lemmas, for the first reading.
We first recall the Chernoff-Hoeffding (CH) bounds for the tail probabilities
of sums of bounded independent r.v.s [5, 12]. Theorem 1 presents these tail
bounds; see, e.g., Motwani & Raghavan [16] for the proofs.

Theorem 1 Let X_1, …, X_ℓ be independent r.v.s, each taking values in
[0, 1], with X = Σ_i X_i and E[X] = μ. Then, for any δ > 0,
  Pr(X ≥ μ(1+δ)) ≤ G(μ, δ) = (e^δ/(1+δ)^{1+δ})^μ,
and if 0 < δ < 1,
  Pr(X ≤ μ(1−δ)) ≤ H(μ, δ) = (e^{−δ}/(1−δ)^{1−δ})^μ ≤ e^{−μδ²/2}.
It is easily seen that
Fact 1 (a) G(-;
(d) If
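For concreteness, the two tail-bound functions of Theorem 1 can be written down
directly; this Python sketch follows our reading of the (partly garbled) statement
above:

    import math

    def G(mu, delta):
        # Upper-tail CH bound: Pr(X >= mu*(1+delta)) <= G(mu, delta).
        return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

    def H(mu, delta):
        # Lower-tail CH bound: Pr(X <= mu*(1-delta)) <= H(mu, delta)
        # and H(mu, delta) <= exp(-mu*delta**2/2).
        assert 0 < delta < 1
        return (math.exp(-delta) / (1 - delta) ** (1 - delta)) ** mu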
Call a family F of subsets of a set N monotone increasing (resp. monotone
decreasing) if for all S ⊆ T ⊆ N, S ∈ F implies that T ∈ F (resp., T ∈ F implies
that S ∈ F). We next present Theorem 2, a special case of the powerful FKG
inequality [10, 22]; for a proof, see, e.g., the proof of Theorem 3.2 in Chapter 6
of [2].

Theorem 2 Given a finite set N = {a_1, …, a_ℓ}, suppose we pick a random
Y ⊆ N by placing each a_i in Y independently, with probability p_i. For any
F ⊆ 2^N, let Pr(F) = Pr(Y ∈ F). Let I_1, …, I_s be
any sequence of monotone increasing families, and let D_1, …, D_t be
any sequence of monotone decreasing families. Then,

  Pr(⋀_{j=1}^{s} (Y ∈ I_j)) ≥ ∏_{j=1}^{s} Pr(I_j)  and
  Pr(⋀_{j=1}^{t} (Y ∈ D_j)) ≥ ∏_{j=1}^{t} Pr(D_j).
Finally, we recall the notion of pessimistic estimators [19]. For our purposes,
we focus on the case of independent binary r.v.s. Let X = (X_1, …, X_ℓ)
be independent r.v.s with Pr(X_i = 1) = p_i = 1 − Pr(X_i = 0). Suppose, for
some implicitly defined L ⊆ {0,1}^ℓ, that Pr(X ∈ L) < 1.
How do we find some v ∈ {0,1}^ℓ − L deterministically? Theorem 3 now presents
the idea of pessimistic estimators applied to the method of conditional
probabilities. See [19] for a detailed discussion and proof.

Notation 1 Let p = (p_1, …, p_ℓ), and for any j ∈ {0,1}, define w∘j to be the
vector w with the bit j appended to it.

Returning to the X_i s, p and L, we define:

Definition 3 A family of efficiently computable values U(u(j, w, p)), for
j ∈ {0} ∪ [ℓ] and w ∈ {0,1}^j, is a pessimistic estimator w.r.t. X and L if:
(1) U(u(0, ⟨⟩, p)) < 1;
(2)(a) U(u(j, w, p)) is at least Pr(X ∈ L | X_1 = w_1, …, X_j = w_j), and
(2)(b) min{U(u(j+1, w∘0, p)), U(u(j+1, w∘1, p))} ≤ U(u(j, w, p)).

Theorem 3 [19] Let an efficiently computable U be a pessimistic estimator
w.r.t. X and L. Then, breaking ties arbitrarily, the following algorithm produces a v ∉ L:
For i = 1, 2, …, ℓ: set v_i = argmin_{j∈{0,1}} U(u(i, (v_1, …, v_{i−1})∘j, p)).

Proof. It is not hard to see by induction on i, that ∀i ∈ {0} ∪ [ℓ],
U(u(i, (v_1, …, v_i), p)) ≤ U(u(0, ⟨⟩, p)) < 1.
Using this for i = ℓ in conjunction with property 2(a) of Definition 3 completes
the proof. □
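Theorem 3 translates into a short deterministic loop. The sketch below is ours:
U is passed in as a function of the prefix of bits already fixed, and the loop
simply follows a branch that does not increase U.

    def derandomize(ell, U):
        # U(w) upper-bounds the conditional failure probability given that
        # X_1..X_len(w) have been fixed to the bits of w (Definition 3).
        # If U([]) < 1, the returned v of length ell avoids L.
        w = []
        for _ in range(ell):
            # Property 2(b): at least one extension does not increase U.
            w.append(0 if U(w + [0]) <= U(w + [1]) else 1)
        return w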
3 Approximating Packing Integer Programs
Let a PIP be given, conforming to Definition 1. We assume that x ∈ Z₊^m is the
only integrality constraint on x. (Clearly, even if we have constraints such as
x_i ≤ d_i, we will get identical bounds, since scaling down by α > 1 and then
performing randomized rounding cannot push x_i out of {0, 1, …, d_i}.) Lemmas
5 and 6 are crucial; it is there that the structure of PIPs is exploited. It is
essential to read this section before reading Section 5-most proofs are omitted
in Section 5 since they are very similar to the ones in this section.
We solve the LP relaxation, and let the scaling by α, the events E_1, …, E_{n+1},
and the vectors z, x′ etc. be as in Section 1.1; α and β will be determined later on.
The main point of this section is to present a good candidate for a pessimistic
estimator (see (5)), and to show that it indeed satisfies the conditions of Definition
3. We may then invoke Theorem 3 to show that not only do we get
improved existential results on the integrality gap, but that we can also
constructivize the existence proof. The work of this section culminates in Theorem 4.
We first set up some notation, to formulate our "failure probability". For
every i ∈ [n], let A_i denote the ith row
of A. Let X_1, …, X_m be independent r.v.s with Pr(X_i = 1) = x′_i − ⌊x′_i⌋
= 1 − Pr(X_i = 0). It is clear that z_i = ⌊x′_i⌋ + X_i for each i,
and it is readily verified that each event E_i is determined by X.
Our first objective is to prove (2) and hence (3), using Theorem 2; this will
then suggest potential choices for a pessimistic estimator. In the notation of
Theorem 2, N = {a_1, …, a_m} and Y = {a_i : X_i = 1}. For each i ∈ [n], define

  F_i = {S ⊆ N : A_i · (⌊x′⌋ + χ(S)) ≤ b_i}.

A little reflection shows the crucial property that each F_i is monotone decreasing.
Noting that ¬E_i holds iff Y ∈ F_i for each i, we deduce (2) from Theorem 2. In
fact, a similar proof shows that, since the components of X are picked
independently, we have

Lemma 1 For any j ∈ {0} ∪ [m] and any w ∈ {0,1}^j,

  Pr(⋀_{i∈[n]} ¬E_i | X^{(j)} = w) ≥ ∏_{i∈[n]} Pr(¬E_i | X^{(j)} = w).

In the notation of Definition 3, the set to be avoided, L, is the set of outcomes
w ∈ {0,1}^m for which at least one of E_1, …, E_{n+1} holds.
We are now ready to define a suitable pessimistic estimator; we first introduce
some useful notation to avoid lengthy formulae.
Notation 2 For all i ∈ [n + 1], j ∈ {0} ∪ [m] and w ∈ {0,1}^j, define the
quantities h_i(j, w), f_i(j, w) and g_i(j, w), where h_i(j, w) is the conditional
CH-type upper bound on Pr(E_i | X^{(j)} = w) and g_i(j, w) = min{h_i(j, w), 1}.
When j and w are clear from the context, we might just refer to these as h_i, f_i
and g_i.
From Theorem 1 and Lemma 1, a natural guess for a pessimistic estimator
might be 1 − (1 − h_{n+1}) ∏_{i∈[n]} (1 − h_i).
However, this might complicate matters if some h_i(j, w) > 1, and hence we first
pass to the truncations g_i. We now define U(u(j, w, p)), ∀j ∈ {0} ∪ [m] ∀w ∈ {0,1}^j, to be

  U(u(j, w, p)) = 1 − (1 − g_{n+1}(j, w)) ∏_{i∈[n]} (1 − g_i(j, w)).   (5)
To make progress toward proving that U is a pessimistic estimator w.r.t. X
and L, we next upper-bound Pr(E_i) for each i. Recall, by Theorem 1, that for
each i, Pr(E_i) is at most a suitable G(·, ·) or H(·, ·) term; Lemma 2
upper-bounds these quantities.

Lemma 2 (a) For every i ∈ [n],
Proof. (a) Note that Subject to
these constraints and that ff ? 1, we will show that G(- maximized when
prove (a). Now,
If A i \Delta s is held fixed at some fl - 0, (6) is maximized at -
under the constraint that - i 2 [0; \Delta]. Thus,
which is readily shown to be maximized when A i similar proof holds
for (b). 2
Now that we have good tail bounds, we set α, β > 1 such that (αβ) is "small"
and such that, for the PIP, U(u(0, ⟨⟩, p)) < 1
(property (1) of Definition 3). Note that the bound of Lemma 3 makes sense
only for B ≥ 2; Lemma 4 handles the common case where A_{i,j} ∈ {0,1} ∀i, j, to
get improved bounds which, in particular, work even if B = 1. We have not
attempted to optimize the constants.

Lemma 3 There exist constants c₁ ≥ 3 and c₂ ≥ 1 for PIP, such that if
α = c₁(n/y*)^{1/(B−1)} and β = c₂, then U(u(0, ⟨⟩, p)) < 1.

Proof. By Lemma 2, it suffices to show that H(y*/α, 1 − 1/β) plus the sum of
the n upper bounds on the Pr(E_i) is less than 1.
Furthermore, Fact 1(a) shows that, for our choice of β, H(y*/α, 1 − 1/β) ≤
e^{−y*/(8α)}. Now, since B ≥ 1 and α ≥ c₁ ≥ 3, there exists a fixed d > 0
such that each Pr(E_i) is at most d e^{−B(ln α − 1)};
hence, it suffices if y*/(8α) > n d e^{−B(ln α − 1)}. Solving for α gives the claimed
bound. □

Lemma 4 There exists a constant c₁ ≥ 3 for PIP instances with A_{i,j} ∈ {0,1}
∀i, j, such that if α = c₁(n/y*)^{1/B} and β = c₂, then U(u(0, ⟨⟩, p)) < 1.

Proof. Note, since A_{i,j} ∈ {0,1}, that for any i ∈ [n], A_i · z is an
integer-valued r.v. and b_i is an integer, so E_i is the event that A_i · z ≥ b_i + 1;
thus B essentially gets replaced by B + 1 in Lemma 3, leading
to the strengthened bounds. □
As remarked in the introduction, it can be seen that the bounds (on the
approximation ratio (αβ)) of Lemmas 3 and 4 significantly strengthen the corresponding
bounds achievable by the standard analysis of randomized rounding.
At this point, we have exhibited suitable α and β such that our function U
satisfies properties (1) and 2(a) of Definition 3. We now turn to proving property
2(b), which is more interesting. Before showing Lemma 6, which proves this, we
first establish a simple lemma which facilitates the proof of Lemma 6.
Lemma 5 For all
Proof. We drop the parameters j and w for the rest of the proof. Part (i)
is easily seen. For part (ii), we first note that
by the definition of these quantities. Now if h
(7) and hence part (ii) above follows from (7), with equality. Instead if h
and furthermore, that
part (ii) follows from (7). Finally if h i - 1, note that h 0
implying (ii) again. 2
Remark. In most previous constructions of pessimistic estimators for various
analyses, equality actually holds in part (ii) of Lemma 5 (as opposed to our
"≤"). This then makes it quite easy to prove that the function on hand is a
valid pessimistic estimator. Our task is made more challenging because of this
change in our case.
Lemma 6 For any j ∈ {0} ∪ [m − 1] and any w ∈ {0,1}^j,
  min{U(u(j+1, w∘0, p)), U(u(j+1, w∘1, p))} ≤ U(u(j, w, p)).
Thus in particular, property 2(b) of Definition 3 holds for U.
Proof. Let
convenience. Note that
Omitting the parameters j and w in f i , g i etc., it is thus sufficient to show that
Y
Y
Y
Thus from Lemma 5(ii) and since h 0
suffices to show that
Y
Y
Y
which we now prove by induction on n.
Equality holds in (8) for the base case n = 1. We now prove (8) by assuming
its analogue for n − 1; we are to show that
(1\Gammag 0
Y
Y
(1\Gammag 0
Simplifying, we need to show that
which holds in view of Lemma 5(i). 2
By now, we have fulfilled all the requirements of Definition 3 and can thus present

Theorem 4 There exist constants c₃, c₄ > 0 such that, given any PIP conforming
to the notation of Definition 1, we can produce, in deterministic polynomial
time, a feasible solution to it, of value at least c₃ · y*(y*/n)^{1/(B−1)}.
If A ∈ {0,1}^{n×m}, the guarantee on the solution value is at least
c₄ · y*(y*/n)^{1/B}.

Proof. Lemmas 3 and 4 show property (1) of Definition 3. Properties 2(a)
and 2(b) of Definition 3 are shown by Lemmas 1 and 6 respectively. Theorem 3
now completes the proof. □
4 The Maximum Independent Set Problem on Graphs
We consider the classical NP-hard problem of finding a maximum independent
set (MIS) in a given undirected graph G = (V, E), and pose it naturally as a
packing problem. Though we do not get improved approximation algorithms
for this problem, a few observations on this important problem are relevant, as
we shall see shortly.
Turán's classical theorem [25] shows that G always has an independent set
of size at least |V|/(d̄ + 1), where d̄ is the average degree of G; such a set can
also be found in polynomial time. The standard packing formulation described
below, combined with our approach, shows the existence of an independent set
of size Ω((y*)²/|E|). The constant factor hidden in the Ω(·) is weaker than
that of Turán's theorem, however-our reason for presenting this result is just
to show that our approach proves a few other known results too, in a unified
way. We remark that we do not use the standard notation of graphs having n
vertices and m edges, as it would go against our notation for PIPs and CIPs-the
packing formulation has |E| constraints and |V| variables.
Define an indicator variable x_i for each vertex i, for the presence of
vertex i in the independent set (IS). Subject to the constraint that x_i + x_j ≤ 1
for every edge (i, j), we want to maximize Σ_i x_i. For specific problems like
this, we can get better bounds than does the analysis for Theorem 4, which
uses the general CH bounds. The fractional solution x*_i = 1/2, for each i, is
optimal to within a factor of 2. Suppose we scale x* down by some α > 1
and do the randomized rounding as before. Then for any given edge (i, j),
Pr(z_i = z_j = 1) = 1/(4α²), much better than the CH bound. Analysis
as above then shows that α = Θ(|E|/y*) suffices, producing an IS of
size Ω((y*)²/|E|).
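To see the numbers behind this calculation, one can simulate the rounding; this
toy sketch of ours is the plain randomized experiment, not the derandomized
algorithm of Section 3.

    import random

    def round_mis(num_vertices, edges, alpha):
        # Keep each vertex with probability (1/2)/alpha, i.e. x*_i / alpha.
        keep = [random.random() < 0.5 / alpha for _ in range(num_vertices)]
        if any(keep[u] and keep[v] for u, v in edges):
            return None                # some edge constraint was violated
        return sum(keep)               # size of the sampled independent set

With alpha of order |E|/y*, a surviving sample has expected size Ω((y*)²/|E|),
matching the bound above.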
One reason for our considering the MIS problem is to show that the failure
probability given by (3) can be extremely close to (though strictly smaller than)
1. This would then underscore the importance of the fact that a pessimistic
estimator can be constructed for PIPs and CIPs. Suppose the graph G = (V, E)
is a line on the N vertices {1, 2, …, N}, and that each vertex independently picks
a random bit for itself, with the bit being one with probability q, for some
q ∈ (0, 1). Let p_N be the probability that no two adjacent vertices choose the
bit "1". Setting q = 1/(2α), it is then clear that the
probability that randomized rounding (with the above values for α and β) picks
a valid IS in G equals p_N. We now proceed to show that p_N is exponentially
small in N, validating our point.
Computing p_N by induction on N is standard. Let a_N (resp., b_N) denote
the probability that not only do no pair of adjacent vertices both choose "1",
but also that vertex N chooses the bit "1" (resp., "0"). Note that p_N = a_N + b_N.
The recurrences
  a_{N+1} = q · b_N  and  b_{N+1} = (1 − q)(a_N + b_N)
are immediate. Letting λ be the larger root of the associated characteristic
equation, it can then be seen that p_N = Θ(λ^N).
Using the facts b_1 = 1 − q and a_1 = q, and the fact that λ < 1 for any fixed
q ∈ (0, 1), we then see that p_N is extremely small. Thus, the success probability
of randomized rounding with our chosen values for α and β can be (and usually
is) extremely small, motivating the need for a good pessimistic estimator.
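The recurrences make the decay of p_N easy to compute exactly; a short sketch
(ours):

    def p_path(N, q):
        # Probability that no two adjacent vertices of an N-vertex path both
        # pick bit 1, each picking 1 independently with probability q.
        a, b = q, 1.0 - q          # a_1, b_1: vertex 1 picks "1" / picks "0"
        for _ in range(N - 1):
            a, b = q * b, (1.0 - q) * (a + b)   # the recurrences above
        return a + b

    # e.g. p_path(1000, 0.25) is astronomically small, of order lambda**1000
    # for the corresponding lambda < 1.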
The MIS problem also illustrates the well-known fact that linear relaxations
are not tight in general. As seen above, this problem always has a fractional
solution of value lying between |V|/2 and |V|. However, the graph G can have its
independence number equal to any integer in [|V|], and hence the integrality gap
of this LP formulation can be quite bad. Furthermore, recent breakthrough
work has shown that the MIS cannot be approximated in polynomial time to
within any factor better than |V|^ε for some fixed ε > 0, unless some unexpected
containment result holds in complexity theory. This shows that we cannot expect
very good approximation algorithms for all PIPs.
5 Approximating Covering Integer Programs
Given a CIP conforming to Definition 1, we show how to get a good approximation
algorithm for it. Since most ideas here are very similar to those of
Section 3, we borrow a lot of notation from there, skim over most details and
just present the essential differences.
The idea here is to solve the LP relaxation, and, for an α > 1 to be fixed later,
to set x′_j = α x*_j for each j ∈ [m]. We then construct a random integral solution
z by setting, independently for each j ∈ [m], z_j to be ⌊x′_j⌋ + 1 with probability
x′_j − ⌊x′_j⌋, and ⌊x′_j⌋ otherwise; let X_1, …, X_m be as in Section 3. The bad
events now are E_i ≡ ((Az)_i < b_i) for i ∈ [n], and E_{n+1} ≡ (c^T · z > αβ y*).
Analogously to PIPs, for any i ∈ [n], let

  F_i = {S ⊆ N : A_i · (⌊x′⌋ + χ(S)) ≥ b_i}.

Each of these families is monotone increasing now, and thus Theorem 2 again
guarantees Lemma 1, for the present definition of the events E_i.
Suppose we define, given some β > 1, the quantities h_i, f_i and g_i
analogously as in Notation 2.
As can be expected, the pessimistic estimator U(u(j, w, p)), ∀j ∈ {0} ∪ [m] ∀w ∈
{0,1}^j, is now again

  1 − (1 − g_{n+1}(j, w)) ∏_{i∈[n]} (1 − g_i(j, w)).

Now for the analogue of the important Lemma 6. It is easily checked that
Lemma 5(ii) holds again, and that a one-sided analogue of part (i) of Lemma 5
holds; call it (11).
Thus, (11) guarantees (8) even now! This shows that Lemma 6 holds for the
current definition of U also.
Thus, to establish that U is a pessimistic estimator, we only have to exhibit,
as do Lemmas 3 and 4, α, β > 1 which ensure that U(u(0, ⟨⟩, p)) < 1. We first
present a lemma similar to Lemma 2, whose proof is simple and omitted.

Lemma 7 For all i ∈ [n], Pr(E_i) ≤ H(α b_i, 1 − 1/α), and Pr(E_{n+1}) ≤
G(α y*, β − 1).
We now present the main theorem on covering problems. Since set cover
is an important problem, we present the precise approximation bound for this
problem as a distinct part of the theorem.
Theorem 5 Given a CIP conforming to the notation of Definition 1, we can
produce, in deterministic polynomial time, a feasible solution to it with value at
most

  y* (1 + O(max{ln(nB/y*)/B, √(ln(nB/y*)/B)})).

For the unweighted set cover problem, we can improve this to
y* (ln(n/y*) + O(ln ln(n/y*)) + O(1)).

Proof. For general CIPs, there are two cases: ln(nB/y*)/B is at least one
or at most one. In the former case, we set α = β = Θ(ln(nB/y*)/B).
For the latter case, we set both α and β to be of the form 1 + Θ(√(ln(nB/y*)/B)).
The proofs follow from standard CH bound analysis using Theorem 1 and Fact 1
with Lemma 7, and the details are omitted.
For the important unweighted set cover problem (see Section 1.3 for the
definition), we observe that b_i = 1 for any i ∈ [n], which
makes the calculations easier. If A_i has j non-zeroes (ones) in it, say in columns
k_1, …, k_j, then it is not hard to see that Pr(E_i) is maximized when
x*_{k_1} = ⋯ = x*_{k_j} = 1/j. Thus,

  Pr(E_i) ≤ (1 − α/j)^j ≤ e^{−α},

and hence, by Lemma 7, it suffices to pick α, β ≥ 1 such that

  n e^{−α} + G(α y*, β − 1) < 1.   (12)

Now, since α ≥ 1, the first term is easy to control.
Also, if we agree to make β ≤ 2, we can bound
G(α y*, β − 1)
by Fact 1(b). So from (12), it suffices to choose α ≥ 1 and 1 ≤ β ≤ 2 such that

  n d e^{−α} ≤ y* · (a suitable function of β − 1), for a fixed d > 0.   (13)

It can now be verified that by choosing α = ln(n/y*) + a₁ ln ln(n/y*) and a
suitable β = 1 + o(1),
for some suitable positive constants a₁ and a₂, we will satisfy (13). Hence, the
approximation guarantee αβ can be made as small as claimed. □
It is worth looking at some concrete improvements brought about by
Theorem 5, over existing algorithms. In the case of unweighted set cover, suppose
d ≤ n is the maximum column sum-the maximum cardinality of any edge
in the given hypergraph. Then, by just summing up all the constraints, we can
see that

  y* d ≥ n.   (14)
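Spelled out (our one-line derivation): each vertex's covering constraint gives
Σ_{e∋v} x*_e ≥ 1, so summing over the n vertices and using |e| ≤ d,

  n ≤ Σ_{v∈V} Σ_{e∋v} x*_e = Σ_{e∈E} |e| x*_e ≤ d Σ_{e∈E} x*_e = d y*.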
Thus, our approximation bound for the set cover problem-see the second statement
of Theorem 5-is never more than a multiplicative (1 + o(1)) or an additive
O(1) factor above the classical bound of

  min{n/y*, ln d + O(1)}.

On the other hand, n/y* ≪ d is quite likely, and it is easy to construct set cover
instances with ln d = Θ(log n) while ln(n/y*) = Θ(log log n).
For instance, we can arrange for just a few edges to have the maximum edge
size of n^{Θ(1)}, while keeping y* as high as n/log^{Θ(1)} n.
Thus, in the best case, we get a Θ(log n/log log n) factor improvement in the
approximation ratio. An important case of the unweighted set cover problem is
the dominating set problem: given a (directed) graph G, the problem is to pick
a minimum number of vertices such that for every vertex v, at least one
vertex in {v} ∪ Out(v) is picked, where Out(v) denotes the out-neighborhood of
v.
We next consider a more general domination-type problem on graphs, modeling
a class of location problems; it is sketched as a CIP below. Given a (directed)
graph G with n nodes and some integral parameter B ≥ 1, we have to place the
smallest possible number of facilities on the nodes of G, so that every node has
at least B facilities in its out-neighborhood; multiple facilities at the same node
are allowed.
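In CIP form (a sketch of ours, with a dense-matrix encoding chosen for clarity):
the variable x_v counts facilities at node v, row u of A has ones at the nodes in
Out(u), and b_u = B.

    def b_domination_cip(n, out_neighbors, B):
        # Build (A, b, c) for the B-domination CIP: minimize sum(x) s.t. A x >= b.
        # out_neighbors[u] lists the nodes whose facilities serve node u.
        A = [[0] * n for _ in range(n)]
        for u in range(n):
            for v in out_neighbors[u]:
                A[u][v] = 1          # a facility at v covers node u
        b = [B] * n
        c = [1] * n
        return A, b, c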
For the case where G is undirected with maximum degree Δ, an approximation
bound parametrized by Δ and B is presented in [17], improving on the bound
given by the standard analysis of randomized rounding. For us, Theorem 5 gives
a bound of

  1 + O(max{ln(nB/y*)/B, √(ln(nB/y*)/B)}).

Even if G is directed, this new bound is as good as or better than

  1 + O(max{ln(B Δ_in)/B, √(ln(B Δ_in)/B)}),

where Δ_in denotes the maximum in-degree of G; this is easily seen from the
fact that y* ≥ nB/(Δ_in + 1),
which follows from the same reasoning as for (14). We thus get a generalization
of the Naor-Roth result. In the case of undirected graphs, it is not hard to show
families of graphs for which the present bound is better than that of Naor &
Roth's by a factor of up to Θ(log n/log log n).
In addition to its independent interest, the above problem is a crucial subproblem
in the following file-sharing problem in distributed networks [17]. Given
an undirected graph G with maximum degree Δ and a file F of B bits, F must
be stored in some way at the nodes of G, such that every node can recover F
by examining the contents of its neighbors' memories; the aim is to minimize
the total amount of memory used. (Note that solving the above domination
problem is not sufficient for this task.) An approximation bound, (15),
parametrized by Δ and B,
is presented in [17] for this problem. Letting y* be the optimum of the above
domination problem on G, we derive an approximation bound
which is always as good as (15), and better if B ≫ ln Δ.
6 Concluding Remarks
We have presented a simple but very useful property of all packing and covering
integer programs-positive correlation. This naturally suggests a better
way of analyzing the performance of randomized rounding on PIPs and CIPs.
However, the provable probability of success-of satisfying all the constraints
and delivering a very good approximation-can be extremely low; so, in itself,
this approach may just prove an existential result. Fortunately, the structure
of PIPs and CIPs in fact suggests a pessimistic estimator, thus converting this
existence proof into a (deterministic) polynomial-time algorithm. In our view,
this is very interesting, and gives evidence of the utility of de-randomization
techniques. A common objection to de-randomization is that often, it converts
a fast randomized algorithm that has a good probability of success, to a somewhat
slower deterministic algorithm. However, note that the opposite is true
here! The randomized algorithm suggested by the existence proof can have an
extremely low probability of success; second, solving the LP relaxation heavily
dominates the running time, and the time for running the de-randomization is
comparatively negligible. (This observation about running the LP relaxation,
also suggests that in practice, it would be better to quickly get an approximately
optimal solution to the LP relaxation, since we are anyway dealing with
approximate solutions.)
Another conclusion is that studying correlations helps; this is a well-known
fact in number theory and statistical physics, for instance. In the case of PIPs
and CIPs, we have benefited from the fact that the constraints "help each other",
by being positively correlated. The precise reasons for such a correlation are
spelled out in Section 1.2. It is a challenging open question to use the structure
of correlations in more complicated scenarios; one such problem is the
set discrepancy problem [23, 2]. Given a system of n subsets S_1, …, S_n of
a ground set A with n elements, the problem is to come up with a function
ψ : A → {−1, +1} such that the discrepancy

  max_{i∈[n]} |Σ_{a∈S_i} ψ(a)|

is "small".
While randomized rounding and the method of conditional probabilities can
be used to produce a ψ with discrepancy O(√(n log n)) [23, 2], a classical
non-constructive result of Spencer shows the existence of a ψ with discrepancy O(√n)
[24]. This is best possible, and it is an important open problem to make this
constructive. If we write down the natural integer programming formulation for
this problem, we can see that each constraint is positively correlated with some
subsets of the constraints, and negatively correlated with others. (There is the
associated observation that in several IPs with both - and - constraints, the -
constraints are often positively correlated amongst each other; similarly for the
- constraints. This idea could potentially bring improvements in some cases.)
It would be very interesting if such more complicated forms of correlation can
be used to get a constructive result here.
Yet another potential avenue for improvement lies in lower-bounding, in the
context of (2), the ratio

  Pr(⋀_i ¬E_i) / ∏_i Pr(¬E_i),

at least for some particular classes of PIPs/CIPs. We know this ratio to be at
least one, by (2); a better lower bound (at least for particular problems) will
lead to better bounds on the integrality gap. Roughly speaking, such better
lower bounds seem plausible especially for PIPs/CIPs wherein "several" columns
have "several" nonzero entries, i.e., in situations where there is heavy (positive)
correlation among the constraints of the IP. This could however be a difficult
problem.
How far can such ideas be pushed? In the general setting of all PIPs and
CIPs, not much progress seems to be possible along these lines, as shown in Section
4. It would however be very interesting to improve our bounds for particular
important problems such as for the edge-disjoint paths problem on graphs. Fur-
thermore, it would be very interesting to study the correlations involved in other
relaxation approaches such as semi-definite programming relaxations.
Finally, as we had seen before, our bounds are incomparable with known
results for some weighted CIPs, e.g., those considered in [6, 4]. It would be
interesting if our method could be extended to include these results also.
Acknowledgements
We thank Moni Naor, Babu Narayanan and David Shmoys for their valuable
comments. Thanks in particular to Prabhakar Raghavan for his insightful suggestions
and pointers to the literature. Thanks are also due to the STOC 1995
program committee and anonymous referee(s) for their helpful comments on the
written style.
--R
Optima of dual integer linear pro- grams
The Probabilistic Method.
Efficient probabilistically checkable proofs and applications to approximation.
Linear programming relaxations
A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations.
A greedy heuristic for the set covering problem.
"La Sapienza"
On the greedy heuristic for continuous covering and packing problems.
Correlational inequalities for partially ordered sets.
Application of the FKG Inequality and its Relatives.
Probability inequalities for sums of bounded random vari- ables
Approximation algorithms for combinatorial problems.
On the ratio of optimal integral and fractional covers.
On the hardness of approximating minimization problems.
Randomized Algorithms.
Optimal file sharing in distributed networks.
Probabilistic construction of deterministic algorithms: approximating packing integer programs.
Randomized approximation algorithms in combinatorial op- timization
Randomized rounding: a technique for provably good algorithms and algorithmic proofs.
Some lower bounds of reliability.
Ten Lectures on the Probabilistic Method.
Six standard deviations suffice.
On an extremal problem in graph theory.
--TR
--CTR
Anupam Datta , Sidharth Choudhury , Anupam Basu, Using Randomized Rounding to Satisfy Timing Constraints of Real-Time Preemptive Tasks, Proceedings of the 2002 conference on Asia South Pacific design automation/VLSI Design, p.705, January 07-11, 2002
Benjamin Doerr, Non-independent randomized rounding, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Aravind Srinivasan, The value of strong inapproximability results for clique, Proceedings of the thirty-second annual ACM symposium on Theory of computing, p.144-152, May 21-23, 2000, Portland, Oregon, United States
Aravind Srinivasan, On the approximability of clique and related maximization problems, Journal of Computer and System Sciences, v.67 n.3, p.633-651, November
Yossi Azar , Iftah Gamzu , Shai Gutner, Truthful unsplittable flow for large capacity networks, Proceedings of the nineteenth annual ACM symposium on Parallel algorithms and architectures, June 09-11, 2007, San Diego, California, USA
Benjamin Doerr, Non-independent randomized rounding and coloring, Discrete Applied Mathematics, v.154 n.4, p.650-659, 15 March 2006
Stavros G. Kolliopoulos , Neal E. Young, Approximation algorithms for covering/packing integer programs, Journal of Computer and System Sciences, v.71 n.4, p.495-505, November 2005
Noga Alon , Dana Moshkovitz , Shmuel Safra, Algorithmic construction of sets for k-restrictions, ACM Transactions on Algorithms (TALG), v.2 n.2, p.153-177, April 2006
Stavros G. Kolliopoulos, Approximating covering integer programs with multiplicity constraints, Discrete Applied Mathematics, v.129 n.2-3, p.461-473, 01 August
Patrick Briest , Piotr Krysta , Berthold Vöcking, Approximation techniques for utilitarian mechanism design, Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, May 22-24, 2005, Baltimore, MD, USA
Aravind Srinivasan, New approaches to covering and packing problems, Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms, p.567-576, January 07-09, 2001, Washington, D.C., United States | integer programming;linear programming;positive correlation;combinatorial optimization;correlation inequalities;approximation algorithms;packing integer programs;derandomization;rounding theorems;covering integer programs;linear relaxations;randomized rounding |
333225 | Image Sequence Analysis via Partial Differential Equations. | This article deals with the problem of restoring and motion segmenting noisy image sequences with a static background. Usually, motion segmentation and image restoration are considered separately in image sequence restoration. Moreover, motion segmentation is often noise sensitive. In this article, the motion segmentation and the image restoration parts are performed in a coupled way, allowing the motion segmentation part to positively influence the restoration part and vice-versa. This is the key of our approach that allows to deal simultaneously with the problem of restoration and motion segmentation. To this end, we propose a theoretically justified optimization problem that permits to take into account both requirements. The model is theoretically justified. Existence and unicity are proved in the space of bounded variations. A suitable numerical scheme based on half quadratic minimization is then proposed and its convergence and stability demonstrated. Experimental results obtained on noisy synthetic data and real images will illustrate the capabilities of this original and promising approach. | Introduction
Automatic image sequence restoration is clearly a
very important problem. Applications areas include
image surveillance, forensic image process-
ing, image compression, digital video broadcast-
ing, digital film restoration, medical image processing,
remote sensing, etc. See, for example, the
recent work done within the European projects,
fully or in part, involved with this important
problem: AURORA 1 (Automated Restoration of
Film and Video Archives), NOBLESSE 2 (Nonlinear
Model-Based Analysis and Description of Images
for Multimedia Application), IMPROOFS 3
(IMage PROcessing Operations for Forensic Support).
Image sequence restoration is tightly
coupled to motion segmentation. It requires to
extract moving objects in order to separately restore
the background and each moving region
along its particular motion trajectory. Most of
the work done mainly involves motion compensated
temporal filtering techniques with appropriate
2D or 3D Wiener filters for noise suppression,
2D/3D median filtering, or more appropriate
morphological operators for removing impulsive
noise [16, 38, 39, 31, 27, 52, 19, 17]. However,
and due to the fact that image sequence restoration
is an emerging domain compared to 2D image
restoration, the literature is not as abundant
as the one related to the problem of restoring
just a single image. For example, numerous PDE
based algorithms have been recently proposed for
noise removal, 2D image enhancement and 2D image
restoration in real images with a particular
emphasis on preserving the grey level discontinuities
during the enhancement/restoration process.
These methods, which have been proved to be very
efficient, are based on evolving nonlinear partial
differential equations (PDE's) (see the work of
Alvarez et al. [4], Aubert et al. [8], Chambolle &
Lions [21], Chan [14, 67], Cohen [23], Cottet &
Germain [24], Kornprobst & Deriche [44, 43, 42],
Morel [3, 51], Nordström [54], Osher & Rudin [60],
Perona & Malik [58], Proesman et al. [59], Sapiro
et al. [20, 61, 62, 12, 63], Weickert [71, 72], You et
al.). This methodology provides several
advantages. Firstly, we can justify the model from
a theoretical point of view, using the theory of viscosity
solutions or the calculus of variations. Sec-
ondly, it provides some suitable numerical schemes
for which convergence may be proved. Finally, it
permits to obtain results of high quality.
It is the aim of this article to consider the important
problem of image sequence restoration by
applying this PDE based methodology, which has
been proved to be very successful in anisotropically
restoring images. To our knowledge, little
literature exists on the analysis of sequences of
images using Partial Differential Equations. How-
ever, we mention that this methodology has been
previously used in the context of multiscale analysis
of movies (see the works of Guichard [35] and
Moisan [50]). In all this work, we will assume that
the background is static. We recall that the background
will be defined as the most often observed
part over the sequence. Our goal will be to obtain
the motion segmentation and the restored background.
Therefore, considering the case of an image sequence
with some moving objects, we have to
consider both motion segmentation and image
restoration problems. Usually, these two problems
are treated separately in image sequence analysis.
However, it is clear that these two problems should
be treated simultaneously in order to achieve better
results. This is the key of our approach that
allows to deal simultaneously with the problem of
restoration and motion segmentation.
The organization of the article is as follows.
In Sect. 2, we make some precise recalls about
one of our previous approaches for denoising a single
image [26, 8, 43]. The formalism and the methods
introduced will be very useful in the sequel.
Section 3 is then devoted to the presentation of
our new approach to deal with the case of noisy
images sequences. We formulate the problem into
an optimization problem.
The model is theoretically justified in Sect. 4,
where we prove the existence and the unicity of the
solution to our problem in the space of bounded
variation functions.
A suitable algorithm is then proposed in Sect.
5 to approximate numerically the solution. We
prove its convergence and its stability.
We propose in Sect. 6 some experimental results
obtained on noisy synthetic and real data that will
illustrate the capabilities of this new approach.
We conclude in Sect. 7 by recalling the specificities
of this work and giving the future developments.
2. Restoring a single image
In Sect. 2.1, we recall a classical method in image
restoration formulated as a minimization problem
[26, 11, 8]. Section 2.2 presents a suitable
algorithm called the half quadratic minimization
which will also be used in the sequel.
2.1. A Classical Approach for Image Restoration
Let N(x₁, x₂) be a given noisy image defined for
(x₁, x₂) ∈ Ω ⊂ R², where Ω corresponds to the
domain of the image; ∇· is the gradient operator.
We search for the restored image I(x₁, x₂) as the
solution of the following minimization problem:

  inf_I { ∫_Ω (I − N)² dx + α_r ∫_Ω φ(|∇I|) dx }   (1)

(we refer to the two integrals as term 1 and term 2),
where |·| is the usual euclidian norm, α_r is a constant
and φ is a function still to be defined. Notice
that if φ(s) = s², we recognize the Tikhonov-
Arsenin regularization term [68]. How can we interpret
this minimization with this choice? In fact,
we search for the function I which will be simultaneously
close to the initial image N and smooth
(since we want the gradient as small as possible).
However, this method is well known to smooth
the image isotropically without preserving discontinuities
in intensity. The reason is that with
the quadratic function, gradients are too much
penalized. One solution that prevents the destruction
of discontinuities while still allowing uniform
areas to be smoothed isotropically is to change the
above quadratic term. This point has been widely
discussed [13, 53, 64, 66, 11, 8]. We refer to [26]
for a review. The key idea is that for low gradients,
isotropic smoothing is performed, while for
high gradients, smoothing is only applied in the
direction of the isophote and not across it. This
condition can be mathematically formalized if we
look at the Euler-Lagrange equation (2), associated
to energy (1):

  2(I − N) − α_r div( φ'(|∇I|) ∇I/|∇I| ) = 0.   (2)

Notice that Neumann conditions are imposed on
the boundaries. Let us concentrate on the divergence
term, associated to the term 2 of (1). If
we denote by η = ∇I/|∇I| the unit vector normal to
the isophote, and by ξ a unit vector with ξ ⊥ η
(|ξ| = 1), we can show that [26]:

  div( φ'(|∇I|) ∇I/|∇I| ) = (φ'(|∇I|)/|∇I|) I_ξξ + φ''(|∇I|) I_ηη   (3)
where I_ηη (respectively I_ξξ) denotes the second order
derivative in the direction η (respectively ξ). It
is interesting to notice that most diffusion operators
used for image restoration may also be decomposed
as the weighted sum of the second directional
derivatives I_ξξ and I_ηη. We refer to [41]
for more details. As for operator (3), if we want
a good restoration as described before, we would
like to have the following properties:

  lim_{s→0⁺} φ'(s)/s = lim_{s→0⁺} φ''(s) = φ''(0) > 0,   (4)

  lim_{s→∞} φ''(s) = 0  and  lim_{s→∞} φ'(s)/s = β > 0.   (5)

But it is clear that the two conditions in (5) are
incompatible. So, we will only impose for high
gradients [26, 11, 8]:

  lim_{s→∞} φ''(s) = lim_{s→∞} φ'(s)/s = 0.   (6)

Many functions φ have been proposed in the literature
that comply with the conditions (4) and (6)
(see [26]). From now on, φ will be a convex
function with linear growth at infinity which verifies
conditions (4) and (6). For instance, a possible
choice could be the hypersurface minimal function:

  φ(s) = √(1 + s²).   (7)

In that case, existence and unicity of problem
(1) have recently been shown in the Sobolev space W^{1,1}(Ω).
2.2. The Half Quadratic Minimization
Solving the minimization problem (1) directly through
its Euler-Lagrange equation (2)
is hard, because this equation is highly
nonlinear.
To overcome this difficulty, the key idea is to
introduce a new functional which, although defined
over an extended domain, has the same
minimum in I and can be manipulated with linear
algebraic methods. The method is based on
the half quadratic minimization theorem, inspired
from Geman and Reynolds [30]. The general idea
is that under some hypotheses on φ (mainly, that
s ↦ φ(√s) is concave), we can write it as an infimum:

  φ(x) = inf_d ( d x² + ψ(d) ),   (8)

where d will be called the dual variable associated
to x, and where ψ(·) is a convex and decreasing
function. We refer to the Appendix A for more
details. We can verify that the function proposed
in (7) can be written as in (8). Consequently, the
problem (1) is now to find I and its dual variable
d_I minimizing the functional F(I, d_I) defined by:

  F(I, d_I) = ∫_Ω (I − N)² dx + α_r ∫_Ω ( d_I |∇I|² + ψ(d_I) ) dx.   (9)

It is easy to check that for a fixed I, the functional
F is convex in d_I, and for a fixed d_I, it is convex in
I. These properties are used to build the algorithm,
which consists in minimizing alternatively
in I and d_I:

  I^{n+1} = argmin_I F(I, d_I^n),   (10)
  d_I^{n+1} = argmin_{d_I} F(I^{n+1}, d_I).   (11)

To perform each minimization, we simply solve the
Euler-Lagrange equations, which can be written as:

  (I^{n+1} − N) − α_r div( d_I^n ∇I^{n+1} ) = 0,   (12)
  d_I^{n+1} = φ'(|∇I^{n+1}|) / (2 |∇I^{n+1}|),   (13)

with discretized Neumann conditions at the
boundaries. Notice that (13) gives d_I^{n+1} explicitly,
while for (12), for a fixed d_I^n, I^{n+1} is the solution
of a linear equation. After discretizing in space,
we have that (I^{n+1}_{i,j})_{(i,j)∈Ω} is the solution of a linear
system, which is solved iteratively, by the Gauss-Seidel
method for example. We refer to the Appendix
B for more details about the discretization
of the divergence operator. We also mention that
the convergence of the algorithm has been proved
[69].
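As an illustration of the scheme (10)-(13), here is a compact NumPy sketch;
the discretization below (pixel-centered diffusivity, Jacobi sweeps instead of
Gauss-Seidel, and φ the hypersurface minimal function (7)) is our simplification
of the scheme detailed in Appendix B, not the authors' exact implementation.

    import numpy as np

    def restore(N, alpha_r=10.0, outer=30, inner=5):
        # Half-quadratic restoration: alternate the dual update (13)
        # with fixed-point sweeps on the linear problem (12).
        I = N.astype(float).copy()
        for _ in range(outer):
            gx, gy = np.gradient(I)
            d = 1.0 / (2.0 * np.sqrt(1.0 + gx**2 + gy**2))   # (13) for (7)
            for _ in range(inner):                           # (12), d frozen
                P = np.pad(I, 1, mode='edge')                # Neumann boundary
                nb = P[:-2,1:-1] + P[2:,1:-1] + P[1:-1,:-2] + P[1:-1,2:]
                I = (N + alpha_r * d * nb) / (1.0 + 4.0 * alpha_r * d)
        return I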
3. Dealing with Noisy Images Sequences
Let N(x₁, x₂, t) denote the noisy image sequence,
for which the background is assumed to be static.
A simple moving object detector can be obtained
using a thresholding technique over the
inter-frame difference between a so-called reference
image and the image being observed. Decisions
can be taken independently point by point
[73]. More complex approaches can also be used
[55, 57, 56, 1, 36, 45, 16, 38, 39, 31, 27, 52]. However,
in our application, we are not dealing with
just a motion segmentation problem, nor with just
a restoration problem. In our case, the so-called
reference image is built at the same time as the
image sequence is observed. Also, the motion
segmentation and the restoration are done in a
coupled way, allowing the motion segmentation
part to positively influence the restoration part
and vice-versa. This is the key of our approach,
which allows to deal simultaneously with the problem
of restoration and motion segmentation.
We first consider that the data is continuous in
time. This permits us to present the optimization
problem we want to study (Sect. 3.1). In Sect.
3.2, we rewrite the problem when the sequence is
given only by a finite set of images. This leads to
Problem 2.
3.1. An Optimization Problem
Let N(x₁, x₂, t) denote the noisy image sequence,
for which the background is assumed to be static.
Let us describe the unknown functions and what
we would like them ideally to be:
• B(x₁, x₂): the restored background,
• C(x₁, x₂, t): the sequence which will indicate
the moving regions. Typically, we would like
C(x₁, x₂, t) = 0 if the pixel (x₁, x₂) belongs to a
moving object at time t, and 1 otherwise.
Our aim is to find a functional depending on
(B, C) so that the minimizers
verify the previous statements. We propose to solve
the following problem:

Problem 1. Let N(x₁, x₂, t) be the given noisy image
sequence. We search for the restored background
B(x₁, x₂) and the motion segmented sequence
C(x₁, x₂, t) as the solution of the following
minimization problem:

  inf_{B,C} { ∫_Ω ∫_0^T C²(B − N)² dt dx   [term 1]
            + α_c ∫_Ω ∫_0^T (C − 1)² dt dx   [term 2]
            + α_r ∫_Ω φ₁(|∇B|) dx + α′_c ∫_0^T ∫_Ω φ₂(|∇C|) dx dt }   [term 3]   (14)

where φ₁ and φ₂ are convex functions that comply
with conditions (4) and (6), ∇ denotes the spatial
gradient, and α_c, α_r, α′_c are positive
constants. We will specify later the spaces over
which the minimization runs.
Getting the minimum of the functional means that
we want each term to be small, keeping in mind the
possibility of compensations between terms.
The term 3 is a regularization term. Notice
that the functions φ₁, φ₂ have been chosen as in
Sect. 2 so that discontinuities may be kept.
If we consider the term 2, this means that
we want the function C(x₁, x₂, t) to be close to
one. In our interpretation, this means that we
give a preference to the background. This is
physically correct since the background is visible
most of the time. However, if the data N(x₁, x₂, t) is
too far from the supposed background B(x₁, x₂) at time
t, then the difference (B − N)²
will be high, and to
compensate this value, the minimization process
will force C(x₁, x₂, t) to be zero. Therefore, the
function C can be interpreted as a motion
detection function. Moreover, when searching
for B(x₁, x₂), we will not take into account
the points where C(x₁, x₂, t) = 0 (but only those
where C(x₁, x₂, t) is close to 1). This
exactly means that B(x₁, x₂) will be the restored
image of the static background.
What about regularizing the functions in time?
As we can notice, the term 3
is a spatial smoothing term, and we may suggest
adding some temporal smoothing for the sequence
C. However, there are two difficulties to keep in mind:
• The sequence has to be well sampled in time.
This temporal regularization term will have no
real interpretation in the case of images taken at
widely separated times (as it can be the case in video-
surveillance).
• In the same spirit as before, the discretization
of the regularization operator (in time) will
be hard because it will depend strongly on the
kind of movement in the sequence. In fact this
kind of regularization is, in some way, equivalent
to finding the optical flow, which we wanted to avoid.
We can think that this term could be useful and
well discretized in a multiscale approach.
3.2. The Temporal Discretized Problem
In fact, we have only a finite set of images. Consequently,
we are going to rewrite Problem 1,
taking into account that the sequence N(x₁, x₂, t)
is represented during a finite time by T images,
noted N_h(x₁, x₂), h = 1, …, T.

Problem 2. Let (N_h)_{h=1,…,T} be the noisy sequence.
We search for B and (C_h)_{h=1,…,T} as the
solution of the following minimization problem:

  inf { E(B, C₁, …, C_T) = Σ_{h=1}^T ∫_Ω C_h²(B − N_h)² dx
      + α_c Σ_{h=1}^T ∫_Ω (C_h − 1)² dx
      + α_r ∫_Ω φ₁(|∇B|) dx + α′_c Σ_{h=1}^T ∫_Ω φ₂(|∇C_h|) dx }.   (15)
Before going further, one may be interested in
the link between this method and the variational
method developed for image restoration in section
2. To this end, let us consider a sequence of the
same noisy image. More generally, we can consider
a sequence of the same static image corrupted with
different noises. If we admit the interpretation of
the functions C_h, we will have C_h ≡ 1. After a few
computations, (15) may be re-written:

  inf_B { T ∫_Ω ( B − (1/T) Σ_{h=1}^T N_h )² dx + α_r ∫_Ω φ₁(|∇B|) dx } + constant.   (16)

Consequently, if we compare with the energy (1) proposed
for the image restoration problem, we can
consider B as the restored version of the mean in
time of the sequence. Notice that if the sequence
is simply T times the same image, both methods
correspond exactly. Therefore, this model devoted
to sequences of images can be considered as a natural
extension of the previous one for single image
restoration.
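To make the coupling of Problem 2 concrete, note that if the regularization
terms are ignored, the minimization in each C_h (for fixed B) and in B (for
fixed C_h) is pointwise explicit. The following bare-bones alternate update is
our illustration only; the full scheme of Sect. 5 also handles the φ₁, φ₂ terms.

    import numpy as np

    def alternate(N_seq, alpha_c, iters=20):
        # N_seq has shape (T, H, W); returns (B, C) without spatial smoothing.
        B = N_seq.mean(axis=0)
        for _ in range(iters):
            C = alpha_c / (alpha_c + (B - N_seq) ** 2)    # minimizer in C_h
            W = C ** 2
            B = (W * N_seq).sum(axis=0) / W.sum(axis=0)   # weighted mean in B
        return B, C

On a static background, C stays close to 1 and B converges to a temporal
average; where |B − N_h| is large, C_h drops toward 0, removing those pixels
from the average, which is exactly the motion-detection behaviour described
above.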
Now that we have justified the proposed model,
let us prove that it is mathematically well posed.
This is the purpose of the next section.
4. A Rigorously Justified Approach in the Space of Bounded Variations
Section 4.1 presents the mathematical background
of our problem: the space of bounded variations,
which is suitable for most problems in vision
[60, 22]. Roughly speaking, the idea is to generalize
the classical Sobolev space W^{1,1}(Ω) so that
discontinuities along hypersurfaces may be considered.
After having precisely specified the problem
in Sect. 4.2, we first prove the existence of a solution
in a constrained space (see Sect. 4.3). Using
this result, we finally prove the existence and the
unicity of a solution over the space of bounded
variations in Sect. 4.4.
4.1. The Space BV(Ω)
In this section we only recall the main notations and
definitions. We refer to [2, 28, 33, 29, 75] for the
complete theory.
Let Ω be a bounded open set in R^N, with
Lipschitz-regular boundary ∂Ω. We denote by L^N
or dx the N-dimensional Lebesgue measure in R^N,
and by H^α the α-dimensional Hausdorff measure.
We also set |E| = L^N(E), the Lebesgue measure
of a measurable set E ⊂ R^N.
B(Ω) denotes the family of the Borel subsets of Ω.
We will respectively denote the strong, the weak
and the weak* convergences in a space V(Ω) by
→_V, ⇀_V and ⇀*_V.
Spaces of vector valued functions will be noted by
bold characters.
Working with images requires that the functions
we consider can be discontinuous along
curves. This is impossible with classical Sobolev
spaces such as W^{1,1}(Ω). This is why we need
to use the space of bounded variations (noted BV(Ω)):

  BV(Ω) = { u ∈ L¹(Ω) : |Du|(Ω) < +∞ },

where C¹_c(Ω) is the set of differentiable functions
with compact support in Ω, and where we note:

  |Du|(Ω) = sup { ∫_Ω u div(ϕ) dx : ϕ ∈ C¹_c(Ω)^N, |ϕ(x)| ≤ 1 on Ω }.
If u ∈ BV(Ω) and Du is the gradient in the sense
of distributions, then Du is a vector valued Radon
measure, and |Du|(Ω) is the total variation of Du
on Ω. The set of Radon measures is noted M(Ω).
The product topology of the strong topology of
L¹(Ω) for u and of the weak* topology of measures
for Du will be called the weak* topology of BV,
and will be denoted by BV−w*.
We recall that every bounded sequence in BV(Ω)
admits a subsequence converging in BV−w*.
We define the approximate upper limit u⁺(x)
and the approximate lower limit u⁻(x) by:

  u⁺(x) = inf { t ∈ [−∞, +∞] : lim_{ρ→0} |{u > t} ∩ B_ρ(x)| / ρ^N = 0 },
  u⁻(x) = sup { t ∈ [−∞, +∞] : lim_{ρ→0} |{u < t} ∩ B_ρ(x)| / ρ^N = 0 },

where B_ρ(x) is the ball of center x and radius
ρ. We denote by S_u the jump set, that is to say
the complement of the set of Lebesgue points, i.e.
the set of points x where u⁻(x) and u⁺(x) are different,
namely S_u = { x ∈ Ω : u⁻(x) < u⁺(x) }.
After choosing a normal n_u pointing
toward the largest value of u, we recall the
following decomposition ([5] for more details):

  Du = ∇u dx + (u⁺ − u⁻) n_u H^{N−1}|_{S_u} + C_u,

where ∇u is the density of the absolutely continuous
part of Du with respect to the Lebesgue measure,
H^{N−1}|_{S_u} is the Hausdorff measure of dimension
N − 1 restricted to the set S_u, and C_u is the Cantor
part. We then recall the definition of a convex
function of measures. We refer to the works
of Goffman-Serrin [34] and Demengel-Temam [25]
for more details. Let φ be convex and finite on
R, with linear growth at infinity, and let φ^∞ be the
asymptote (or recession) function:

  φ^∞(z) = lim_{t→+∞} φ(tz)/t;

then for u ∈ BV(Ω), using classical notations, we
define

  ∫_Ω φ(Du) = ∫_{Ω∖S_u} φ(∇u) dx + φ^∞(1) ∫_{S_u} (u⁺ − u⁻) dH^{N−1} + φ^∞(1) |C_u|(Ω).   (18)

We finally mention that this function is lower
semi-continuous for the BV−w* topology.
4.2. Setting the problem
Let us recall the problem. Notice that the derivatives
will now be considered as distributional
derivatives. Consequently, the problem is to minimize
over BV(Ω)^{T+1} the functional E defined by:

  E(B, C₁, …, C_T) = Σ_{h=1}^T ∫_Ω C_h²(B − N_h)² dx + α_c Σ_{h=1}^T ∫_Ω (C_h − 1)² dx
                   + α_r ∫_Ω φ₁(DB) + α′_c Σ_{h=1}^T ∫_Ω φ₂(DC_h).   (19)

We recall that the regularization terms are interpreted
as convex functions of measures (see
(18)). The precise hypotheses on the functions φ₁, φ₂ are:

  φ_j (j = 1, 2) is an even and strictly convex
  function, nondecreasing on R⁺, and there
  exist constants c > 0 and b ≥ 0 such that
  c|s| − b ≤ φ_j(s) ≤ c|s| + b for all s.   (20)-(21)

As for the data (N_h)_{h=1..T}, we will assume that:

  N_h ∈ L^∞(Ω), h = 1, …, T,   (22)

and we will denote by m_N and M_N the constants defined by:

  m_N = min_h (ess−inf_Ω N_h),  M_N = max_h (ess−sup_Ω N_h),   (23)

where ess−inf (resp. ess−sup) is the essential
infimum (resp. supremum).
4.3. Existence of a solution in a constrained space
Let us consider the problem :
BV(\Omega\Gamma T+1 be a minimizing
sequence of E. Thanks to the property (20), one
may bound the derivatives of B and C h , and the
second term of E (see (19)) permits us to obtain
a bound for C h . However, nothing can be said
about the norm of B because of the product in
the -rst term of E (functions C h may be zero).
To overcome this diOEculty, let us introduce the
restricted space
E(\Omega\Gamma de-ned by :
Then, we have the following theorem :
Theorem 1. Given a sequence of images N h
verifying (22)-(23), the minimization problem :
where OE j verify (20)-(21), admits a solution in the
set
E(\Omega\Gamma .
Proof: The proof of this theorem is based on classical arguments. As mentioned at the beginning of this section, the idea is to bound a minimizing sequence uniformly, extract a converging subsequence and pass to the limit. Notice that working on this restricted space permits us to obtain a uniform bound for B. We refer to [10] for the complete proof.
4.4. Existence and uniqueness of a solution over BV(Ω)^{T+1}
The previous theorem establishes the existence of a solution on a restricted space. However, this result is not fully satisfactory: a constrained space is harder to handle, because the optimality conditions are variational inequalities rather than equations. In fact, even if these constraints are natural (with regard to the interpretation of the variables), we would like to avoid them. This is the aim of Theorem 2, but we first need a preliminary lemma.

Lemma 1. Let u ∈ BV(Ω), let φ verify hypotheses (20)-(21), and let ϕ_{α,β} (α < β) be the cut-off function defined by:

  ϕ_{α,β}(x) = α if x ≤ α,   x if α < x < β,   β if x ≥ β.

Then we have:

  ∫_Ω φ(D ϕ_{α,β}(u)) ≤ ∫_Ω φ(Du).
This lemma is very intuitive; however, we have to deal with distributional derivatives and functions of bounded variation. Consequently, we have to deal with jump sets and Cantor parts. We refer to Appendix C, where the complete proof is sketched.
Using this lemma, we can state the following theorem.

Theorem 2. Under hypotheses (20)-(21) and (22)-(23), the minimization problem:

  inf { E(B, C_1, …, C_T) : (B, C_1, …, C_T) ∈ BV(Ω)^{T+1} }

admits a solution in BV(Ω)^{T+1}. Moreover, if α_c satisfies condition (29), where the constants m_N, M_N are defined by (23), then the solution is unique.
Proof: Existence is proved by showing that the minimization problem (26) over E(Ω) is equivalent to the same problem posed over BV(Ω)^{T+1}, that is to say without any constraint (this is a direct consequence of Lemma 1). This remark permits us to prove the existence of a solution.
As for uniqueness, the difficulty comes from the apparent non-convexity of the integrand

  (B, C) ↦ C²(B − N)² + α_c (C − 1)²

with respect to all variables (notice that it is convex with respect to each variable separately). However, if α_c is large enough, we prove that this functional is convex over E(Ω), which permits us to conclude. We refer to [10] for the complete proof.
This theorem is important since it permits us to consider the minimization problem over all of BV(Ω)^{T+1} without any constraint. From a numerical point of view, this remark will also be important, since we will not have to handle Lagrange multipliers. We can also remark that the condition (29) is in fact natural: it means that the background must be sufficiently taken into account.
5. The Minimization Algorithm
In the preceding section, we saw that there is a unique solution in BV(Ω)^{T+1} of the minimization problem (28). The aim of this section is to propose a suitable algorithm to approximate this solution numerically.
Before beginning, we would like to insist on the fact that working numerically with BV(Ω) is difficult. Firstly, we cannot write Euler-Lagrange equations. Anzellotti [7] proposes an extension of the Euler-Lagrange equations, but they are variational inequalities. In an image restoration setting, Vese [69] gives a characterisation of the solution using a dual formulation. However, neither of them can be used numerically, for the time being.
Secondly, directly discretizing functions in BV(Ω) is still an open question. For these reasons, we propose an algorithm with two steps:
- Section 5.1: we define a functional E_ε on a more regular space. We show that the associated minimization problem admits a unique solution in W^{1,2}(Ω)^{T+1}, and that the functional E_ε Γ-converges to E for the L²(Ω)^{T+1}-strong topology (we refer to [32, 47] for more details about the notion of Γ-convergence). Consequently, the minimizers of E_ε converge for the L²-strong topology to the unique solution of the initial problem.
- Section 5.2: for a fixed ε, we construct a sequence (B^n, C_h^n) converging to the minimizer of E_ε for the L²-strong topology. It will be found as a minimizing sequence of an extended functional. This part is usually referred to as the half-quadratic minimization.
Consequently, we are able to construct a sequence converging to the unique minimum of the functional E for the L²-strong topology. We will end this section by presenting in Section 5.3 the precise discretized algorithm. Its stability will be proved using the fixed point theorem.
5.1. A Quadratic Approximation
We first extend an idea developed in [21]. For a function f satisfying hypotheses (20)-(21), we define an approximating function f_ε. We observe that for ε > 0, f_ε ≥ f, and that for all t, lim_{ε→0} f_ε(t) = f(t). Using this definition, let us denote by φ_{1,ε} and φ_{2,ε} the two functions associated to φ_1 and φ_2. We then define the functional

  E_ε(B, C_1, …, C_T) = Σ_{h=1}^{T} ∫_Ω C_h² (B − N_h)² dx + α_c Σ_{h=1}^{T} ∫_Ω (C_h − 1)² dx + α_r^b ∫_Ω φ_{1,ε}(|∇B|) dx + α_r^c Σ_{h=1}^{T} ∫_Ω φ_{2,ε}(|∇C_h|) dx.
Then, using the same ideas as for Theorem 2, we can prove that there exists a solution in W^{1,2}(Ω)^{T+1} of the problem:

  inf { E_ε(B, C_1, …, C_T) : (B, C_1, …, C_T) ∈ W^{1,2}(Ω)^{T+1} }.

Moreover, if α_c satisfies condition (29), where the constants m_N, M_N are defined by (23), then the solution is unique. We will denote by (B_ε, C_{h,ε}) the unique minimizer. We have the following proposition:

Proposition 1. The sequence of functionals (E_ε) Γ-converges to the functional E for the L²(Ω)^{T+1}-strong topology as ε goes to zero. The sequence of the unique minima of E_ε, noted (B_ε, C_{h,ε}), converges in L²(Ω)^{T+1}-strong to the unique minimum of E.
Proof: By construction, the sequence E_ε is a decreasing sequence converging pointwise to the functional Ẽ defined by:

  Ẽ = E on W^{1,2}(Ω)^{T+1},   Ẽ = +∞ otherwise.

Thanks to [47] (Proposition 5.7), we can deduce that E_ε Γ-converges to the lower semi-continuous envelope of Ẽ (for the L²(Ω)^{T+1}-strong topology), noted R(Ẽ). We then show that in fact R(Ẽ) = E, using some compactness results developed for instance in [25, 15].
5.2. An extension using dual variables
Let (B_ε, C_{h,ε}) be the unique minimum of the functional E_ε over W^{1,2}(Ω)^{T+1}. For a fixed ε, our aim is to approximate it. To this end, we need the result recalled in Appendix A, already used for the image restoration problem (see Sect. 2.2): let us apply Theorem 3 to the functions φ_{1,ε} and φ_{2,ε}, which fulfil the desired hypotheses (typically, the functions t ↦ φ_{i,ε}(√t) are concave). We will denote by Ψ_{1,ε} and Ψ_{2,ε} the associated functions Ψ. We then define the extended functional

  E_ε^d(B, C_h, d_B, d_{C_h}) = Σ_{h=1}^{T} ∫_Ω C_h² (B − N_h)² dx + α_c Σ_{h=1}^{T} ∫_Ω (C_h − 1)² dx + α_r^b ∫_Ω [ d_B |∇B|² + Ψ_{1,ε}(d_B) ] dx + α_r^c Σ_{h=1}^{T} ∫_Ω [ d_{C_h} |∇C_h|² + Ψ_{2,ε}(d_{C_h}) ] dx,
where we have introduced the dual variables d_B, d_{C_1}, …, d_{C_T} associated to B, C_1, …, C_T respectively. To minimize the functional E_ε^d, the idea is to minimize successively with respect to each variable: given the initial conditions (B^0, d_B^0, C_h^0, d_{C_h}^0), we iteratively solve the following system:

  B^{n+1} = argmin_B E_ε^d(B, d_B^n, C_h^n, d_{C_h}^n),   (35)
  d_B^{n+1} = argmin_{d_B} E_ε^d(B^{n+1}, d_B, C_h^n, d_{C_h}^n),   (36)
  C_h^{n+1} = argmin_{C_h} E_ε^d(B^{n+1}, d_B^{n+1}, C_h, d_{C_h}^n),   (37)
  d_{C_h}^{n+1} = argmin_{d_{C_h}} E_ε^d(B^{n+1}, d_B^{n+1}, C_h^{n+1}, d_{C_h}).   (38)

Equalities (37)-(38) are written for h = 1, …, T. Notice that the order of the minimization procedure is not important for all the results presented below. Each variable in (35) to (38) is obtained by solving the associated Euler-Lagrange equation. As we will see in Section 5.3, the dual variables d_B^{n+1} and (d_{C_h}^{n+1})_{h=1,…,T} are given explicitly, while B^{n+1} and C_h^{n+1} are solutions of linear systems. Any-
are solutions of linear systems. Any-
way, before going further, we need to know more
about the convergence of this algorithm : does it
converges and does (B
This is the purpose of the following
proposition
Proposition 2. Let (B^0, d_B^0, C_h^0, d_{C_h}^0) be the initial condition in W^{1,2}(Ω)^{T+1}. Then the sequence defined by the system (35)-(36)-(37)-(38) is convergent in L²(Ω)^{T+1}-strong. Moreover, the sequence (B^n, C_h^n) converges in L²-strong to the unique minimum of E_ε in W^{1,2}(Ω)^{T+1}, that is to say to (B_ε, C_{h,ε}).
Proof: The basis of the proof is to write the variational optimality conditions associated to each step and to pass to the limit in them. To this end we needed some results about non-linear elliptic equations [48, 49], and we used a trick of Minty (see for instance [18, 21]). For more details, we refer to [21, 9, 40], where these kinds of ideas have been developed.
5.3. The discretized algorithm
Let us write explicitly the equations implied by the system (35)-(36)-(37)-(38). Starting from an initial estimate (B^0, d_B^0, C_h^0, d_{C_h}^0), the equations that will be solved are the following:

  Σ_h (C_h^n)² (B^{n+1} − N_h) − α_r^b div(d_B^n ∇B^{n+1}) = 0,   (39)
  d_B^{n+1} = φ'_{1,ε}(|∇B^{n+1}|) / (2 |∇B^{n+1}|),   (40)
  C_h^{n+1} (B^{n+1} − N_h)² + α_c (C_h^{n+1} − 1) − α_r^c div(d_{C_h}^n ∇C_h^{n+1}) = 0,   (41)
  d_{C_h}^{n+1} = φ'_{2,ε}(|∇C_h^{n+1}|) / (2 |∇C_h^{n+1}|).   (42)
As we said in the previous section, (40) and (42) give explicitly the values of d_B^{n+1} and d_{C_h}^{n+1}, while B^{n+1} and C_h^{n+1} are solutions of linear systems. Once discretized using finite differences, the linear systems can be solved by a Gauss-Seidel method, for instance.
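As an illustration of this step, here is a minimal Gauss-Seidel sweep for a generic linear system Ax = b (our sketch; the actual systems (47)-(48) are sparse, with coefficients built from the divergence discretization of Appendix B):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, sweeps=50):
    """Plain Gauss-Seidel iterations for Ax = b (A with nonzero diagonal).

    In the algorithm of Sect. 5.3 this role is played by the sparse systems
    (47)-(48) for B^{n+1} and C_h^{n+1}; a dense matrix is used here only
    to keep the sketch short.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(n):
            sigma = A[i, :] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (b[i] - sigma) / A[i, i]
    return x

# Tiny usage example on a diagonally dominant system.
A = np.array([[4.0, -1.0], [-1.0, 4.0]]); b = np.array([3.0, 3.0])
print(gauss_seidel(A, b))   # -> approximately [1, 1]
```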
We next prove that the discretized algorithm described by (39) to (42) is unconditionally stable.

Proposition 3. Let Ω_d correspond to the discretization of Ω. Let E(Ω_d) be the space of discrete functions (B, C_1, …, C_T) defined on Ω_d such that, at every node (i, j),

  m_N ≤ B(i, j) ≤ M_N,   (43)
  0 ≤ C_h(i, j) ≤ 1,   (44)
  Σ_{h=1}^{T} C_h(i, j) > 0.   (45)

Then, for a given (B^n, C_h^n) ∈ E(Ω_d), there exists a unique (B^{n+1}, C_h^{n+1}) ∈ E(Ω_d) such that (39)-(42) are satisfied.

Remark that the bounds (43) and (44) can be justified if we consider the continuous case (see the proof of Theorem 2). As for condition (45), it is also very natural if we admit the interpretations of the variables C_h: if this condition were false, it would mean that the background is never seen at some points, which we refuse.
Proof: Let us sketch the proof. The first step is to express the discretized equations (39) and (41), using equations (40) and (42) and Appendix B for the divergence terms (see the definition of the coefficients p^{i+k,j+l}_{k,l}); this yields the linear systems (47) and (48) for B^{n+1} and C_h^{n+1}. We then show that we have a contractive application, and conclude by applying the fixed point theorem. We refer to [10] for the complete proof, which is mainly technical.
During this proof we needed to write explicitly the discretized equations to be solved. We give below a summary of the precise algorithm. Notice that it is not necessary to compute explicitly the dual variables, because they are directly replaced into the divergence operator.
1. /* Initializations (may be changed) */
2. Set the initial estimate (B^0, d_B^0, C_h^0, d_{C_h}^0)
3. /* General loop */
4. for (It = 0; It <= ItNumber; It++) {
5.   /* Minimizing in B */
6.   - Compute the coefficients p^{i+k,j+l}_{k,l} corresponding to the divergence discretization for B (see Appendix B)
7.   - Solve the linear system (47) by an iterative method (Gauss-Seidel) to find B^{n+1}
8.   /* Minimizing in C_h */
9.   for (h = 1; h <= T; h++) {
10.    - Compute the coefficients p^{i+k,j+l}_{k,l} corresponding to the divergence discretization for C_h (see Appendix B)
11.    - Solve the linear system (48) by an iterative method (Gauss-Seidel) to find C_h^{n+1}
12.  } /* Loop on h */
13. } /* Loop on It */
To conclude this section, we notice that if α_r^c = 0, the functions (C_h^{n+1}) are in fact obtained explicitly by:

  C_h^{n+1}(i, j) = α_c / ( α_c + (B^{n+1}(i, j) − N_h(i, j))² ).

As we can imagine, this case permits an important reduction of the computational cost, since T linear systems are replaced by T explicit expressions. We will discuss in Sect. 6 whether or not it is worth regularizing the functions C_h.
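The following sketch (our illustration, under the assumptions made above about the form of E, with the hypothetical choice φ_ε(t) = √(ε² + t²) and an explicit gradient step in place of Gauss-Seidel) shows the overall alternating scheme in the simplified case α_r^c = 0, where the C_h update is explicit:

```python
import numpy as np

def grad(u):
    # Forward differences with replicated border.
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def div(d, px, py):
    # Backward-difference divergence of the weighted field (d*px, d*py).
    fx, fy = d * px, d * py
    dxx = np.diff(fx, axis=0, prepend=np.zeros((1, fx.shape[1])))
    dyy = np.diff(fy, axis=1, prepend=np.zeros((fy.shape[0], 1)))
    return dxx + dyy

def restore_background(N_seq, alpha_c=1.0, alpha_rb=10.0, eps=1e-2,
                       iters=100, tau=0.2):
    """Alternating half-quadratic scheme, simplified case alpha_r^c = 0.

    N_seq: array (T, H, W) of noisy frames. For phi_eps(t) = sqrt(eps^2+t^2),
    the dual variable is d = phi'(t)/(2t) = 1/(2*sqrt(eps^2+t^2)).
    """
    B = N_seq.mean(axis=0)
    C = np.ones_like(N_seq)
    for _ in range(iters):
        # Explicit C_h update (the alpha_r^c = 0 formula above).
        C = alpha_c / (alpha_c + (B[None] - N_seq) ** 2)
        # Dual variable for B.
        gx, gy = grad(B)
        d_B = 0.5 / np.sqrt(eps**2 + gx**2 + gy**2)
        # One relaxation step on equation (39).
        residual = (C**2 * (B[None] - N_seq)).sum(axis=0) \
                   - alpha_rb * div(d_B, gx, gy)
        B = B - tau * residual
    return B, C
```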
6. The Numerical Study
This section aims at showing quantitative and qualitative results for this method. Synthetic noisy sequences will be used to estimate rigorously the capabilities of our approach. In all experiments, we fix the weights of the different terms, and we discuss the opportunity of choosing a nonzero coefficient α_r^c. The purpose of Sect. 6.1 is the quality of the restoration. Sect. 6.2 is devoted to the motion detection and its sensitivity with respect to noise. We conclude in Sect. 6.3 with real sequences.
6.1. About the Restoration
To estimate the quality of the restoration, we used the noisy synthetic sequence presented in Fig. 4 (a)-(b). Figure 4 (c) is a representation of the noisy background without the moving objects. We report the value of the Signal to Noise Ratio (SNR) usually used in image restoration to quantify the quality of the results. We refer to [43] for more details. We recall that the higher the SNR, the better the quality.
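For reference, a common SNR convention (one plausible choice; the exact definition used in the experiments is that of [43]) is 10 log₁₀ of the ratio between the variance of the clean image and the variance of the error:

```python
import numpy as np

def snr_db(clean, estimate):
    """SNR in decibels: 10*log10(var(clean)/var(error)).

    This is one standard convention; reference [43] may normalize differently.
    """
    err = estimate - clean
    return 10.0 * np.log10(clean.var() / err.var())
```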
Classically used to extract the foreground from the background, the median (see Fig. 4 (d)) appears to be inefficient. The average in time of the sequence (see Fig. 4 (e)), although it permits a noise reduction, keeps the trace of the moving objects. Fig. 4 (f) is the result that we obtained.
To conclude this section, let us mention that we also tried the case α_r^c = 0, that is to say we did not regularize the functions C_h. The resulting SNR was 14, to be compared with 14.4 (α_r^c ≠ 0). This kind of result has been observed in all experiments: regularizing the functions C_h does not seem to influence the quality of the restored background. Naturally, if we are interested in the motion detection itself, this regularization may be important. However, this point has to be investigated further, and more experimental results have to be considered before concluding.
6.2. The Sensitivity of Motion Detection With Respect to Noise
In this section, we aim at showing the robustness of our method with respect to noise. To this end, we choose a synthetic sequence (see Fig. 5) where a grey circle translates from left to right in front of a textured background.
To estimate the sensitivity of the algorithm, we corrupted the sequence with Gaussian noise of different variances (from 5 to 50). We give in Fig. 1 the value of the SNR of the corrupted sequences for each variance.
Figure 7 presents five typical results obtained for different values of σ (σ = 5, 15, 25, 35, 45). It gives qualitative information concerning the quality of the restoration and the motion detection. The criterion used to decide whether a pixel belongs to the background or not is: if C_h(i, j) is larger than a given threshold, then the pixel (i, j) of image number h belongs to the background. Otherwise, it belongs to a moving object. The threshold has been fixed to 0.25 in all experiments.
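In code, this decision rule is a simple per-pixel threshold on C_h (our sketch; 0.25 is the value used in the experiments):

```python
import numpy as np

def motion_mask(C_h, threshold=0.25):
    """Boolean mask of moving pixels for frame h.

    A pixel is declared background where C_h exceeds the threshold,
    and part of a moving object otherwise.
    """
    return C_h <= threshold   # True on moving regions
```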
We can observe that when the SNR of the data is more than 8 (corresponding to σ ≤ 25), the results are particularly precise: the SNR of the background is more than 20 (see Fig. 2), and the detection errors are less than 5 percent (see Fig. 3). When the SNR of the data is less than 8, the motion detection errors grow rapidly, but the quality of the restored background still remains correct. See for instance the last row of Fig. 7, obtained for σ = 45: the triangles on both sides are well recovered (observe the strong noise in the sequence).
Finally, notice that the same parameters (α_r^b, α_r^c) have been used for all experiments. Generally speaking, we remarked that the algorithm performs well on a wide variety of sequences with the same set of parameters.
Fig. 1. Signal to Noise Ratio of the data as a function of the variance.
Fig. 2. SNR of the background as a function of the SNR of the data.
Fig. 3. Dotted (resp. plain) line: percentage of bad detections for the moving regions (resp. static background) as a function of the SNR of the data.
Fig. 4. Results on a synthetic sequence (5 images). (a) Description of the sequence (first image). (b) Last image of the sequence. (c) The noisy background without any objects (SNR = 9.5). (d) Median (SNR = 5.7). (e) Average (SNR = 9.8). (f) Restored background, α_r^c ≠ 0 (SNR = 14.4).
Fig. 5. Three images of the initial synthetic sequence (35 images are available).
Fig. 6. Left: SNR of the background as a function of the SNR of the data. Right: dotted (resp. plain) line: percentage of bad detections.
Fig. 7. Left: one image of the noisy sequence. Middle: the motion detection based on variable C_h at the same time. Right: the restored background B. From top to bottom: results for different variances of the Gaussian noise (5, 15, 25, 35, 45).
6.3. Results on Real Sequences
Numerous real sequences have been tested using this methodology. We present some results where the background of the scene is seen most of the time. To be more precise, we mention that some experiments have been done where some people were hiding the background more than sixty percent of the time. In that case, the background found does not correspond to the real static regions and takes some people into account in the reconstruction. One possible way to avoid this could be to add some a priori information about the movement.
The first real sequence is presented in Fig. 8 (a)-(b). A small amount of noise is introduced by the camera, and certainly by the harsh weather conditions. Notice the reflections on the ground, which is frozen. We show in Fig. 8 (c) the average in time of the sequence. The restored background is shown in Fig. 8 (d). As we can see, it has been recovered and enhanced very well. Figure 8 (e) is a representation of the function C_h (using a threshold of 0.5), and we show in Fig. 8 (f) the associated dual variable d_{C_h}.
The second sequence is noisier than the first one. Its description is given in Fig. 9 (a). To evaluate the quality of the restoration, we show a close-up of the same region for one original image (see Fig. 9 (b)), the average in time (see Fig. 9 (c)), and the restored background B (see Fig. 9 (d)). The detection of moving regions is displayed in Fig. 9 (e). Notice that some sparse motion has been detected at the bottom right and at the left side of the two persons. This corresponds to the motion of a bush and the shadow of a tree due to the wind.
The last sequence is taken from a highway (see Fig. 10). We give two images (Fig. 10 (a) and (b)) and the corresponding motion detections below. Finally, we show in Fig. 10 (e) the restored background. Notice that there is a black zone at the top of the road, which comes from the fact that there are always cars in that region.
Notice that the corresponding animations are available on Pierre Kornprobst's home page⁴.
7. Conclusion
We have presented in this article an original coupled method for the problem of image sequence restoration and motion segmentation. A theoretical study in the space of functions of bounded variation showed that the problem is well-posed. We then proposed a convergent, stable algorithm to approximate the unique solution of the initial minimization problem.
This original way to restore image sequences has been shown to give very promising results. A straightforward extension to color image sequences has recently been developed. To complete this work, several ideas are being considered: use the motion segmentation part to also restore the moving regions, and think about possible extensions to non-static cameras. This is the object of our current work.
Appendix A
The Half Quadratic Minimization Theorem
This theorem has been inspired by Geman and Reynolds [30] and proposed by Aubert [8].

Theorem 3. Let φ: [0, +∞) → [0, +∞) be such that

  t ↦ φ(√t) is concave on (0, +∞).   (1)

Let L and M be defined as:

  L = lim_{t→+∞} φ'(t)/(2t)   and   M = lim_{t→0⁺} φ'(t)/(2t).

Then there exists a convex and decreasing function Ψ: (L, M] → [β₁, β₂) such that

  φ(t) = min_{L ≤ d ≤ M} ( d t² + Ψ(d) ),

where β₁ and β₂ are constants determined by φ. Moreover, for every fixed t ≥ 0, the value d_t for which the minimum is reached is unique and given by:

  d_t = φ'(t) / (2t).

In addition, one can give the expression of the function Ψ with respect to φ.
Fig. 8. Sweden Sequence: (a) and (b) Description of the sequence (55 images available). Two people are walking from top to bottom. This sequence is available from the web site http://www.ien.it/is/is.html. (c) The average over time. (d) The restored background B. (e) Function C_h associated to the image (a) (a threshold of 0.5 has been used). (f) The dual variable d_{C_h} associated to the image (a).
Fig. 9. INRIA Sequence: (a) Description of the sequence (12 images available). (b) Zoom on an upper right part of the original sequence (without objects). (c) Zoom on the mean image. (d) Zoom on the restored background B. (e) The function C_h thresholded. (f) The dual variable d_{C_h}.
Fig. 10. Highway Sequence: (a) and (b) Two images from the sequence (90 images available). (c) and (d) Corresponding C_h functions. (e) The restored background.
However, notice that this expression will never be used explicitly.
Appendix B
On Discretizing the Divergence Operator
Let d and A be given at the nodes (i, j). The problem is to get an approximation of div(d∇A) at the node (i, j). We denote by δ_{x1} and δ_{x2} the finite difference operators in the two principal directions. Using that notation, Perona and Malik [58] proposed the following approximation:

  div(d∇A)_{i,j} ≈ δ_{x1}( d δ_{x1}A )_{i,j} + δ_{x2}( d δ_{x2}A )_{i,j} = (p * A)_{i,j} − S_P A_{i,j},   (1)

where the symbol * denotes the convolution with a mask p of weights in the four principal directions, and S_P is the sum of these four weights. Notice that we need to estimate the function d at intermediate nodes. Our aim is to extend this approximation so that we can take into account the values of A at the diagonal nodes:

  div(d∇A)_{i,j} ≈ α_P [ (p * A)_{i,j} − S_P A_{i,j} ] + α_D [ (q * A)_{i,j} − S_D A_{i,j} ],   (2)

where α_P and α_D are two weights to be discussed, q is the corresponding mask on the diagonal nodes, and S_D is the sum of the four weights in the diagonal directions. Approximation (2) is consistent if and only if:

  α_P + α_D = 1,   (3)

so there remains one degree of freedom. Two possibilities have been considered:

  α_P and α_D constant,   (4)
  α_P and α_D functions of d (see Fig. B.1).   (5)

Fig. B.1. α_P as a function of the direction of the gradient of d; α_D is then computed thanks to the consistency condition.
Before going further, remark that any discretization of this form leads to:

  div(d∇A)_{i,j} ≈ Σ_{(k,l)} p^{i+k,j+l}_{k,l} A_{i+k,j+l} − ( Σ_{(k,l)} p^{i+k,j+l}_{k,l} ) A_{i,j},

with nonnegative coefficients p^{i+k,j+l}_{k,l} built from d, α_P and α_D. To compare these different discretizations, we made numerical experiments with the image restoration problem, where this kind of operator has to be discretized. We recall that for a given I^n, we need to find I^{n+1} by one step of a diffusion scheme involving div(d_I^n ∇I^n). We refer to Section 2 for more details. The value of d_I^n at intermediate nodes is computed by interpolation (see [58]).
We tested these different discretizations on a noisy test image using quantitative measures. We checked that (2) permits edges in the principal or diagonal directions to be restored identically. Moreover, we observed that choosing α_P adaptively as in (5) gave more precise results than (4). We used the approximation (5) in our experiments.
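A minimal version of the principal-direction discretization (1) (our sketch; d is interpolated at the half-grid nodes by averaging adjacent nodal values, one common choice):

```python
import numpy as np

def div_d_grad(A, d):
    """4-neighbour approximation of div(d grad A) at interior nodes.

    d is interpolated at the intermediate (half-grid) nodes by averaging
    the two adjacent nodal values; fluxes are zero-padded at the border,
    giving a homogeneous Neumann-type behaviour.
    """
    out = np.zeros_like(A)
    # d at the half-nodes between neighbouring grid points.
    dE = 0.5 * (d[1:, :] + d[:-1, :])
    dN = 0.5 * (d[:, 1:] + d[:, :-1])
    # Fluxes through the half-nodes.
    fE = dE * (A[1:, :] - A[:-1, :])
    fN = dN * (A[:, 1:] - A[:, :-1])
    # Divergence = difference of incoming and outgoing fluxes.
    out[1:-1, :] += fE[1:, :] - fE[:-1, :]
    out[:, 1:-1] += fN[:, 1:] - fN[:, :-1]
    return out
```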
Appendix C
Proof of Lemma 1
Proof: Let us first recall the Lebesgue decomposition of the measure φ(Du):

  ∫_Ω φ(Du) = ∫_Ω φ(|∇u|) dx + φ^∞(1) ∫_{S_u} (u⁺ − u⁻) dH^{N−1} + φ^∞(1) ∫_{Ω∖S_u} |C_u|.

We are going to show that cutting the function u using the function ϕ_{α,β} reduces each term. To simplify notations, we will sometimes write ū for the truncated function ϕ_{α,β}(u), and set Ω' = { x ∈ Ω : α < u(x) < β }.

Step 1: Thanks to [37], we have ∇ū = ∇u a.e. on Ω' and ∇ū = 0 a.e. on Ω ∖ Ω'. Consequently, since φ is nondecreasing on R⁺:

  ∫_Ω φ(|∇ū|) dx = ∫_{Ω'} φ(|∇u|) dx + φ(0) |Ω ∖ Ω'| ≤ ∫_Ω φ(|∇u|) dx.   (1)
Step 2: Using results proved in [5], we know that S_ū ⊂ S_u, with ū^± = ϕ_{α,β}(u^±) on S_ū. Thanks to these results, and since ϕ_{α,β} is Lipschitz continuous with a constant equal to 1, we have:

  ∫_{S_ū} (ū⁺ − ū⁻) dH^{N−1} ≤ ∫_{S_u} (u⁺ − u⁻) dH^{N−1}.   (2)
Step 3: We need to understand the Cantor part of the distributional derivative of the composed function ϕ_{α,β}(u). Vol'pert [70] first proposed a chain rule formula for functions u ∈ BV(Ω) when the outer function is continuously differentiable. Ambrosio and Dal Maso [6] gave extended results for outer functions that are uniformly Lipschitz continuous. Since u is scalar, it is demonstrated in [6] that we can write

  C_ū = ϕ'_{α,β}(ũ) C_u   on Ω ∖ S_u,   (3)

where ũ is the approximate limit of u, defined by:

  ũ(x) = lim_{r→0} (1 / |B(x, r)|) ∫_{B(x,r)} u(y) dy,

where B(x, r) is the closed ball with center x and radius r. Moreover, we have:
  ∫_Ω |C_ū| = ∫_{Ω∖S_u} |ϕ'_{α,β}(ũ)| |C_u| + ∫_{S_u∖S_ū} |C_ū|.   (4)

Notice that the second integral equals zero, because the Hausdorff dimension of the set S_u ∖ S_ū is at most N − 1, and we know that for any v ∈ BV(Ω) and any set S of Hausdorff dimension at most N − 1, we have C_v(S) = 0. Consequently, using the chain rule formula (3), we have:

  ∫_Ω |C_ū| = ∫_{Ω∖S_u} |ϕ'_{α,β}(ũ)| |C_u| ≤ ∫_{Ω∖S_u} |C_u|.   (5)

Finally, using the results (1), (2), (5), we may write

  ∫_Ω φ(Dū) ≤ ∫_Ω φ(Du).

This concludes the proof.
Notes
1. http://www.ina.fr/INA/Recherche/Aurora/index.en.html
2. http://www.spd.eee.strath.ac.uk/users/harve/noblesse.html
3. http://www.esat.kuleuven.ac.be/ konijn/improofs.html
4. http://www.inria.fr/robotvis/personnel/pkornp/pkornp-eng.html
References
Bayesian algorithms for adaptive change detection in image sequences using Markov random fields.
Analysis of bounded variation penalty methods for ill-posed problems.
Image selective smoothing and edge detection by nonlinear diffusion (II).
Signal and image restoration using shock filters and anisotropic diffusion.
A compactness theorem for a new class of functions of bounded variation.
A general chain rule for distributional derivatives.
The Euler equation for functionals with linear growth.
Deterministic edge-preserving regularization in computed imaging.
A mathematical study of the regularized optical flow problem in the space BV.
A variational method and its mathematical study in image sequence analysis.
A variational method in image recovery.
Robust anisotropic diffusion.
Visual Reconstruction.
Color TV: total variation methods for restoration of vector-valued images.
Noise reduction of image sequences using adaptive motion compensated frame averaging.
Simultaneous recursive displacement estimation and restoration of noisy-blurred image sequences.
Deterioration detection for digital film restoration.
Image recovery via total variation minimization and related problems.
Nonlinear variational method for optical flow computation.
Auxiliary variables and two-step iterative algorithms in computer vision problems.
Image processing through reaction combined with nonlinear diffusion.
Convex functions of a measure and applications.
"Traitement du Signal".
Noise reduction in image sequences using motion-compensated temporal filtering.
Measure Theory and Fine Properties of Functions.
Geometric Measure Theory.
Constrained restoration and the recovery of discontinuities.
McClure, and Donald Geman.
Su un tipo di convergenza variazionale.
Minimal Surfaces and Functions of Bounded Variation.
Sublinear functions of measures and variational integrals.
Axiomatisation des analyses multi-échelles d'images et de films.
Moving object segmentation based on adaptive reference images.
An Introduction to Variational Inequalities and Their Applications.
Reconstruction of severely degraded image sequences.
A system for reconstruction of missing data in image sequences using sampled 3D AR models and MRF motion priors.
Image restoration via PDEs.
Image coupling.
Nonlinear operators in image restoration.
Motion detection in spatio-temporal space.
Image processing: flows under min/max curvature and mean curvature.
An introduction to Γ-convergence.
Some results on regularity for solutions of non-linear elliptic systems and quasi-regular functions.
Traitement num.
Segmentation of images by variational methods: a constructive approach.
Image Sequence Restoration using Gibbs Distributions.
Optimal approximations by piecewise smooth functions and associated variational problems.
IEEE Computer Society Press.
Detection and localization of moving objects in image sequences.
Detecting multiple moving targets using deformable contours.
Coupled Geometry-Driven Diffusion Equations for Low-Level Vision.
Total variation based image restoration with free local constraints.
Contrast enhancement via image evolution flows.
Anisotropic diffusion of multivalued images with applications to color filtering.
Experiments on geometric image enhancement.
Unique reconstruction of piecewise-smooth images by minimizing strictly convex non-quadratic functionals.
A common framework for curve evolution.
Discontinuity preserving regularization of inverse visual problems.
Spatially and scale adaptive total variation based regularization and anisotropic diffusion in image processing.
Solutions of Ill-posed Problems.
The spaces BV and quasilinear equations.
Anisotropic Diffusion in Image Processing.
Motion detection from image information.
Analysis and Design of Anisotropic Diffusion for Image Processing.
Weakly Differentiable Functions.
The Conditioning of Boundary Element Equations on Locally Refined Meshes and Preconditioning by Diagonal Scaling

Abstract. Consider a boundary integral operator on a bounded, d-dimensional surface in R^{d+1}. Suppose that the operator is a pseudodifferential operator of order 2m, m ∈ R, and that the associated bilinear form is symmetric and positive-definite. (The surface may be open or closed, and m may be positive or negative.) Let B denote the stiffness matrix arising from a Galerkin boundary element method with standard nodal basis functions. If local mesh refinement is used, then the partition may contain elements of very widely differing sizes, and consequently B may be very badly conditioned. In fact, if the elements are nondegenerate and 2|m| < d, then the ℓ₂ condition number of B satisfies cond(B) ≤ C N^{2|m|/d} (h_max/h_min)^{d−2m}, where h_max and h_min are the sizes of the largest and smallest elements in the partition, and N is the number of degrees of freedom. However, if B is preconditioned using a simple diagonal scaling, then the condition number is reduced to O(N^{2|m|/d}). That is, diagonal scaling restores the condition number of the linear system to the same order of growth as that for a uniform partition. The growth in the critical case 2|m| = d is worse by only a logarithmic factor.
\Gamma be a bounded, d-dimensional, open or closed surface in R d+1 , where d - 1.
H s (\Gamma) denote the usual Sobolev spaces. Precise definitions and
assumptions are deferred until a later section. However, observe if \Gamma is a closed
surface then the spaces e
H s (\Gamma) and H s (\Gamma) coincide since \Gamma has no boundary. Let m
The authors were supported by the Australian Research Council.
y Mathematics Department, Leicester University, Leicester LE1 7RH, United Kingdom.
M.Ainsworth@mcs.le.ac.uk
z School of Mathematics, The University of New South Wales, Sydney 2052, Australia.
x Centre for Mathematics and its Applications, School of Mathematical Sciences, Australian
National University, Canberra, A.C.T. 0200, Australia. Thanh.Tran@maths.anu.edu.au
be a real number and suppose that B(·, ·): H̃^m(Γ) × H̃^m(Γ) → R is a symmetric, bilinear form satisfying

  B(v, v) ≃ ‖v‖²_{H̃^m(Γ)}   for v ∈ H̃^m(Γ).   (1)

(Throughout, the notation a ≲ b will be used to indicate that a ≤ Cb for some positive constant C that is independent of the main quantities of interest, while a ≃ b is equivalent to a ≲ b and b ≲ a.) Consider the problem of finding u ∈ H̃^m(Γ) satisfying

  B(u, v) = f(v)   for all v ∈ H̃^m(Γ),   (2)

where f: H̃^m(Γ) → R is a bounded, linear functional. The existence of a unique solution u for each f follows immediately from the Riesz Representation Theorem. Typically, (2) arises from the variational formulation of a boundary integral equation associated with an elliptic boundary value problem. If the surface Γ is smooth, then the integral operator associated with the bilinear form B is a classical pseudodifferential operator of order 2m on the manifold Γ.
The problem (2) will be approximated by first constructing a finite dimensional subspace X ⊂ H̃^m(Γ) on a partition P of the surface Γ, and then finding u_X ∈ X such that

  B(u_X, v) = f(v)   for all v ∈ X.   (3)

This problem can be written as a system of linear equations by introducing a nodal basis {φ_k : k ∈ N}. The precise details will be given later. The approximation u_X is then written in the form

  u_X = Σ_{k∈N} α_k φ_k,

with the coefficients {α_k} determined by the equations

  Σ_{k∈N} B(φ_k, φ_j) α_k = f(φ_j),   j ∈ N,

or, in matrix form,

  B α = f,   (4)

where [B]_{jk} = B(φ_k, φ_j) and [f]_j = f(φ_j).
The assumptions on the bilinear form B(·, ·) mean that the matrix B will be symmetric and positive definite. The solution of the system (4) is typically accomplished by the use of a direct solver such as Gaussian elimination (Cholesky factorization), or sometimes by an iterative solver such as the conjugate gradient method. The size of the condition number cond(B) of the matrix B is important for the quality of the answers obtained by a direct method, and for the rate of convergence of the iterative method. Either way, a large condition number points to possible difficulties, and it is important to understand and control the condition number.
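For instance (a sketch using SciPy; the diagonal, or Jacobi, preconditioner here anticipates the scaling analysed in Section 2.3):

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def solve_galerkin(B, f):
    """Solve B alpha = f by conjugate gradients, unpreconditioned and with
    the diagonal (Jacobi) preconditioner r -> D^{-1} r."""
    n = B.shape[0]
    d_inv = 1.0 / B.diagonal()
    M = LinearOperator((n, n), matvec=lambda r: d_inv * r)
    alpha_plain, _ = cg(B, f)          # convergence rate governed by cond(B)
    alpha_scaled, _ = cg(B, f, M=M)    # governed by cond of the scaled matrix
    return alpha_plain, alpha_scaled
```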
The classical version of the boundary element method seeks to improve the accuracy by uniformly subdividing all elements in the partition. While this process does improve the approximation properties of the subspace X, the condition number is also increased. Specifically, the growth of the condition number depends on the order 2m of the operator and the number of spatial dimensions d, as follows:

  cond(B) ≃ N^{2|m|/d},

where N is the number of degrees of freedom in the approximation space; cf. Hsiao and Wendland [7, Remark 4], [8, Corollary 2.1].
If the solution u has singularities or other local features, then it is more efficient to refine the partition adaptively, so that the space X is tailored towards approximating the particular solution u of the specific problem in hand. Often, the final adaptively refined partition contains elements of very widely differing sizes. This also has an adverse effect on the growth of the condition number. For instance, we shall see that if 2|m| < d, then

  cond(B) ≲ N^{2|m|/d} (h_max / h_min)^{d−2m},   (5)

where h_max and h_min are the sizes of the largest and smallest elements in the partition.
The severe growth of the condition number due to the local refinements might easily mean that the advantages accrued by adaptively refining the mesh are dissipated by the cost of dealing with the solution of a highly ill-conditioned linear system. The purpose of the current work is to address this issue. It will be shown that if the matrix B is preconditioned or scaled using the matrix diag B, obtained by taking the elements on the leading diagonal, then the extra growth factor in (5) depending on the global mesh ratio is, provided 2|m| < d, essentially removed:

  cond of the scaled matrix ≲ N^{2|m|/d}.

That is to say, diagonal scaling restores the condition number of the linear system to the same order of growth as would be obtained if a uniform refinement scheme were employed. The growth in the critical case 2|m| = d is marginally worse.
The current work finds much of its inspiration in the paper of Bank and Scott [2] on the finite element approximation of problems associated with second order elliptic partial differential equations in two and three dimensions. The current investigation focuses on boundary element equations, and as such allows for operators of non-integer and possibly negative orders. This means that the associated Sobolev spaces are also of non-integer and possibly negative orders, resulting in a number of technical difficulties. Nevertheless, the conclusions are simple and applicable to the practical solution of boundary element equations on highly refined meshes, such as those commonly arising from adaptive refinement procedures.
The remainder of the paper is organized as follows. The next section elaborates on the construction of the boundary element approximation and on the conditioning of the linear system, and concludes with a statement of the main result of the paper: Theorem 1. Section 3 illustrates with numerical experiments how the theory applies in practice to boundary element approximations of weakly singular and hypersingular boundary integral equations posed on surfaces in R² and R³. The Sobolev spaces are defined in Section 4, where we also prove several technical results. Theorem 9 in that section contains sharp estimates for the norms of standard nodal basis functions in fractional order Sobolev spaces, and may be of independent interest for the analysis of boundary element methods in general. Section 5 consists of five lemmas that together constitute the proof of our bounds on the growth of the extreme eigenvalues of the stiffness matrix, with and without diagonal scaling, and hence establish Theorem 1.
2. Partitions and Preconditioning
2.1 Galerkin Subspace
It will be assumed that the surface Γ is bounded and, for some fixed integer r ≥ |m|, is locally the graph of a C^{r−1,1} function over a C^{r−1,1} domain in R^d. In particular, this assumption is needed so that the Sobolev spaces H^m(Γ) and H̃^m(Γ) (defined later) are well-defined.
Let P be a partitioning of the boundary Γ into boundary elements K, as described in [9, 11]. In particular, the non-empty intersection of a pair of distinct elements K and K′ is a single common vertex or edge. If Γ is a two dimensional surface, then K is typically a curvilinear triangle or quadrilateral. Each element K is assumed to be the image of a common reference element K̂ under a smooth bijective mapping F_K. Let (K̂, P̂, Σ̂) be a finite element in the sense of Ciarlet [5], with the set P̂ consisting of polynomials defined on K̂, unisolvent with respect to the finite set Σ̂ of degrees of freedom. The degrees of freedom are identified with point evaluations at distinct nodes {x̂_i : i ∈ I} on K̂. The placement of the nodes depends on the interelement continuity requirements of functions in the space X, and standard placements are well-established [5]. The local nodes in turn give rise to a set of global nodes {x_k : k ∈ N}, where N denotes a suitable indexing set of size N. A set of global, nodal basis functions {φ_k : k ∈ N} is defined by the normalization condition

  φ_k(x_j) = δ_{kj},

and the requirement that the restriction of each function φ_k(x) to an element K be of the form ŵ(x̂), where x = F_K(x̂) for some ŵ ∈ P̂. The Galerkin subspace is defined by X = span{φ_k : k ∈ N} and has dimension N. If m < 0 then, in order to apply Lemma 6 during the proof of Theorem 9, we impose the condition

  ∫_Γ φ_k dσ ≠ 0,   (6)

where dσ is the element of arc length (d = 1) or surface area (d = 2) on Γ. Condition (6) rules out certain placements of the nodes: for instance, if K̂ = (−1, 1) and F_K is affine, then nodes at {−1/√3, 1/√3} on the reference element K̂ would lead to a discontinuous quadratic basis function with mean value zero.
Each partition P belongs to a family of partitions of Γ. The family is assumed to be non-degenerate, so that the ratio of the diameter of an element to the diameter of its largest inscribed ball is uniformly bounded over the whole family. It is also assumed that the number of elements intersecting Γ_k, the support of φ_k, is uniformly bounded. Associated with each nodal basis function φ_k is a parameter h_k, defined to be the average of the diameters of the elements forming the support Γ_k. The non-degeneracy assumption implies, if d ≥ 2, that the ratio of the diameters of any pair of adjacent elements is uniformly bounded (that is, the partition is locally quasi-uniform). In the one dimensional case, d = 1, this assumption is stipulated separately. It is important to realize that locally quasi-uniform partitions may still contain elements of greatly differing size. Indeed, if h_max and h_min respectively denote the diameters of the largest and smallest elements in the mesh, then the global mesh ratio h_max/h_min may be arbitrarily large. In particular, the assumptions do not rule out families of partitions of the type generated by adaptive refinement algorithms, starting from an initial coarse mesh and creating a sequence of nested partitions by selectively refining elements on the basis of some suitable criterion.
2.2 Conditioning of Stiffness Matrix
The Galerkin approximation entails the solution of a linear system of the form (4). One of our goals is to obtain bounds on the growth of the condition number of the stiffness matrix B in terms of the number of degrees of freedom N and the mesh quantities h_max and h_min. The basic strategy is to determine positive quantities λ and Λ, depending on the partition, such that

  λ Σ_{k∈N} α_k² ≲ α^T B α ≲ Λ Σ_{k∈N} α_k²   for all α ∈ R^N.   (7)

Table 1: One-sided bounds on the extreme eigenvalues of the stiffness matrix B constructed from the standard nodal basis functions.
This equivalence yields bounds on the actual minimum eigenvalue λ_min(B) and maximum eigenvalue λ_max(B), since

  λ ≲ λ_min(B)   and   λ_max(B) ≲ Λ.

Consequently, the ℓ₂-condition number of the matrix may be bounded as

  cond(B) = λ_max(B)/λ_min(B) ≲ Λ/λ.

For the purposes of analysis, it is convenient to reformulate (7) in terms of functions from the Galerkin subspace X, by defining an isomorphism R^N ∋ α ↦ v ∈ X using the rule

  v = Σ_{k∈N} α_k φ_k.   (8)

The assumptions on the bilinear form and the properties of the basis functions imply that

  B(v, v) ≃ ‖v‖²_{H̃^m(Γ)}   and   α^T B α = B(v, v).

Therefore, the task of establishing (7) is equivalent to determining positive quantities λ and Λ such that:

  λ Σ_{k∈N} α_k² ≲ ‖v‖²_{H̃^m(Γ)} ≲ Λ Σ_{k∈N} α_k².

In particular, such an estimate also gives bounds on the behaviour of the largest and smallest eigenvalues. A summary of the results obtained in Section 5, concerning the behaviour of the eigenvalues, is given in Table 1. The results indicate that the smallest eigenvalue decreases according to the size h_min of the smallest element, while the largest eigenvalue decreases according to the size h_max of the largest element. Overall, this means that the ℓ₂-condition number of the stiffness matrix is dangerously sensitive to the global mesh ratio h_max/h_min.
2.3 Preconditioning by Diagonal Scaling
One can attempt to control the growth of the condition number by means of a preconditioner. Let D be the diagonal matrix formed from the entries on the leading diagonal of B (i.e. [D]_{kk} = B(φ_k, φ_k)). It is simple to use D as a preconditioner for B in an iterative solver, since this only entails scaling the residual by D^{−1} at each iteration. The effectiveness of the preconditioner depends on the condition number of the diagonally scaled, or preconditioned, matrix

  B′ = D^{−1/2} B D^{−1/2}.   (10)

Of course, this matrix is never actually constructed in practice. The goal now is to obtain bounds on the growth of the condition number of the diagonally scaled matrix B′ in terms of the number of degrees of freedom N and properties of the partition. As before, the basic strategy is to determine positive quantities λ′ and Λ′ such that

  λ′ Σ_{k∈N} α_k² ≲ α^T B′ α ≲ Λ′ Σ_{k∈N} α_k²   for all α ∈ R^N,

or equivalently

  λ′ Σ_{k∈N} α_k² B(φ_k, φ_k) ≲ α^T B α ≲ Λ′ Σ_{k∈N} α_k² B(φ_k, φ_k).

These estimates may be reformulated by using the isomorphism given in (8), with B(φ_k, φ_k) ≃ ‖φ_k‖²_{H̃^m(Γ)}, reducing the task to determining positive quantities λ′ and Λ′ such that

  λ′ Σ_{k∈N} ‖α_k φ_k‖²_{H̃^m(Γ)} ≲ ‖v‖²_{H̃^m(Γ)} ≲ Λ′ Σ_{k∈N} ‖α_k φ_k‖²_{H̃^m(Γ)}.   (11)

Table 2: One-sided bounds on the extreme eigenvalues of the diagonally scaled stiffness matrix B′.

Table 2 summarizes our results, proved in Section 5, concerning the growth or decay of the extreme eigenvalues of the diagonally scaled stiffness matrix. It will be observed that the effect of the preconditioner is to remove essentially the factors involving the extreme mesh sizes h_max and h_min from the bounds on the eigenvalues, and hence from the condition number of the original stiffness matrix. In other words, a simple diagonal scaling restores the growth of the condition number to the same order as would be observed on a uniform mesh. The numerical results reported in Section 3 indicate that our one-sided bounds are usually, but not always (see Tables 7, 8 and 12), achieved in practice for realistic problems and meshes. It appears that the bounds cannot be improved (except perhaps for some of the logarithmic factors) unless one imposes additional restrictions on the mesh.
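As a quick numerical check of the kind reported in Section 3, one can form the diagonally scaled matrix and compare ℓ₂ condition numbers directly (a sketch; B here stands for any symmetric positive-definite stiffness matrix):

```python
import numpy as np

def cond2(M):
    # l2 condition number of a symmetric positive-definite matrix.
    eigenvalues = np.linalg.eigvalsh(M)   # ascending order
    return eigenvalues[-1] / eigenvalues[0]

def diagonally_scaled(B):
    """Return B' = D^{-1/2} B D^{-1/2} with D = diag(B).

    In an iterative solver one would never form B' explicitly;
    this is only for measuring the effect of the preconditioner.
    """
    d = 1.0 / np.sqrt(np.diag(B))
    return B * np.outer(d, d)

# Example: an SPD matrix badly scaled to mimic widely differing element sizes.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)); B = A @ A.T + 50 * np.eye(50)
S = np.diag(10.0 ** rng.uniform(-3, 3, 50)); B = S @ B @ S
print(cond2(B), cond2(diagonally_scaled(B)))
```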
Theorem 1. Let cond(B) and cond(B′) denote the ℓ₂-condition numbers of the stiffness matrix and of the diagonally scaled stiffness matrix (10), respectively.
1. If |2m| < d, then

  cond(B) ≲ N^{2|m|/d} (h_max/h_min)^{d−2m}   and   cond(B′) ≲ N^{2|m|/d}.

2. If 2m = d, then the corresponding bounds for cond(B) and cond(B′) hold with an additional factor |log h_min|.
3. If 2m = −d, then, for h_max sufficiently small, the corresponding bounds again hold with an additional factor |log h_min|.
Here h_max and h_min are respectively the diameters of the largest and smallest elements in the partition, and N is the number of degrees of freedom in the Galerkin subspace X.

Proof. The results follow immediately from the above discussion, the norm equivalences and (11), the inequality (25), and Lemmas 11, 12, 13 and 14; cf. Tables 1 and 2.
3 Numerical Examples
We illustrate the general theory by considering some weakly singular (m = −1/2) and hypersingular (m = +1/2) integral equations on various boundaries Γ, with d = 1 or d = 2.
These boundary integral equations were discretized on uniform and non-uniform meshes using varying numbers of degrees of freedom N. The functions in the Galerkin subspace X were piecewise-constant in the case of the weakly singular equations, and piecewise-linear in the case of the hypersingular equations. We computed the extreme eigenvalues λ_min and λ_max, and also the condition number λ_max/λ_min, of the stiffness matrix B and of the diagonally scaled stiffness matrix B′. The numerical values of these quantities, along with their apparent growth or decay exponents, are given in the tables that follow, and compared with our theoretical bounds from Tables 1 and 2 and from Theorem 1.
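The "apparent exponents" in the tables can be computed from successive refinements as log-log slopes (a sketch; `Ns` and `values` are hypothetical arrays of degrees of freedom and measured eigenvalues or condition numbers):

```python
import numpy as np

def apparent_exponents(Ns, values):
    """Observed growth exponent p of values ~ N^p between successive meshes.

    p is estimated as log(v_{i+1}/v_i) / log(N_{i+1}/N_i), the slope on a
    log-log plot, mirroring the exponent columns of Tables 3-13.
    """
    Ns, values = np.asarray(Ns, float), np.asarray(values, float)
    return np.log(values[1:] / values[:-1]) / np.log(Ns[1:] / Ns[:-1])
```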
Three different boundaries Γ were considered:
Boundary 1: the L-shaped polygon in R² having vertices (0, 0), (0, 1), …;
Boundary 2: the open curve (−1, 1) × {0} in R²;
Boundary 3: the screen (−1, 1) × (−1, 1) × {0} in R³.
3.1 Weakly Singular Equations
The weakly singular equations arise when boundary integral methods are used to solve the Dirichlet problem for the Laplacian in domains in R² or R³ with boundaries defined as above. The integral equations take the forms shown below.

Boundary 1:  (1/2π) ∫_Γ u(y) log(5/|x−y|) dσ_y = f(x),  x ∈ Γ.   (12)
Boundary 2:  (1/2π) ∫_Γ u(y) log(1/|x−y|) dσ_y = f(x),  x ∈ Γ.   (13)
Boundary 3:  (1/4π) ∫_Γ u(y) / |x−y| dσ_y = f(x),  x ∈ Γ.   (14)

If B(·, ·) is the bilinear form associated with the operator on the left-hand side of each of (12), (13), or (14), then it is shown in [6, 12] that

  B(v, v) ≃ ‖v‖²_{H̃^{−1/2}(Γ)}.

Hence, our assumption (1) is satisfied with m = −1/2. The factor 5 appearing in (12) ensures that the bilinear form is positive definite; see [10].
3.2 Hypersingular Equations
First kind integral equations with hypersingular kernels arise when boundary integral methods are used to solve the Neumann problem for the Laplacian in domains in R² or R³. These equations take the forms shown below.

Boundary 1:  −(1/2π) (∂/∂ν_x) ∫_Γ u(y) (∂/∂ν_y) log|x−y| dσ_y + 4 ∫_Γ u(y) dσ_y = f(x),  x ∈ Γ.   (15)
Boundary 2:  −(1/2π) (∂/∂ν_x) ∫_Γ u(y) (∂/∂ν_y) log|x−y| dσ_y = f(x),  x ∈ Γ.   (16)
Boundary 3:  −(1/4π) (∂/∂ν_x) ∫_Γ u(y) (∂/∂ν_y) (1/|x−y|) dσ_y = f(x),  x ∈ Γ.   (17)

If B(·, ·) is the bilinear form associated with the operator on the left-hand side of each of (15), (16), or (17), then it is shown in [6, 15] that

  B(v, v) ≃ ‖v‖²_{H̃^{1/2}(Γ)}.

Once again, our assumption (1) is satisfied, now with m = +1/2. The second term on the left-hand side of equation (15) ensures that the bilinear form is positive-definite and not just positive-semidefinite; see [10].
3.3 Results with Boundary 1
For the weakly singular equation (12) on the L-shaped boundary, we have d = 1 and m = −1/2, so for the unscaled stiffness matrix,

  cond(B) ≲ N (h_max/h_min)² |log h_min|,   (18)

but after diagonal scaling,

  cond(B′) ≲ N |log h_min|.   (19)

Table 3 gives results for a quasi-uniform mesh (Figure 1), and Table 4 for a non-uniform mesh (Figure 2) that is refined towards the re-entrant corner at (0, 0). In the latter case, diagonal scaling leads to a very dramatic reduction in the condition number.
Turning to the hypersingular equation (15), we have d = 1 and m = +1/2, so that

  cond(B) ≲ N |log h_min|   and   cond(B′) ≲ N |log h_min|.   (20)

Indeed, the numerical results in Tables 5 and 6 show that, both for the quasi-uniform and non-uniform meshes, diagonal scaling has little effect.
3.4 Results with Boundary 2
For the weakly singular equation (13) on the open curve, we again have d = 1 and m = −1/2, giving the bounds (18) and (19). For a uniform mesh, and for a graded mesh, we observed the same growth and decay rates as for the closed (L-shaped) curve, and do not tabulate the results here. We also investigated a geometric mesh grading, for which h_min decays exponentially with N. Since log h_min ≃ −N, we see from (18) and (19) that cond(B) can grow exponentially with N, whereas

  cond(B′) ≲ N².   (21)

In fact, the numerical results shown in Table 7 suggest that our theoretical upper bounds are not attained for this mesh.
In the case of the hypersingular equation (16) on the open curve, our results using a uniform mesh, or a graded mesh, gave the same growth and decay rates as on the closed curve, and again are not reported here. Table 8 shows our results using the geometric mesh, for which (20) gives

  cond(B) ≲ N²   and   cond(B′) ≲ N².   (22)

This time, the lower bounds are not attained: we observe that λ_min(B) and λ_min(B′) decay only very slowly, leading to very slow growth of cond(B) and cond(B′).
3.5 Results with Boundary 3
For the weakly singular equation (14) on the screen, we have d = 2 and m = −1/2, and our theory gives

  cond(B) ≲ N^{1/2} (h_max/h_min)³   and   cond(B′) ≲ N^{1/2}.   (23)

Table 9 shows our numerical results for a uniform mesh (Figure 3), and Table 10 for a non-uniform mesh (Figure 4) that has been refined in a neighbourhood of the corner (−1, 1).
Finally, in the case of the hypersingular equation (17) on the screen, we have d = 2 and m = +1/2, so that

  cond(B) ≲ N^{1/2} (h_max/h_min)   and   cond(B′) ≲ N^{1/2}.   (24)

Our results for a uniform mesh (Figure 3), given in Table 11, are as expected, but those for a non-uniform mesh (Figure 4), given in Table 12, contain one surprise. The minimum eigenvalue λ_min(B) of the unscaled matrix appears to behave like N^{−1}, whereas our lower bound is λ ≳ N^{−3/2}. As a consequence, cond(B) grows at the same rate as cond(B′). However, all of the results shown in Table 13, for a different non-uniform mesh (Figure 5), achieve the one-sided bounds from our theory.
Table 3: Weakly singular integral equation on L-shaped boundary with uniform mesh. Theoretical bounds (18) and (19).
Table 4: Weakly singular integral equation on L-shaped boundary with non-uniform mesh. Theoretical bounds (18) and (19).
Table 5: Hypersingular integral equation on L-shaped boundary with quasi-uniform mesh. Theoretical bounds (20).
Table 6: Hypersingular integral equation on L-shaped boundary with non-uniform mesh. Theoretical bounds (20).
Table 7: Weakly singular integral equation on open curve with geometric mesh. Theoretical bounds (21).
Table 8: Hypersingular integral equation on open curve with geometric mesh. Theoretical bounds (22).
Table 9: Weakly singular integral equation on a screen with uniform mesh. Theoretical bounds (23).
Table 10: Weakly singular integral equation on a screen with non-uniform mesh.
Table 11: Hypersingular integral equation on a screen with uniform mesh. Theoretical bounds (24): λ_min ≳ N^{−1/2}, λ_max ≲ 1, cond(B) ≲ N^{1/2}.
Table 12: Hypersingular integral equation on a screen with non-uniform mesh. Theoretical bounds: λ_min ≳ N^{−3/2}, λ_max ≲ N^{−1/2}, cond(B) ≲ N.
Table 13: Hypersingular integral equation on a screen with a different non-uniform mesh. Theoretical bounds (24).
Figure 1: L-shaped boundary.
Figure 2: L-shaped boundary with non-uniform mesh, refined around the re-entrant corner.
Figure 3: Uniform mesh on the screen.
Figure 4: Successive meshes in a sequence of non-uniform meshes on the screen, with local refinement at one corner.
Figure 5: Successive meshes in a different sequence of non-uniformly refined meshes on the screen.
4 Technical Preliminaries
4.1 Sobolev Spaces and Norms
For s ∈ R, we define the Sobolev space H^s(R^d) in the usual way, via the Fourier transform (see the proof of Lemma 5). Given a Lipschitz domain Ω ⊂ R^d, we put

  H^s(Ω) = { u : u = U|_Ω for some U ∈ H^s(R^d) }

and

  H̃^s(Ω) = { u ∈ H^s(R^d) : supp u ⊆ Ω̄ },

and equip these spaces with the norms

  ‖u‖_{H^s(Ω)} = inf_{u=U|_Ω} ‖U‖_{H^s(R^d)}   and   ‖u‖_{H̃^s(Ω)} = ‖u‖_{H^s(R^d)}.

These spaces satisfy the duality relations

  (H^s(Ω))′ = H̃^{−s}(Ω)   and   (H̃^s(Ω))′ = H^{−s}(Ω),

with respect to the usual extension of the bilinear pairing

  ⟨u, v⟩_Ω = ∫_Ω u(x) v(x) dx.

Also, it is clear that H̃^s(Ω) ⊆ H^s(Ω) for s ≥ 0.
H s for
For s ? 0, an equivalent norm for H
s(\Omega\Gamma is
where the seminorm j
(\Omega\Gamma is defined, if
jffj=s
and if
The Sobolev spaces also have the interpolation properties
and e
for real s 0 Explicitly, we use
as the interpolation norm
Z 1jt \Gamma' K(t; u)j 2 dt
where the K-functional is defined, for
When the domain Ω is rescaled, different equivalent norms for H^s(Ω) or H̃^s(Ω) might scale differently. We therefore fix a particular family of norms, denoted by |||u|||_{H^s(Ω)} and |||u|||_{H̃^s(Ω)}, that will be used whenever we need estimates in which the constants are independent of the domain(s) involved. These norms will be defined only for |s| ≤ r, and only when Ω is bounded. Firstly, we set

  |||u|||_{H^0(Ω)} = |||u|||_{H̃^0(Ω)} = ‖u‖_{L₂(Ω)},

and

  |||u|||²_{H^r(Ω)} = ‖u‖²_{L₂(Ω)} + |u|²_{H^r(Ω)}   and   |||u|||²_{H̃^r(Ω)} = |u|²_{H^r(Ω)}.

The latter is a norm by virtue of Poincaré's inequality, because the functions in H̃^r(Ω) vanish, together with their derivatives of order less than r, on the boundary of Ω. For 0 < s < r, we define the norms by interpolation, i.e.,

  |||u|||_{H^s(Ω)} = ‖u‖_{[H^0(Ω), H^r(Ω)]_{s/r}}   and   |||u|||_{H̃^s(Ω)} = ‖u‖_{[H̃^0(Ω), H̃^r(Ω)]_{s/r}},

with the K-functionals using the |||·|||-norms. Finally, the negative-order norms are defined by duality:

  |||u|||_{H^{−s}(Ω)} = sup_{0≠v∈H̃^s(Ω)} ⟨u, v⟩_Ω / |||v|||_{H̃^s(Ω)}   and   |||u|||_{H̃^{−s}(Ω)} = sup_{0≠v∈H^s(Ω)} ⟨u, v⟩_Ω / |||v|||_{H^s(Ω)}.
The following inequalities are taken from the thesis of von Petersdorff [14]. A modified proof is included here for the sake of completeness.

Theorem 2. Let Ω₁, …, Ω_N be a partitioning of a bounded Lipschitz domain Ω into non-overlapping Lipschitz domains. For −r ≤ s ≤ r,

  Σ_j |||u|_{Ω_j}|||²_{H^s(Ω_j)} ≲ |||u|||²_{H^s(Ω)}   for u ∈ H^s(Ω),

and

  |||Σ_j u_j|||²_{H̃^s(Ω)} ≲ Σ_j |||u_j|||²_{H̃^s(Ω_j)}   for u_j ∈ H̃^s(Ω_j).
Proof. Introduce the product spaces

  Π_s = Π_j H^s(Ω_j)   and   Π̃_s = Π_j H̃^s(Ω_j),

with norms given by

  |||u|||²_{Π_s} = Σ_j |||u_j|||²_{H^s(Ω_j)}   and   |||u|||²_{Π̃_s} = Σ_j |||u_j|||²_{H̃^s(Ω_j)},

where u = (u₁, u₂, …). For product spaces, the K-functional satisfies

  K(t, u)² ≃ Σ_j K(t, u_j)²,

so interpolation commutes with the formation of products, and in particular

  [Π_{s₀}, Π_{s₁}]_θ = Π_s   for s = (1−θ)s₀ + θs₁.

Moreover,

  (Π_s)′ = Π̃_{−s},

with the duality pairing ⟨u, v⟩ = Σ_j ⟨u_j, v_j⟩_{Ω_j}.

Now consider the restriction and sum operators R and S, defined by

  Ru = (u|_{Ω_1}, u|_{Ω_2}, …)   and   Su = Σ_j u_j.

For s = 0 and s = r, we clearly have

  |||Ru|||_{Π_s} ≲ |||u|||_{H^s(Ω)}   and   |||Su|||_{H̃^s(Ω)} ≲ |||u|||_{Π̃_s}.

Hence, by interpolation, these bounds hold for all 0 ≤ s ≤ r. In fact, these estimates hold also for −r ≤ s < 0 by duality, because ⟨Ru, v⟩ = ⟨u, Sv⟩_Ω.
If |s| ≤ r, then the Sobolev spaces of order s are invariant under C^{r−1,1} changes of coordinates, allowing us to define H^s(Γ) and H̃^s(Γ) via a partition of unity subordinate to an atlas of local coordinate patches. Thus, the Sobolev norms on Γ are defined ultimately in terms of Sobolev norms on R^d and, when Γ is an open surface, on the half-space R^d₊.
In certain circumstances, Sobolev spaces may be continuously embedded in L_p-type spaces, and vice versa.

Theorem 3.
1. If 0 ≤ 2s < d and 2 ≤ p ≤ 2d/(d−2s), then ‖v‖_{L_p(Γ)} ≲ ‖v‖_{H^s(Γ)}.
2. If 2s = d and 2 ≤ p < ∞, then ‖v‖_{L_p(Γ)} ≲ √p ‖v‖_{H^s(Γ)}, where the constant is independent of p.
3. If 0 ≤ 2s < d and q = 2d/(d+2s), then ‖v‖_{H^{−s}(Γ)} ≲ ‖v‖_{L_q(Γ)}.
4. If 2s = d and 1 < q ≤ 2, then ‖v‖_{H^{−s}(Γ)} ≲ √(q/(q−1)) ‖v‖_{L_q(Γ)}, where the constant is independent of q.
Proof.
1. This follows immediately from the Sobolev embedding (see [1]) H^s(Γ) ↪ L_p(Γ) for p up to the limiting exponent 2d/(d−2s).
2. The following estimate (see, for example, [13, p. 12])

  ‖v‖_{L_p(R^d)} ≲ √p ‖v‖_{H^{d/2}(R^d)}

holds for any p ≥ 2, with constant independent of p. The analogous result holds for the domain Γ, thanks to its construction in terms of local coordinate patches in R^d.
3. Applying Hölder's inequality and the result in Part 1,

  ⟨v, w⟩_Γ ≤ ‖v‖_{L_q(Γ)} ‖w‖_{L_p(Γ)} ≲ ‖v‖_{L_q(Γ)} ‖w‖_{H^s(Γ)},

and the result is proved.
4. The result follows from the estimate in Part 2, in a similar manner to the proof of Part 3.
4.2 Scaling Properties of Norms
It will be useful to consider the behaviour of the Sobolev norms under a rescaling of the bounded domain Ω:

  Ω_λ = { λx : x ∈ Ω },   u_λ(x) = u(x/λ) for x ∈ Ω_λ.   (28)

Lemma 4. For s ∈ [0, r]:
1. if u ∈ H̃^s(Ω), then |||u_λ|||²_{H̃^s(Ω_λ)} ≲ λ^{d−2s} |||u|||²_{H̃^s(Ω)};
2. if u ∈ H^s(Ω), then |||u_λ|||²_{H^s(Ω_λ)} ≲ λ^{d−2s} |||u|||²_{H^s(Ω)};
3. if u ∈ H^{−s}(Ω), then |||u_λ|||²_{H^{−s}(Ω_λ)} ≲ λ^{d+2s} |||u|||²_{H^{−s}(Ω)};
4. if u ∈ H̃^{−s}(Ω), then |||u_λ|||²_{H̃^{−s}(Ω_λ)} ≲ λ^{d+2s} |||u|||²_{H̃^{−s}(Ω)}.

Proof. Since ‖u_λ‖²_{L₂(Ω_λ)} = λ^d ‖u‖²_{L₂(Ω)} and ∂^α u_λ = λ^{−|α|} (∂^α u)_λ, Parts 1 and 2 hold for s = 0 and s = r, and hence by interpolation also for 0 < s < r. Parts 3 and 4 then follow by duality, thanks to ⟨u_λ, v_λ⟩_{Ω_λ} = λ^d ⟨u, v⟩_Ω.
Analogous results hold for Sobolev spaces defined on the whole of R^d. For instance, if u ∈ H^s(R^d), then

  ‖u_λ‖²_{H^s(R^d)} ≲ λ^{d−2s} ‖u‖²_{H^s(R^d)},

or, if u ∈ H^{−s}(R^d), then

  ‖u_λ‖²_{H^{−s}(R^d)} ≲ λ^{d+2s} ‖u‖²_{H^{−s}(R^d)}.

Sharper estimates than these are possible with additional assumptions on u. The next lemma gives an improved upper bound on the norm in H^{−s}.

Lemma 5. If u ∈ L₁(R^d) ∩ H^{−s}(R^d) with s ≥ 0, then

  ‖u_λ‖²_{H^{−s}(R^d)} ≲ λ^{d+2s} ‖u‖²_{H^{−s}(R^d)} + A_λ ‖u‖²_{L₁(R^d)},

where

  A_λ ≃ λ^{d+2s} for 0 ≤ 2s < d,   A_λ ≃ λ^{2d} (1 + |log λ|) for 2s = d,   A_λ ≃ λ^{2d} for d < 2s < ∞.

Likewise, if u ∈ L₁(R^d₊) ∩ H̃^{−s}(R^d₊), then

  ‖u_λ‖²_{H̃^{−s}(R^d₊)} ≲ λ^{d+2s} ‖u‖²_{H̃^{−s}(R^d₊)} + A_λ ‖u‖²_{L₁(R^d₊)}.
Proof. Denote the Fourier transform of u by

  û(ξ) = ∫_{R^d} e^{−i2πξ·x} u(x) dx   for ξ ∈ R^d,

so that

  ‖u‖²_{H^{−s}(R^d)} = ∫_{R^d} (1 + |ξ|²)^{−s} |û(ξ)|² dξ.

Since û_λ(ξ) = λ^d û(λξ), the substitution η = λξ gives

  ‖u_λ‖²_{H^{−s}(R^d)} = λ^d ∫_{R^d} (1 + |η/λ|²)^{−s} |û(η)|² dη,   (30)

and, for |η| ≥ 1 and 0 < λ ≤ 1,

  (1 + |η/λ|²)^{−s} ≲ λ^{2s} (1 + |η|²)^{−s},

so that the part of the integral over |η| ≥ 1 is bounded by a multiple of λ^{d+2s} ‖u‖²_{H^{−s}(R^d)}. We have |û(η)| ≤ ‖u‖_{L₁(R^d)} for all η ∈ R^d, and thus

  λ^d ∫_{|η|≤1} (1 + |η/λ|²)^{−s} |û(η)|² dη ≤ A_λ ‖u‖²_{L₁(R^d)},

leading to the desired upper bound for ‖u_λ‖²_{H^{−s}(R^d)}, with

  A_λ = λ^d ∫_{|η|≤1} (1 + |η/λ|²)^{−s} dη ≃ λ^{2d} ∫₀^{1/λ} (1 + t²)^{−s} t^{d−1} dt.   (29)

Finally, if u ∈ L₁(R^d₊) ∩ H̃^{−s}(R^d₊), and ũ ∈ H^{−s}(R^d) denotes the extension of u by zero, then

  ‖u_λ‖²_{H̃^{−s}(R^d₊)} = ‖ũ_λ‖²_{H^{−s}(R^d)} ≲ λ^{d+2s} ‖ũ‖²_{H^{−s}(R^d)} + A_λ ‖ũ‖²_{L₁(R^d)} = λ^{d+2s} ‖u‖²_{H̃^{−s}(R^d₊)} + A_λ ‖u‖²_{L₁(R^d₊)}.
We now consider lower bounds on the norm in H^{−s}.

Lemma 6. There exists ε > 0 such that if u ∈ L₁(R^d) ∩ H^{−s}(R^d) satisfies

  | ∫_{R^d} u(x) dx | ≥ ε ∫_{R^d} |u(x)| dx > 0,   (31)

then

  ‖u_λ‖²_{H^{−s}(R^d)} ≳ A_λ ( ∫_{R^d} |u(x)| dx )²,

where A_λ is given in (29). The same is true when R^d is replaced by the half-space R^d₊.

Proof. Let u ∈ L₁(R^d) ∩ H^{−s}(R^d), and assume that (31) holds. By the mean value theorem,

  e^{−i2πη·x} − 1 = −i2πη·x ∫₀¹ e^{−i2πtη·x} dt,

so that |û(η) − û(0)| is controlled by |η|, and if we put ρ̄ ≃ ε, then

  |û(η)| ≥ ½ ε ∫_{R^d} |u(x)| dx   for |η| ≤ ρ̄.

Hence, from (30),

  ‖u_λ‖²_{H^{−s}(R^d)} ≥ λ^d ∫_{|η|≤ρ̄} (1 + |η/λ|²)^{−s} |û(η)|² dη ≳ A_λ ( ∫_{R^d} |u(x)| dx )²,

where we used

  λ^d ∫_{|η|≤ρ̄} (1 + |η/λ|²)^{−s} dη ≃ λ^{2d} ∫₀^{ρ̄/λ} (1 + t²)^{−s} t^{d−1} dt ≃ A_λ,

since ∫₀^R (1 + t²)^{−s} t^{d−1} dt grows like R^{d−2s} for 2s < d, like log R for 2s = d, and is bounded for 2s > d; this implies the desired lower bound.
To obtain the estimate for the half-space R d
, we use the Seeley extension oper-
ator, setting
R, and where
so that 1
see [4, p. 64-66]. Since
Z
R d
Z
Z
R d
and
Z
R d
Z
Z
we see that if u satisfies
Z
Z
then Z
R d
Z
R d
fi fi . Simple manipulations show
so the series is alternating, with 2 \Gamma(k+1) j-
implying that f
(R d
(R d ) is bounded, and (Eu)
E(u - ), there is a -
H \Gammas (R d
H \Gammas (R d
H \Gammas (R d )
Z
R d
Z
R d
To improve the lower bound in the H s -norm, we observe that the seminorm
defined in (26) and (27) satisfies
for any open
set\Omega ' R d (bounded or unbounded).
the following is true.
(R d ) for (R d ),
and
(R d
(R d
(R d
4.3 Norms of Nodal Basis Functions
It is possible to obtain estimates for Sobolev norms of the nodal basis function ' k
in terms of the average size h k of the elements in the support \Gamma k of ' k .
Lemma 8 If ' k , k 2 N , is a nodal basis function, then
e
and
with constants independent of \Gamma k and p.
Proof. Follows immediately from Parts 1. and 3. of Lemma 4. The final result is
a simple computation.
Examining the proof of this result suggests one might also obtain useful estimates
by making use of Parts 2. and 4. of Lemma 4. However, the resulting estimates
would be sub-optimal as shown by:
Theorem 9 Let ' k , k 2 N , be a nodal basis function.
1. If 2s ? \Gammad, then
e
2. If sufficiently small,
e
3. If 2s ! \Gammad, then for h k sufficiently small,
e
In parts 2 and 3, we assume that the condition (6) holds.
Proof. If d, then we may apply part 1 of Theorem 3, followed by
Lemma 8, to obtain the lower bound
In the critical case d, we can use part 2 of Theorem 3, with
to obtain
Lp (\Gamma) 'p
but this estimate is not sharp. In fact, Lemma 7 shows that k' k k 2
k for
all s - 0. Theorem 2 and Lemma 8 give the sharp upper bound
e
e
e
and part 1, for s - 0, follows from (25).
then we can apply part 3 of Theorem 3, followed by Lemma 8,
to obtain the upper bound
e
In the critical case \Gammad, we instead use part 4 of Theorem 3, choosing q 2 (1; 2]
such that 1
to obtain
e
ph 2d
Lemma 5 provides an alternative proof of these two estimates, as well as showing
that
e
k for 2s ! \Gammad.
Theorem 2 and Lemma 8 yield the lower bound
but this estimate is not sharp when 2s - \Gammad. Indeed, we see from Lemma 6 that
for h k sufficiently small,
ae h 2d
The results for negative s follow at once from these bounds along with (25).
5 Proof of the Main Results
The earlier remarks indicate the key role played by decompositions of the form (8):
where v belongs to the Galerkin subspace X and v k is of the form ff k ' k (with
The assumptions on the partition P and the construction of the nodal basis
functions mean that the following properties are valid:
ffl Let K 2 P be any element, define
and let
Then M is uniformly bounded over the family of partitions.
ffl The indexing set N may be partitioned into disjoint subsets N
preserving the property
where L is uniformly bounded over the family of partitions.
For instance, if \Gamma is an open arc and the Galerkin subspace consists of continuous
piecewise polynomials of degree
be decomposed as in (32). Then, for
Proof. The left hand bound is proved by making use of the property (34) as follows
k2N l
and then by H-older's inequality and the fact that M in (33) is uniformly bounded,
Integrating over x 2 K and then summing over all K 2 P completes the proof.
Lemma 11 Let v 2 X be decomposed as in (32).
1. If 0 - 2m ! d then
and X
e
2. If
min )j
and X
e
min )j
Proof. Let k 2 N . By Lemma 8,
where the constants are independent of p, and by Theorem 9,
e
Hence, summing over the degrees of freedom leads to
and X
e
1. Suppose 2m ! d. Choosing
min
and X
e
using H-older's Inequality and Lemma 10,
Hence, with the aid of Theorem 3 part 1,
and X
e
2. Suppose d. Then (39) and (40) read
and X
e
For
Applying Theorem 3 part 2 and Lemma 10 gives
so that
Choosing
min )j) allows the first factor to be bounded as follows
min )j
and the proof is complete.
Lemma 12 Let v 2 X be decomposed as in (32). If 0 - 2m - d then
e
and
e
Proof. By the Cauchy-Schwarz inequality and Theorem 2,
e
k2N l
k2N l
k2N l
e
Theorem 9 imply that, if 0 - 2m - d, then
e
and
e
The results are obtained by summing over k 2 N .
be decomposed as in (32). If \Gammad - 2m - 0 then
Moreover,
e
Proof. Let k 2 N and choose K 2 P such that x k 2 K. By Lemma 4 part 3,
K is the reference element and - Summing over all k 2 N yields
min
K2P
for \Gammad - 2m - 0. The first estimate now follows, since, applying Theorem 2,
K2P
with the aid of Theorem 9 and (45),
e
e
again using Theorem 9 and (45),
e
e
Summing each of these estimates over k 2 N , and applying Theorem 2 as above,
completes the proof.
Lemma 14 Let v 2 X be decomposed as in (32).
1. If \Gammad ! 2m - 0 then
e
and
e
2. If
e
and
e
min )j
Proof.
1. Suppose \Gammad ! 2m - 0. Applying Theorem 3 part 3 and Lemma 10 gives
e
inequality,
and hence,
e
Applying Lemma 8 reveals that
and so, on the one hand,
and, on the other hand, by Theorem 9,
2. Suppose \Gammad. Then applying Theorem 3 part 4 and Lemma 10 gives, for
any q 2 (1; 2],
e
where the constants are independent of q. To obtain the first result, apply H-older's
inequality to obtain
and therefore,
e
choose
e
The second result is obtained using a variation of the same argument. By H-older's
inequality,
then the first factor is bounded as
and, using Theorem 9, the second factor may be bounded by
Therefore,
e
min )j) and choose
(Nh d
min )j
and the result follows immediately.
--R
On the conditioning of finite element equations with highly refined meshes.
Introduction to the Theory of Linear Partial Differential Equations
The Finite Element Method for Elliptic Problems.
Boundary integral equations for mixed boundary value problems in polygonal domains and Galerkin approximation.
A finite element method for some integral equations of the first kind.
The Aubin-Nitsche lemma for integral equations
The approximation of closed manifolds by triangulated manifolds and the triangulation of closed manifolds.
A preconditioning strategy for boundary element Galerkin methods.
Curved finite element methods for the solution of singular integral equations on surfaces in R 3
An augmented Galerkin procedure for the boundary integral method applied to two-dimensional screen and crack problems
Partial Differential Equations III
Randwertprobleme der Elastizit-atstheorie f?r Polyeder Sin- gularit-aten und Approximation mit Randelementmethoden
A hypersingular boundary integral method for two-dimensional screen and crack problems
--TR
--CTR
N. Heurer , M. E. Mellado , E. P. Stephan, hp-adaptive two-level methods for boundary integral equations on curves, Computing, v.67 n.4, p.305-334, December 2001
T. Tran , E. P. Stephan, Two-level additive Schwarz preconditioners for the h-p version of the Galerkin boundary element method for 2-d problems, Computing, v.67 n.1, p.57-82, July 2001 | diagonal scaling;condition numbers;boundary element method;preconditioning |
333369 | Modeling a Hardware Synthesis Methodology in Isabelle. | Formal Synthesis is a methodology developed at the university of Kent for combining circuit design and verification, where a circuit is constructed from a proof that it meets a given formal specification. We have reinterpreted this methodology in ISABELLES theory of higher-order logic so that circuits are incrementally built during proofs using higher-order resolution. Our interpretation simplifies and extends Formal Synthesis both conceptually and in implementation. It also supports integration of this development style with other proof-based synthesis methodologies and leads to techniques for developing new classes of circuits, e.g., recursive descriptions of parametric designs. | Introduction
Verification by formal proof is time intensive and this is a burden in bringing
formal methods into software and hardware design. One approach to reducing the
verification burden is to combine development and verification by using a calculus
where development steps either guarantee correctness or, since proof steps are in
parallel to design steps, allow early detection of design errors. We present such
an approach here, using resolution to synthesize circuit designs during proofs of
their correctness. Our approach is based on modeling a particular methodology
for hierarchical design development within a synthesis by resolution framework.
Our starting point is a novel methodology for hardware synthesis, called Formal
Synthesis, proposed by the Veritas group at Kent [7]. In Formal Synthesis,
one starts with a design goal, which specifies the behavioral properties of a circuit
to be constructed, and interactively refines the design using a small but powerful
set of techniques, which allows the developer to hierarchically decompose specifications
and introduce subdesigns and library components in a natural way.
Internally, each technique consists of a pair of functions: a subgoaling function
and a validation function. The former decomposes a specification Spec into sub-
specifications Spec i . The latter takes proofs that some Circ i achieve Spec i , and
constructs an implementation Circ with a proof that Circ achieves Spec. When
the refinement is finished, the system composes the validation functions that
were applied and this constructs a theorem that a (synthesized) circuit satisfies
the original design goal.
To make the above clearer, consider one of the simpler techniques, called
Split. The subgoaling function reduces a design goal Spec 1 - Spec 2 to two new
design goals, one for each conjunct. The validation function is based on the rule
which explains how to combine implementations that achieve the subgoals into
an implementation which achieves the original goal. This kind of top-down problem
decomposition is in the spirit of (LCF style) tactics and designing a good
set of techniques is analogous to designing an appropriate set of tactics for a
problem domain. However, tactics decompose a goal into subgoals from which
a validation proves the original goal, whereas with techniques, the validation
proves a different theorem altogether. Formal Synthesis separates design goals
and techniques from ordinary theorem-proving goals and tactics. This is a conceptual
separation: they are different sorts of entities with different semantics.
Moreover they are treated differently in the implementation too and Formal
Synthesis required extending the Veritas theorem prover itself.
We show how to reinterpret Formal Synthesis as deductive synthesis based on
higher-order resolution. To do this, we begin with the final theorem that the Formal
Synthesis validations should deliver: a circuit achieves a stated specification.
However the circuit is not given up front; instead it is named by a metavariable.
Proof proceeds by applying rules in Isabelle which correspond to Veritas
validation rules like (1). Because rule application uses resolution, the metavari-
able is incrementally instantiated to the synthesized circuit. Let us illustrate this
with the Split technique. Our goal (simplifying slightly) is to prove a theorem
like ?Circ!Spec, where the question mark means that Circ is a metavariable.
If Spec is a conjunction, Spec 1 - Spec 2 , then we can resolve ?Circ!Spec 1 - Spec 2
with the rule in (1). The result is a substitution, ?Circ =?Imp 1 -?Imp 2 , and the
subgoals ?Imp 1 !Spec 1 and ?Imp 2 !Spec 2 . Further refinement of these subgoals
will generate instances of the ?Imp i and hence ?Circ.
There are a number of advantages to this reinterpretation. It is conceptually
simple. We do away with the subgoal and validation functions and let resolution
construct circuits. It is simple to implement. No changes are required to the
Isabelle system; instead, we derive rules and program tactics in an appropriate
extension of Isabelle's theory of higher-order logic (HOL). Moreover, the reinterpretation
makes it easy to implement new techniques that are compatible with
the Formal Synthesis style of development. In the past, we have worked on rules
and tactics for the synthesis of logic programs [1] and these programs are similar
to circuits: both can be described as conjunctive combinations of primitive relations
where existential quantification is used to 'pass values'. By moving Formal
Synthesis into a similar theorem proving setting, we could adapt and apply many
of our tactics and rules for logic programming synthesis to circuit synthesis. For
example, rules for developing recursive programs yield new techniques for developing
parameterized classes of circuits. We also exhibit (cf. Section 6) that the
kind of rules used for resolution-based development in the Lambda system are
compatible with our reinterpretation.
Background
We assume familiarity with Isabelle [10] and higher-order logic. Space limitations
restrict us to reviewing only notation and a few essentials.
Isabelle is an interactive tactic-based theorem prover. Logics are encoded
in Isabelle's metalogic by declaring a theory, which is a signature and set of
axioms. For example, propositional logic might have a type F orm of formulae,
with constructors like - and a typical proof rule would be !!
(B =) A - B). Here =) and !! are implication and universal quantification
in Isabelle's metalogic. Outermost (meta-)quantifiers are often omitted and
iterated implication is written in a more readable list notation, e.g.,
A - B. These implicitly quantified variables are treated as metavariables, which
can be instantiated when applying the rule using higher-order unification. In our
work we use Isabelle's theory of higher-order logic, extended with theories of
sets, well founded recursion, natural numbers, and the like.
Isabelle supports proof construction by higher-order resolution. Given a proof
state with subgoal / and a proof rule we unify OE with /. If this
succeeds, then unification yields a substitution oe and the proof state is updated
by applying oe to it and replacing / with the subgoals oe(OE
unification is used to apply rules, the proof state itself may contain metavariables.
We use this to synthesize circuits during proofs. Note that a proof rule can
be read as an intuitionistic sequent where the OE i are the hypotheses. Isabelle's
resolution tactics apply rules in a way that maintains this illusion of working
with sequents and we will often refer to the OE i as assumptions.
Veritas [8] is a tactic based theorem prover similar to Isabelle and HOL,
but its higher-order logic is augmented with constructions from type theory,
e.g., standard type constructors such as (dependent) function space, product,
and subtype. When used to reason about hardware, one proves theorems that
relate circuits, Circ, to specifications, Spec, e.g.,
are the external port names. As is common, hardware is represented
relationally, where primitive constructors (e.g., transistors, gates, etc.)
are relations combined with conjunction and 'wired together' using existential
quantification [3]. The variables in p1 are typed, and the primary difference
between Veritas and similar systems is that one can use richer types in such
specifications. For example, if we are defining circuits operating over 8-bit words,
we would formalize this requirement on word-length in the types.
3 Formal Synthesis
Formal Synthesis is based on techniques. As previously indicated, each technique
combines a subgoaling function with a validation. The validations are executed
when a derivation is completed to build a proof that a circuit achieves an im-
plementation. Before giving the techniques we introduce some notation used
by Formal Synthesis. A design specification spec which is to be implemented is
written 2spec (2 is just a symbol; there is no relationship to modal logics). A
formula Thm to be proved is written as ' Thm. A term Tm to be demonstrated
well typed is written as 3Tm. A proof in Veritas is associated with a support
Name Subgoaling rule Validation rule
'Imp!Spec
Reveal 29dec: Spec
'(9dec: Imp)!(9dec:Spec)
Inside 2Circ b= -dec: Spec
ports
Library 2library part args
'library part args!library part args
Subdesign 2Spec
'(let c b
Design 2subdesign args
'subdesign args!subdesign args
Proof 'Thm \PiTm
'Thm 'Tm
Table
1. Veritas design techniques
signature, which contains the theory used (e.g., datatype definitions), definitions
of predicates, and the like. In Veritas, the signature can be extended dynamically
during proof. If a technique extends the signature of a goal Spec, then this
is written as [[extension]]Spec. Finally, the initial design goal must be of the form
Circ is the name assigned to the circuit.
There are eight techniques and these are adequate to develop any combinational
circuit in a structured hierarchical way. The subgoaling and validation
functions are both based on rules. The subgoaling rules should be read top-
down: to implement the goal above the line, solve all subgoals below the line.
The validation rules operate in the opposite direction: the goal below the line is
established from proofs of the goals above the line. These rules behave as follows:
Claim: The subgoaling function yields a new design goal Spec 0 and a subgoal
that this suffices for Spec. The validation is based on the transitivity of !.
Split: The subgoaling function decomposes a conjunctive specification into the
problem of implementing each conjunct. The validation constructs a design from
designs for the two subparts.
Reveal: Shifts internally quantified ports into the signature, allowing further
refinement.
Inside: Since the initial design goal is given as a lambda abstraction, a technique
is needed to remove this binding. To implement the circuit Circ means to
implement the specification Spec. The validation theorem states that the implementation
is correct for all port-values that can appear at the ports port , which
are declared in the declaration dec.
Library: The subgoaling function imports components from a database of pre-defined
circuits. The validation corresponds to the use of a lemma.
Subdesign: A subdesign, submitted as a lambda abstraction, may be intro-
duced. It must be implemented (second subgoal) and may be used in proving
the original design goal (first subgoal).
Design: Like the Library technique, this enables the user to apply a design that
has already been implemented. In this case, the implementation is a subdesign
introduced during the design process.
Proof: Veritas is used to prove a theorem.
4 Implementation and Extension
We now describe our reinterpretation of Formal Synthesis in Isabelle. We divide
our presentation in several parts, which roughly correspond to Isabelle
theories that we implemented: circuit abstractions and 'concrete technology',
technique reinterpretation, and new techniques.
4.1 Circuit Abstractions and Concrete Technology
We represent circuits as relations over port-values. We model kinds of signals as
sets and we use Isabelle's set theory (in HOL) to imitate the dependent types
of the Veritas logic; hence quantifiers used in our encoding are the bound
quantifiers of Isabelle's set theory. We name the connectives that serve as
constructors for circuits with the following definitions.
We will say more about the types P and W shortly. After definition, we derive
rules which characterize these operators and use these rules in subsequent proofs
(rather than expanding definitions). For example, using properties of conjunction
and implication we derive (hence we are guaranteed of their correctness) in
Isabelle the following rules which relate Join to Sat.
Associating definitions with such characterization theorems increases comprehensibility
and provides some abstraction: if we change definitions we can reuse
our theories provided the characterization theorems are still derivable. Definitions
have a second function: they distinguish circuit constructors from propositional
connectives and this restricts resolution (e.g., Join does not unify with
-) and makes it easier to write tactics that automate design.
Using the above abstractions, we can express the correctness of a (to be
synthesized) circuit with respect to a specification as
This is not quite adequate though to simulate Formal Synthesis style proofs.
A small problem is that the application of the techniques Reveal, Inside, and
Subdesign extend the Veritas signature. As there is no direct analog of dynamic
signature extension in Isabelle, we model Reveal and Inside using bounded
quantification and achieve an effect similar to signature extension by adding
declaration information to the assumptions of the proof state according to the
rules of set theory. Subdesign, which in Veritas extends the signature with a
new definition is slightly trickier to model; we regard the new definition as a
condition necessary for the validity of the correctness theorem and we express
this condition using ordinary implication. Of course, when stating the initial
goal we do not yet know which definitions will be made, so again we use a
metavariable to leave this as an unknown. Thus a theorem that we prove takes
the initial form
Analogous to the Formal Synthesis design goals we call such a theorem a design
theorem. The constant is simply the identity function and Def(?Definitions)
serves as a kind of 'definition context'. Each time a definition is made, ?Definitions
is instantiated to a conjunction of the definition and a fresh metavariable, which
can be instantiated further with definitions made later.
In the design theorem, wires are bound to types these types are defined
in theories about 'concrete technologies', e.g., representations of voltage, signals,
and the like. A simple instance is where port values range over a datatype bin
where voltage takes either the value Lo or Hi. We use Isabelle's HOL set theory
to define a set bin containing these two elements. Afterwards, we derive standard
rules for bin, e.g., case-analysis over binary values.
We extend our theory with tactics that automate most common kinds of
reasoning about binary values. For example, we have tactics that perform exhaustive
analysis on Port values quantified over bin and tactics that combine
such case analysis with Isabelle's simplifier and heuristic based proof procedure
(fast tac) for HOL. These tactics automate almost all standard kinds of
reasoning about combinational circuits.
We extend our voltage theory with a theory of gates that declares the types
of gates and axiomatizes their behavior. For example, the and gate is a predicate
of type [bin; bin; bin] ) bool whose behaviour is axiomatized as
Such axioms form part of a library of circuit specifications and are used to
synthesize parts of circuits.
4.2 Reinterpreting Techniques
We have implemented tactics that simulate the first seven techniques (for
the Proof technique we simply call the prover). Table 2 lists the tactics and the
derived rules corresponding to the Veritas techniques. The tactics are based
Reveal: reveal tac i
Inside: inside tac i
Library: library tac dels elims thm i
Subdesign: subdesign tac str i
Design: design tac dels elims i
Table
2. Isabelle Techniques (Name, Tactic, and Rule)
on the derived rules, which correspond to the validation rule associated with the
Veritas technique; their function is mostly self explanatory.
Claim and Split: The tactics apply rules that are direct translations of the
corresponding Veritas validation rules. The specification Spec 0 is supplied as a
string str to claim tac.
Reveal and Inside: These are identical to the Veritas techniques except that
internal wiring, or quantification over ports, is converted to quantification at
the metalevel and type constraints (expressed in Isabelle's HOL set-theory)
become assumptions. The rules state that if an implementation satisfies its spec-
ification, then we can wire a signal to a port with Inside or we can hide it as
an internal wiring with Reveal. reveal tac and inside tac apply their respective
rule as many times as possible to the specified goal.
Library: library tac solves the design goal R by using a previously implemented
design, supplied by the user. The rule is a form of implication elimination. The
first subgoal is instantiated with the component's design theorem. The second
is solved by extending the definition context of the overall design. The third
establishes the correctness of the design using the specification of the library
component. This involves type checking for the ports, which depends on the
concrete technology of the designs; hence we supply the tactic with additional
'elimination' and `deletion' rules, which solve the type checking goals.
Subdesign: We have given an informal rule schema (the others are formally
derived) that represents infinitely many rules (there are arbitrarily many quan-
tifiers, indicated by the ellipses). subdesign tac simulates the effect of applying
such a rule. The user gives the design goal of the subdesign to be implemented
in the same form as the initial goal. Three new subgoals are generated by the
tactic. The first corresponds to the subdesign definition and the tactic discharges
this by adding it to the definition context of the main design. The second sub-goal
commits us to implement the subdesign. The third allows us to use the
subdesign when proving the original subgoal.
Design: This solves a design goal by using a previously introduced subdesign.
The subdesign has been given as a design goal, which is part of the assumptions
of the goal to be solved. The tactic removes all port quantifiers for the assumption
by repeatedly applying the associated rule. For each port a new subgoal is
generated, which concerns the type of that port, and is solved as in the Library
technique; hence we provide lists of type checking rules to the tactic.
Proof: General proof goals arise from the application of the Claim rule. These
are undecidable in general and must be proven by the user.
4.3 Extensions of the Calculus
The techniques defined by Formal Synthesis are effective for developing combinational
circuits. However, nontrivial circuits are often best developed as instances
of parametric designs. For example, rather than designing a 16-bit adder it is
preferable to develop one parameterized by word-length and afterwards to compute
particular instances. We have developed new techniques that are compatible
with our reinterpretation of Formal Synthesis and construct such parameterized
circuits. Structural induction is used to build parameterized linear circuits and
more generally n-dimensional grids, and course-of-values induction is used to
build general recursively defined designs. We will consider course-of-values induction
below and later apply it to build a tree-structured addition circuit.
The idea for such extensions is motivated by previous work of ours on calculi
in Isabelle for synthesizing recursive logic programs [1]. There we developed
rules and tactics based on induction which extend definition contexts with templates
(a function or predicate name with a metavariable standing in for the
body) for recursive definitions and leave the user with a goal to prove where
use of the induction hypothesis builds a recursive program. This past work has
much in common with our technique-based calculus for circuits. Syntactically,
logic programs and circuits are similar: both can be described as conjunctive
combinations of primitive relations where existential quantification 'passes val-
ues'. It turns out that we could (with very minor adaptation) directly use the
rules and tactics we developed for synthesizing logic programs to build recursive
circuits. We find this kind of reuse, not just of concepts but also of actual rules
and tactics, an attractive advantage of interpreting different synthesis methodologies
in a common framework. We will address this point again in Section 6
where we consider how techniques developed for the Lambda system can also
be applied in the Formal Synthesis setting.
We construct a parameterized circuit by proving a parameterized design the-
orem, which is a design theorem where the outermost quantifier (or quantifiers)
ranges over an inductively defined datatype like the natural numbers, e.g.,
specifies an implementation whose size depends on the number n. We use induction
to instantiate Circ to a recursively specified design. Isabelle's HOL comes
with a theory of natural numbers (given as an inductive definition) from which
it is easy to derive the following course-of-value induction rule.
We use this rule as the basis for a tactic, cov induct tac, which functions as if
the following rule schema were applied.
When this rule is applied by higher-order resolution, Spec will unify with
the specification of the design theorem, and Circ with the metavariable standing
for the circuit. The first subgoal sets up a parameterized circuit definition:
an equality for Circ is defined, where it is equal to Definition , which will be a
metavariable. Our tactic discharges this subgoal by adding it to the definitions
of the main design. This leaves us with the second goal, which is the design
goal. However, now we build a circuit named Definition and instantiating this
in subsequent proof steps will instantiate our definition for Circ. Moreover, we
now have a new assumption (the induction hypothesis),which states that Circ
achieves Spec for smaller values of k. If we can reduce the problem to implementing
a smaller design, then we can resolve with the induction hypothesis and this
will build a recursive design by instantiating the definition body of Circ with an
instance of Circ. How this works in practice will become clearer in Section 5.2.
Parameterized specifications require parameterized input types, e.g., rather
than having input ports, we have parameterized input busses. To support this we
develop a theory of busses encoded as lists of specified lengths (there are other
possibilities, but this allows us to directly use Isabelle's list theory). Some of
our definitions are as follows.
A bus (B bus (n)) is a list of length n and whose members are from the set
B. The functions upper and lower return the upper and lower n bits (when they
exist) from a bus b and l'n returns the nth element of a bus. We have proven
many standard facts about these definitions, e.g., we can decompose busses.
5 Examples
We present two examples. Our first example is a comparator developed using
Formal Synthesis in [7]. It illustrates how we can directly mimic Formal Synthesis
to create hierarchical designs. Our second example uses induction to construct
a parameterized tree-shaped adder.
5.1 Comparator Cell
A comparator takes as input two words A and B, representing numerals, and
determines their relative order, i.e., which of the three cases A ! B,
holds. Such a circuit can be built in a ripple-carry fashion from comparator
cells. These cells (left-hand figure in Figure 1) compare a bit from A and B and
also have as input three bits (one for each case) which are the result of previous
comparisons (grin , eqin , and lsin), and as output three bits (grout , eqout , lsout).
The behavioral specification is:
CompCellS(a, b, grin, eqin, lsin, grout, eqout, lsout) ==
The function vl is defined in our theory of binary values:
We can now submit the following design theorem to Isabelle.
(Port a:bin b:bin grin:bin eqin:bin lsin:bin grout:bin eqout:bin lsout:bin.
CompCellS(a, b, grin, eqin, lsin, grout, eqout, lsout))
We apply an initialization tactic which sets up the design goal by implication
introduction (which moves the definition context into the assumption list) and
applies the Inside tactic; this yields the following proof state (we elide some
information about the typing of the ports).
CompCellS(a, b, grin, eqin, lsin, grout, eqout, lsout))
1. !! a b grin eqin lsin grout eqout lsout.
grin
eqin
lsin
grin
eqin
lsin
a b
a b
BitComp
CompCell
lsout
eqout
grout
grout
eqout
lsout
Fig. 1. Comp-cell and its claimed implementation
The original goal is given on the first-three lines; it contains two metavariables
?Circ and ?H standing for the implementation of CompCellS and the definitions
that will be made, respectively. These are also present in the following lines
(which contain the subgoal that must be established to prove the original goal)
and will be instantiated when proving this subgoal. Next we introduce a new
subdesign BitComp that we will use as a component. The idea is that we first
compare the two bits a and b representing the current digit. Then we combine
the result of this comparison with information coming from comparisons of less
significant bits to give the result. The specification of the BitComp subdesign is
the following.
We apply subdesign tac and this yields two subgoals. At the top, we see our
original goal where the definition context ?H was extended with a definition for
BitComp and there is a new metavariable ?G for further definitions. The first
subgoal is a design theorem for the subdesign. The second is the original design
theorem but now with an additional assumption that there is an implementation
of the subdesign which satisfies the specification BitCompS.
(Port a:bin b:bin gr:bin eq:bin ls:bin.
CompCellS(a, b, grin, eqin, lsin, grout, eqout, lsout)
1.
2. !! a b grin eqin lsin grout eqout lsout.
BitLess BitLess
BitComp
BitLess
ls eq gr
a
z
x y
Fig. 2. Claimed implementations of bit-comp and bit-less
Given this subdesign we use claim tac to state that the following specification
entails the original goal (Figure 1).
EX gr:bin eq:bin ls:bin x:bin y:bin.
andS(eq, eqin, eqout) & andS(eq, lsin, ls) & orS(ls, y, lsout)
Due to space limitations we will just sketch the remaining proof. First we
show that the claimed specification entails the original one; we prove this automatically
using a tactic that performs case-analysis and simplification. After,
we implement it by using reveal tac to strip existential quantifications (and
introduce internal wires) and then use split tac to break up the conjunctions
(and Join together subcircuits). The components are each implemented either
by introducing simpler subdesigns and implementing those (see below), or using
library tac, which accesses appropriate library parts, and using design tac, to
apply developed subdesigns.
Let us sketch one of these remaining tasks: implementing the subdesign
BitCompS. We proceed in the same manner as earlier and we introduce a new
subdesign BitLess ; then we build the above BitComp using BitLess twice as shown
in
Figure
2. Finally, BitLessS is so simple that we can claim a direct implementation
consisting of components from the library. After these steps, the design
theorem that we have proved is the following.
(Port x:bin y:bin z:bin. ?BitLess(x, y, y, z))) -?
and(eq, eqin, eqout) Join and(eq, lsin, ls) Join or(ls, y, lsout))
Sat CompCellS(a, b, grin, eqin, lsin, grout, eqout, lsout)
The metavariable ?H has become instantiated with the conjunction of the
definitions of the subdesigns used in the implementation of the main design goal,
i.e., BitComp and BitLess . In the main goal, the unknown ?Circ has become a
predicate that represents the structure of the desired circuit and is built from
these subdesigns and additional and and or gates. Overall, our proof builds the
same circuit and uses the identical sequence of technique applications as that
presented in [7]. The difference is not so much in the techniques applied, but
rather the underlying conceptualization and implementation: in the Veritas
system, the implementation is constructed at the end of the proof by the validation
functions, whereas in our setting, the design takes shape incrementally
during the proof. We find this advantageous since we can directly see the effects
of each design decision taken.
5.2 Carry Lookahead Adder
Our second example illustrates how our well-founded induction technique synthesizes
parameterized designs: We synthesize a carry lookahead adder (henceforth,
cla-adder) that is parametric in its bit-width n. For n-bit numbers, such an adder
has a height that is proportional to log(n) and thus computes the sum s and a
carry co from two numbers a, b and an incoming carry c i in O(log(n)) time. Instead
of propagating the carry from digit to digit as it is done in a ripple-carry
adder, we compute more detailed information (c.f. [9]). A generate bit g indicates
when a carry is generated by adding the digits of a and b and a propagate bit
indicates if an incoming carry is handed through. From this information we
obtain the carry bit co for the adder in the following way: it is Hi if Hi or
if both carry lookahead adder is implemented, roughly
speaking, by recursively decomposing it in two adders, each half the size of the
original. The propagate and generate bits for the overall adder are obtained by
combining the corresponding bits of the subparts with the incoming carry c i . In
the case of adding single digits of a and b (the base case of the recursion) the
propagate bit corresponds to the logical or and the generate bit corresponds to
the logical and of the digits.
The adder we synthesize is built from two components (Figure 3). The first,
cla, computes the sum s, the propagate bit p and the generate bit g from the
numbers a and b and the incoming carry c i . The second is an auxiliary component
aux , which is used to combine the propagate bit, the generate bit and the
incoming carry to the outgoing carry co . This component consists of two gates
and can be derived in a few steps. We focus here on the development of the more
interesting component cla . We can specify its behavior using data abstraction
by an arithmetic expression.
claS(n,a,b,s,p,g,ci) ==
case g of
Hi =?
Note that numbers are represented by busses (i.e. bit vectors) and the value of
a bus as a natural number is given by val. We assume busses to have a nonzero
length. Hence in the following design theorem, we restrict induction to the set
nnat(1) of natural numbers greater than zero.
bus n) b:(bin bus n) s:(bin bus n) p:bin g:bin ci:bin.
n)
cla(
Carry lookahead adder
aux
s a b
ci
aux
co
x
Fig. 3. Implementation of a cla-adder from two components
As before, we begin by shifting the definition environment Def(?H) to the as-
sumptions. After, we apply the course-of-values induction tactic which yields the
following proof state
bus n) b:(bin bus n) s:(bin bus n) p:bin g:bin ci:bin.
?cla(n, a, b, s, p, g,
(ALL n:nnat(1). Port a:(bin bus n) b:(bin bus n) s:(bin bus n) p:bin g:bin ci:bin .
?cla(n, a, b, s, p, g, ci) Sat claS(n, a, b, s, p, g, ci))
1. !!n. [-
(Port a:(bin bus
?cla(k, a, b, s, p, g, ci) Sat claS(k, a, b, s, p, g, ci)) -] ==?
a:(bin bus n) b:(bin bus n) s:(bin bus n) p:bin g:bin ci:bin .
?D16(n, a, b, s, p, g, ci) Sat claS(n, a, b, s, p, g, ci)
As previously described, ?H is extended with a definition template and further
definitions are collected in ?Q2 . The metavariable at the left-hand side of
the definition serves as the name for the design being defined. The metavariable
on the right-hand side will be instantiated with the implementation when we
prove subgoal 1. The induction hypothesis is added to the assumptions of that
subgoal: We may assume that we have an implementation for all k less than n
and that we can use this to build a circuit of size n.
We proceed by performing a case analysis on n. We type by(if-tac "n=1" 1),
which resolves subgoal 1 with the rule
where P is instantiated with
bus n) b:(bin bus n) s:(bin bus n) p:bin g:bin ci:bin .
?cla(n, a, b, s, p, g,
cla-base
s a b
cla(
c
c
aux
aux
carry
carry
c
lower( upper(
lower(
lower( upper(
l, a
l, s
l, b
u, a )
u, b
u, s
cla(
Fig. 4. Base case (left) and recursive decomposition (middle/right) where
and
(ALL n:nnat(1). Port a:(bin bus n) b:(bin bus n) s:(bin bus n) p:bin g:bin ci:bin .
?cla(n, a, b, s, p, g, ci) Sat claS(n, a, b, s, p, g, ci))
1. !!n a b s p g ci.
[-
(Port a:(bin bus
cla(k, a, b, s, p, g, ci) Sat claS(k, a, b, s, p, g, ci));
a:bin bus n; b:bin bus n; s:bin bus n; p:bin; g:bin; ci:bin;
?C23(n, a, b, s, p, g, ci) Sat claS(n, a, b, s, p, g, ci)
2. !!n a b s p g ci.
[-
(Port a:(bin bus
cla(k, a, b, s, p, g, c) Sat claS(k, a, b, s, p, g, ci));
a:bin bus n; b:bin bus n; s:bin bus n; p:bin; g:bin; ci:bin; n ~= 1 -] ==?
?C'23(n, a, b, s, p, g, ci) Sat claS(n, a, b, s, p, g, ci)
In the overall design goal, the right-hand side of the definition has been instantiated
with a conditional whose alternatives ?C23 and ?C 0
are the respective
implementations for the base and the step case. The former is to be implemented
by proving subgoal 1 under the assumption the latter by proving subgoal
2 under the assumption n 6= 1. The base case is solved by the subdesign cla base
Figure
in a few simple steps.
In the step case we build the adder from two smaller adders of half the size:
We decompose the busses a, b and s each in a segment lower(n div 2; ) containing
the inferior n div 2 bits of the bus and a segment upper(n div 2+nmod 2; ), containing
the remaining bits. The lower (upper) segments of a and b can be added by
an adder of bit-width n div 2 (n div 2+nmod 2) yielding the lower (upper) segment
of the sum s. The propagate, generate and carry bit for the overall adder and
the carry flowing from the lower to the upper part of the adder are computed
by some additional circuitry collected in a new subdesign. Accordingly we can
reformulate the specification for the component cla as follows.
claim-tac
" EX p0:bin. EX g0:bin. EX p1:bin. EX g1:bin. EX c1:bin.
claS(n div 2, lower(n div 2, a), lower(n div 2, b), lower(n div 2, s), p0, g0, c) &
claS(n
The proof that this new specification entails the original is accomplished automatically
by a tactic we implemented that performs exhaustive case analysis
on the values of the carry, propagate and generate bits and performs simplification
of arithmetic expressions.
We decompose the new specification into its subparts and implement the
recursive occurrences of the specification claS by using the induction hypothesis.
This is done by applying design tac twice. The remaining specification carryS is
solved by a new subdesign, which is implemented as shown in Figure 4. Note
that we reuse the formerly developed aux here. Thus, all design goals are solved
and after 39 steps we are finished.
bus n) b:(bin bus n) s:(bin bus n) p:bin g:bin ci:bin.
?cla(n, a, b, s, p, g,
Wire p0:bin g0:bin p1:bin g1:bin c1:bin.
?cla(n div 2, lower(n div 2,a), lower(n div 2,b), lower(n div 2,s), p0, g0, ci) Join
(Port a:bin b:bin s:bin p:bin g:bin ci:bin.
(Wire w:bin. xor(a, b, w) Join xor(w, ci, s) Join or(a, b, p) Join and(a, b,
(Port p0:bin g0:bin p1:bin g1:bin c1:bin p:bin g:bin ci:bin .
(Port u:bin v:bin w:bin x:bin .
(ALL n:nnat(1). Port a:(bin bus n) b:(bin bus n) s:(bin bus n) p:bin g:bin ci:bin .
?cla(n, a, b, s, p, g, ci) Sat claS(n, a, b, s, p, g, ci))
In our definition context, ?H has become instantiated by four definitions.
There is a one to one correspondence between the implementations shown in
Figures
3 and 4 and the predicates defining them.
6 Comparison and Conclusion
We have combined two development methodologies: Formal Synthesis as implemented
in Veritas and resolution based synthesis in Isabelle. The result is a
simple realization of Formal Synthesis that is compatible with other approaches
to resolution based synthesis. Moreover, our implementation supports structural,
behavioral, and data-abstraction as well as independence from the concrete circuit
technology. Our implementation is based on a series of extensions to higher-order
logic and we were able to directly utilize standard Isabelle theories in our
work as well as Isabelle's simplification tactics. Most of our derived rules were
proven in one step proofs by Isabelle's classical prover.
The idea of using first-order resolution to build programs goes back to Green
in the 1960s [6]. More recently, within systems like Isabelle, interactive proof
by higher-order resolution has been used to construct verified programs and
hardware designs [1, 4, 10]. The work most closely related to ours is that of Mike
Fourman's group based on the Lambda system, which is a proof development
system that supports synthesis based on second-order resolution [5]. Motivated
by Isabelle, they too use rules in order to represent the design state. The difference
lies in the particular approach they use in proof construction; 1 instead
of using general purpose techniques as in Formal Synthesis, they derive (intro-
duction) rules for each component from its definition. These rules are applied
to the proof state in order to simplify the specification and thereby refine the
implementation. The specialized form of their rules supports a higher degree of
automation than general purpose techniques. Conversely, the generality of the
Formal Synthesis techniques provides a more abstract view of the design process
and better supports hierarchical development.
Just as we have extended Formal Synthesis with techniques for induction,
it is possible to adapt their methodology within our setting. We have carried
out some initial experiments which indicate both that we can use our Formal
Synthesis techniques to synthesize Lambda style design rules and that such rules
can be combined with techniques in circuit development. As a simple illustration,
suppose we have an axiom for an adder circuit given by
We can apply our techniques to synthesize a partial implementation for a schematic
specification Spec, which contains a subexpression of the form a
a:nat b:nat. ?Circ(a,b) Sat Spec(a+b)
After applying the techniques Inside, Claim, Reveal and Split, and solving the
proof obligation from Claim, we arrive at the following intermediate proof state.
a:nat b:nat.
(Wire s:nat.
1. !!a b s. [- a:nat; b:nat; s:nat -] ==?
After discharging the first subgoal by assuming it and removing the Port quan-
tifiers, we arrive at a Lambda-style design rule.
?a:nat; ?b:nat
Explaining this as a technique, it says that we can reduce a specification involving
the addition of a + b to one instead involving s. The validation tells us that the
1 In the end, there are only so many (700?) proof development systems, but there are
many more strategies for constructing proofs.
circuit built for Spec(s), when hooked up appropriately to an adder, builds a
circuit for the original specification. It would not be difficult to build tactics that
enable us to integrate such Lambda techniques with the others we developed,
taking advantage of the different strengths of these two approaches.
We conclude with a brief mention of deficiencies and future work. Currently,
the amount of information present during proof can be overwhelming. A short-term
solution is to instruct Isabelle to elide information; however, a graphical
interface, like that in Lambda would be of tremendous value both in displaying
designs and giving specifications. Another weakness is automation. We have
automated many simple kinds of reasoning by combining Isabelle's simplifiers
with case-analysis over binary values. The resulting tactics are effective, but
their execution is slow. There are decision procedures based on BDDs that can
more effectively solve many of these problems. We have started integrating one
of these with our synthesis environment, namely, a decision procedure for a
decidable monadic logic that is well-suited for modeling hardware [2]. We hope
that this is a step towards a synthesis framework in which different verification
methodologies may be integrated.
--R
Logic frameworks for logic programs.
Hardware verification using monadic second-order logic
Hardware verification using higher-order logic
Interactive program derivation.
Formal system design - interactive synthesis based on computer-assisted formal reasoning
Application of theorem proving to problem solving.
Formal synthesis of digital sys- tems
Computer Architecture
Isabelle : a generic theorem prover
--TR | higher-order logic;hardware verification and synthesis;higher-order unification;theorem proving |
333469 | A Subspace, Interior, and Conjugate Gradient Method for Large-Scale Bound-Constrained Minimization Problems. | A subspace adaptation of the Coleman--Li trust region and interior method is proposed for solving large-scale bound-constrained minimization problems. This method can be implemented with either sparse Cholesky factorization or conjugate gradient computation. Under reasonable conditions the convergence properties of this subspace trust region method are as strong as those of its full-space version.Computational performance on various large test problems is reported; advantages of our approach are demonstrated. Our experience indicates that our proposed method represents an efficient way to solve large bound-constrained minimization problems. | Introduction
. Recently Coleman and Li [1, 2, 3] proposed two interior and reflective Newton
methods to solve the bound-constrained minimization problem, i.e.,
min
algorithms are interior methods since the iterates fx k g are in the strict interior of the feasible region, i.e.,
ug. These two methods differ in that a line search to update iterates is used
in [2, 3] while a trust region idea is used in [1]. However, in both cases convergence is accelerated with
the use of a novel reflection technique.
The line search method version appears to be computationally viable for large-scale quadratic problems
[3]. Our main objective here is to investigate solving large-scale bound-constrained nonlinear
minimization problems (1.1), using a large-scale adaptation of the Trust-region Interior Reflective (TIR)
approach proposed in [1].
The TIR method [1], outlined in FIG. 1, elegantly generalizes the trust region idea for unconstrained
minimization to bound-constrained nonlinear minimization. Here g k
. The crucial
role of the (diagonal) affine scaling matrices D k and C k will become clear in x2.
An attractive feature of the TIR method [1] is that the main computation per iteration is solving a
Research partially supported by the Applied Mathematical Sciences Research Program (KC-04-02) of the Office of Energy
Research of the U.S. Department of Energy under grant DE-FG02-90ER25013.A000, and by NSF, AFOSR, and ONR through
grant DMS-8920550, and by the Advanced Computing Research Institute, a unit of the Cornell Theory Center which receives
major funding from the National Science Foundation and IBM Corporation, with additional support from New York State and
members of its Corporate Research Institute.
y Computer Science Department, Cornell University, Ithaca, NY 14850.
z Computer Science Department and Center for Applied Mathematics, Cornell University, Ithaca NY 14850.
The TIR Method [1]
For
1. Compute define the quadratic model
2. Compute a step s k , with based on the subproblem:
3. Compute
4. If ae k ? - then set x
5. Update D k as specified below.
Updating Trust Region Size D k
1. If ae k - then set D k+1 2 (0;
2. If ae k 2 (-;
3. If ae k - j then
if
set
otherwise,
set
FIG. 1. The TIR Method for Minimization Subject to Bounds
standard unconstrained trust region subproblem:
min
The method of Mor- e and Sorensen [4] can be directly applied to (1.2) if Cholesky factorizations of
matrices with the structure of H k can be computed efficiently. However, this method is unsuitable for
large-scale problems if the Hessian H k is not explicitly available or (sparse) Cholesky factorizations are too
expensive. Recently, Sorensen [5] proposed a new method for solving the subproblem (1.2) using matrix
vector multiplications. Nonetheless, the effectiveness of this approach for large-scale minimization,
particularly in the context of our trust region algorithm, is yet to be investigated.
We take the view that solving the full space trust region subproblem (1.2) is too costly for a large-scale
problem. This view is shared by Steihaug [6] who proposes an approximate (conjugate gradient)
approach. Steihaug's approach to (1.2) seems viable although our computational experience (see Table
indicates that important negative curvature information can be missed, causing a significant increase
in the number of minimization iterations.
In this paper, we propose an alternative: an approximate subspace trust region approach (STIR). We
verify that, under reasonable conditions, the convergence properties of this STIR method are as strong
as those of its full-space version. We explore the use of sparse linear algebra techniques, i.e., sparse
factorization and preconditioned conjugate gradients, in the context of this approach.
In addition, we demonstrate the benefits of our affine scaling, reflection and subspace techniques
with computational results. First, for (1.1), our affine scaling technique outperforms the classical Dikin
scaling [7], at least in the context of our algorithm. Second, we examine our method with and without
reflection. We show the reflection technique can substantially reduce the number of minimization itera-
tions. Third, our computational experiments support the notion that the subspace trust region method is
a promising way to solve large-scale bound-constrained nonlinear minimization problems. Compared to
the Steihaug [6] approach, the subspace approach is more likely to capture negative curvature information
and consequently leads to better computational performance. Finally, our subspace method is competitive
with, and often superior to, the active set method in LANCELOT [8].
The paper is organized as follows. In x2, we briefly summarize the existing TIR method. Then we
provide a computational comparison of the subspace trust region method and the Steihaug algorithm in
the context of unconstrained minimization in x3. We introduce a subspace method STIR, and discuss its
convergence properties, in x4. Issues concerning the computation of negative curvature directions and
inexact Newton steps are discussed in x5; computational results are provided indicating that performance
is typically not impaired by using an inexact Newton step. Concluding remarks appear in x7. The
convergence analysis of the STIR method is included in the appendix.
2. The TIR Method. In this section we briefly review the full-space TIR method [1], sketched in
FIG. 1. This method closely resembles a typical trust region method for unconstrained minimization,
min x2! n f(x). The key difference is the presence of the affine scaling (diagonal) matrices D k and C k .
Next we briefly motivate these matrices and the TIR algorithm.
The trust region subproblem (1.2) and the affine scaling matrices D k and C k arise naturally from
examining the first-order Kuhn-Tucker conditions for (1): if a feasible point l ! x is a local minimizer,
then x i is not at any of its bounds. This characterization is
expressed in the nonlinear system of equations
where
and the vector v(x) 2 ! n is defined below: for each 1 - i - n,
(i). If
(ii). If
(iii). If
(iv). If g i - 0 and l
The nonlinear system (2.1) is not differentiable everywhere; nondifferentiability occurs when v
Hence we avoid such points by maintaining strict feasibility, i.e., restricting x k 2 int(F). A Newton step
for (2.1) is then defined and satisfies
where
Here J v n\Thetan corresponds to the Jacobian of jv(x)j. Each diagonal component of the diagonal
matrix J v equals to zero or \Sigma1. If all the components of l and u are finite, J
we define J v
Equation (2.3) suggests the use of the affine scaling transformation: -
x. This transformation
reduces the constrained problem (1.1) into an unconstrained problem: a local minimizer of (1.1) corresponds
to an unconstrained minimizer in the new coordinates -
x (for more details, see [1]). Therefore a
reasonable way to improve x k is to solve the trust region subproblem
min
where
s:
s. Subproblem (2.5) is equivalent to the following problem in the original variable space:
min
where
In addition to the close resemblance to an unconstrained trust region method, the TIR algorithm has
strong convergence properties with explicit conditions on steps for optimality. We now describe these
conditions.
The TIR algorithm requires strict feasibility, i.e., x_k in int(F). We use alpha_k[d_k] to denote the
step obtained from d_k after a possible step-back to maintain strict feasibility. Let alpha_k* denote the
minimizer along d_k within the feasible trust region, i.e.,

alpha_k* = argmin_{alpha} { psi_k(alpha d_k) : ||alpha D_k d_k|| <= Delta_k, x_k + alpha d_k in F }.

The above definitions imply that the reduction psi_k(alpha_k[d_k]) is comparable to the best feasible
reduction along d_k.

Explicit conditions which yield first and second-order optimality are analogous to those of trust region
methods for unconstrained minimization [1]:

(AS.3)  psi_k(s_k) <= beta_g psi_k(alpha_k[d_k^g]), where d_k^g = -D_k^{-2} g_k;
(AS.4)  psi_k(s_k) <= beta_q psi_k(alpha_k[p_k]), where p_k is a solution to
        min_{s in R^n} { psi_k(s) : ||D_k s|| <= Delta_k },

and beta_g and beta_q are two positive constants.

Condition (AS.3) is necessary for first-order convergence; (AS.4), together with (AS.3), is necessary
for second-order convergence. Both conditions (AS.3) and (AS.4) are extensions of convergence conditions
for unconstrained trust region methods. In particular, when no finite bounds are present, (AS.3) and (AS.4)
are exactly what is required of trust region methods for unconstrained minimization problems.

Satisfaction of both conditions (AS.3) and (AS.4) is not difficult. For example, one can choose s_k
so that psi_k(s_k) is the minimum of the values psi_k(alpha_k[-D_k^{-2} g_k]) and psi_k(alpha_k[p_k]),
where p_k is an exact trust region solution. However, this does not lead to
an efficient computation process. In [3] and [2], we have utilized a reflection technique to permit further
possible reduction of the objective function along a reflection path on the boundary. We have found in [3]
and [2] that this reflection process significantly enhances performance for minimizing a general quadratic
function subject to simple bounds.
FIG. 2. Reflection Technique

For all the computational results in this paper, s_k is determined from the best of three points,
corresponding to psi_k(alpha_k[p_k]), psi_k(alpha_k[p_k^R]), and psi_k(alpha_k[-D_k^{-2} g_k]), where
p_k^R denotes the piecewise direction path with p_k reflected on the first boundary it encounters; see FIG. 2.
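A minimal sketch of a single reflection (our own illustration; the small step-back needed for strict
feasibility is left to the caller):

    function pr = reflect_step(x, p, l, u)
    % Reflect step p at the first bound it encounters (single reflection).
    alpha = inf; j = 0;
    for i = 1:numel(x)                           % stepsize to each bound along p
        if p(i) > 0 && isfinite(u(i)), a = (u(i)-x(i))/p(i);
        elseif p(i) < 0 && isfinite(l(i)), a = (l(i)-x(i))/p(i);
        else, a = inf; end
        if a < alpha, alpha = a; j = i; end
    end
    if alpha >= 1                                % no bound met: keep p
        pr = p;
    else                                         % reflect the remaining piece at bound j
        q = p; q(j) = -q(j);
        pr = alpha*p + (1-alpha)*q;
    end
    end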
We can appreciate the convergence results for this approach by observing the role of the affine scaling
matrix D_k. For the components x_i which are approaching the "correct" bounds, the sequence of directions
{-D_k^{-2} g_k} becomes increasingly tangential to these bounds. Hence, the bounds will not prevent a large
step size along {-D_k^{-2} g_k} from being taken. For the components x_i which are approaching the "incorrect"
bounds, {-D_k^{-2} g_k} points away from these bounds at relatively large angles (the corresponding diagonal
components of D_k are relatively large and g_k points away from these bounds). Hence, a reduction of at
least a fixed fraction of that achievable along -D_k^{-2} g_k, as required by (AS.3),
implies that the scaled gradient {D_k^{-2} g_k} converges to zero (i.e., first-order optimality).
The scaling matrix used in our approach is related to, but different from, the scaling typically used
in affine scaling methods for linear programming. The affine scaling matrix D_k^affine,
commonly used in affine scaling methods for linear programming, is formed from
the distances of the variables to their closest bounds. Our scaling satisfies D_k^{-2} = D_k^affine
only when |v_i(x_k)| equals the distance from x_i to its nearest bound for every i.
(Note that even in this case we employ the square root of the quantities
used to define D_k^affine.)

Before we investigate a subspace adaptation of TIR, we demonstrate the effectiveness of our reflection
idea and affine scaling technique. We consider random problem instances of molecule minimization
[9, 10], which minimize a quartic subject to bounds on the variables. Tables 1 and 2 list the average
number of iterations (over ten random test problem instances for each entry) required for the different
techniques under comparison. The notation > in front of a number indicates that the average is at
least this number, because the iteration count exceeded 1000, the maximum allowed, for some instance.
The details of the algorithm implementation are given in x6.
                           100      200      400      800      1000
With reflection            34.1     41.7     66.8     83.4      93.6
Without reflection         71.4   >210.1   >425.4   >302.2    >408.5
TABLE 1. The STIR algorithm with and without reflection: number of iterations

                           100      200      400      800      1000
unconstrained: D_k         38.6     47.3     61.4     72.7      93.6
constrained: D_k^affine  >517.4   >617.6   >517.3   >1000     >1000
TABLE 2. Comparison of the STIR scaling D_k and Dikin scaling D_k^affine: number of iterations

Table 1 demonstrates the significant difference made by a single reflection. The only difference
between the rows with and without reflection is the following. Without reflection, s_k is determined by the
best of the two points based on psi_k(alpha_k[p_k]) and psi_k(alpha_k[-D_k^{-2} g_k]); with reflection,
s_k is determined by the best of the three points based on psi_k(alpha_k[p_k]), psi_k(alpha_k[p_k^R]),
and psi_k(alpha_k[-D_k^{-2} g_k]). The superiority of using the
reflection technique is clearly demonstrated with this problem.
In Table 2, we compare the computational advantage of the selection D_k over D_k^affine; the only
difference between the two runs is the scaling matrix. We differentiate between problems that have an
unconstrained solution (no bounds active at a solution) and those with a constrained solution. We observe
that, for unconstrained problems, there is no significant difference between the two scaling matrices.
However, for the constrained problems we tested, the choice D_k is clearly superior. We observe that when
D_k is used, the number of iterations for a constrained problem is roughly the same as that for the
corresponding unconstrained problem. For D_k^affine, on the other hand, the number of iterations for a
constrained problem is much larger than for the corresponding unconstrained problem.
3. Approximation to the Trust Region Solution in Unconstrained Minimization. There are two
possible ways to approximate a full-space trust region solution in unconstrained minimization.
Byrd, Schnabel, and Schultz [11] suggest substituting the full trust region subproblem in the unconstrained
setting by

min_{s in S_k} { g_k' s + (1/2) s' H_k s : ||s|| <= Delta_k },

where S_k is a low-dimensional subspace. (Our implementation employs a two-dimensional choice for S_k.)
Another possible consideration for the approximation of (1.2) is the Steihaug idea [6], also proposed
in the large-scale unconstrained minimization setting. In a nutshell, Steihaug proposes applying the
method of preconditioned conjugate gradients (PCG) to the current Newton system until either negative
curvature is revealed, the current approximate solution reaches the boundary of the trust region, or the
Newton system residual is sufficiently reduced.
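For reference, here is a bare-bones version of the Steihaug iteration as just described (our own sketch,
unpreconditioned for brevity; it is not the FIG. 11 implementation):

    function s = steihaug_cg(H, g, Delta, tol, maxit)
    % Truncated CG for min g'*s + s'*H*s/2 subject to ||s|| <= Delta.
    n = numel(g); s = zeros(n,1); r = -g; d = r;
    for it = 1:maxit
        Hd = H*d;  curv = d'*Hd;
        if curv <= 0                              % negative curvature: run to the boundary
            s = s + boundary_step(s, d, Delta); return
        end
        alpha = (r'*r)/curv;
        if norm(s + alpha*d) >= Delta             % step would leave the trust region
            s = s + boundary_step(s, d, Delta); return
        end
        s = s + alpha*d;
        rnew = r - alpha*Hd;
        if norm(rnew) <= tol*norm(g), return; end % Newton residual sufficiently reduced
        d = rnew + ((rnew'*rnew)/(r'*r))*d;  r = rnew;
    end
    end
    function p = boundary_step(s, d, Delta)
    % positive tau with ||s + tau*d|| = Delta
    a = d'*d; b = 2*s'*d; c = s'*s - Delta^2;
    p = ((-b + sqrt(b^2 - 4*a*c))/(2*a))*d;
    end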
We believe that a subspace trust region approach better captures the negative curvature information
compared to the Steihaug approach [6]. To justify this we have conducted a limited computational study
in the unconstrained minimization setting.
We implement the subspace method with the subspace S_k defined by the gradient direction g_k and the
output of a Modified Preconditioned Conjugate Gradient (MPCG) method applied to the linear Newton
system H_k s = -g_k. The output is either an inexact Newton step s_k^IN, satisfying
H_k s_k^IN = -g_k + r_k with a small relative residual ||r_k||/||g_k||,
or a direction of negative curvature, detected by MPCG. Algorithm MPCG is given in greater detail in
FIG. 12, Appendix B. Our implementation of the Steihaug method (FIG. 11) can also be found in Appendix B.
Both the Steihaug and subspace implementations are wrapped in a standard trust region framework for
the unconstrained minimization problem. For both methods the preconditioning matrix used is a positive
diagonal matrix G_k computed from the diagonal of H_k.
The same strategy is used to update Delta_k (see x6 for more details). The trust region norm is the
2-norm for the subspace method and the G_k-norm for the Steihaug method ([6]).
We used twenty different unconstrained nonlinear test problems. All but four are test problems
described in [12], but with all the bound constraints removed. The problems EROSENBROCK and
EPOWELL are taken from [13]. The last two problems, molecule problems MOLE1 and MOLE3, are
described in [9, 10]. For all problems, the number of variables n is 260. The minimization algorithm
terminates when ||g_k|| is sufficiently small; the same tolerance parameter is used in both FIG. 11 and FIG. 12.
Tables 3 and 4 compare the Steihaug and subspace methods described above in terms of the number of
minimization iterations and the total number of conjugate gradient (CG) iterations. Table 3 shows problems
for which negative curvature was not detected, and Table 4 shows problems for which negative curvature
was detected. Although not included here, the function values and gradient norms (upon termination) were
virtually the same for both methods on all problems; we therefore discuss only the differences in
iteration counts. The differences in minimization and CG iteration counts are plotted in FIG. 3 and FIG. 4.
Most notable in Table 3 and the graphs of FIG. 3 is how strikingly similar the results are for the
Steihaug and subspace methods; the minimization with each method stops within two iterations of the
other in all cases. Furthermore, both methods take an identical number of total CG iterations except for
the problem BROWN1 where the Steihaug method takes four more iterations. When negative curvature
is encountered, shown in Table 4 and in FIG. 4, the iteration counts for each method are again similar
for a few problems. For most problems, however, the Steihaug method takes more iterations, and for
some problems the difference is substantial. This is particularly true for the problems CHAINWOOD,
MOLE1 and MOLE3 (for CHAINWOOD, problem 3 in FIG. 4, the total difference in iteration counts is
explicitly noted as it is beyond the scale of the graph). In general the subspace method does take more CG
iterations on problems with negative curvature, but it is these extra, relatively inexpensive CG iterations
that reduce the total number of minimization iterations. (Again, for the problem MOLE3 the difference
in CG iterations is explicitly noted in FIG. 4 as it is beyond the scale of the graph.)

                       Minimization             CG
Problem            Subspace  Steihaug   Subspace  Steihaug
1.  BROWN1            27        29         39        43
2.  BROWN3             6         6          6         6
3.  BROYDEN1A         11        11         81        81
4.  BROYDEN1B          5         5         34        34
5.  BROYDEN2B          7         7         71        71
6.  CHAINSING         22        22        188       188
7.  CRAGGLEVY         21        21        125       125
8.  DEGENSING         22        22        188       188
9.  EPOWELL
10. GENSING           22        22         83        83
11. TOINTBROY          7         7         58        58
12. VAR               43        43       5590      5590
TABLE 3. Comparison when only positive curvature is encountered: number of iterations

                       Minimization             CG
Problem            Subspace  Steihaug   Subspace  Steihaug
1. AUGMLAGN           36        29        267       228
2. BROYDEN2A          22        19        247       196
3. CHAINWOOD         156       988       3905      3878
4. EROSENBROCK        44        46         52        86
5. GEROSE             23        33        166       165
6. GENWOOD            58        63        304       275
7. MOLE1              46       119        460       376
8. MOLE3             125       186       6311      5356
TABLE 4. Comparison when negative curvature is encountered: number of iterations

FIG. 3. Comparison of subspace and Steihaug trust region methods for unconstrained problems
(positive curvature problems; excess Steihaug and excess subspace iterations)

FIG. 4. Comparison of subspace and Steihaug trust region methods for unconstrained problems
(negative curvature problems)
A closer examination of the behavior of the two algorithms indeed shows that when negative curvature
is not encountered, both methods take similar steps. (In this case, if the trust region is large enough,
both methods in FIG. 11 and FIG. 12 will stop under the same conditions after the same number of
CG iterations, as displayed in Table 3.) By the nature of the algorithms, if the Steihaug method detects
negative curvature, then so will the subspace approach. However if the subspace algorithm detects
negative curvature, the Steihaug method may terminate before it finds negative curvature; and then it does
not converge (to a local minimizer) as quickly as the subspace method. The important role that negative
curvature plays is supported by the fact that the subspace method often moves in a substantial negative
curvature direction when the Steihaug method overlooks negative curvature. Furthermore, it is when the
trust region radius Delta_k is small that the Steihaug method is most likely to stop early and miss negative
curvature. Thus it appears that the effectiveness of the Steihaug idea decreases as nonlinearity increases.
4. The STIR Method. Supported by the discussion in x3, we propose a large-scale subspace adaptation
of the TIR method [1] for the bound constrained problem (1.1).
In moving from the unconstrained subspace approach to the box-constrained setting, it seems natural
to replace the full trust region subproblem (1.2) by the following subspace subproblem

(4.1)   min_{s in S_k} { psi_k(s) : ||D_k s|| <= Delta_k },

where S_k is a small-dimensional subspace in R^n, e.g., a two-dimensional subspace. A two-dimensional
subspace for the scaled trust region subproblem (2.5) can be selected from the span of the two vectors
g^_k = D_k^{-1} g_k and a negative curvature direction w^_k of M^_k. This suggests that we form S_k from the
directions {D_k^{-2} g_k, D_k^{-1} w^_k}. Will such subspace formulations succeed in achieving optimality?
We examine this issue in more detail.
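Once S_k is fixed, (4.1) is a two-variable problem that can be solved at negligible cost. The following
sketch (ours; the names and the crude boundary search by angle sweep are our own choices; an exact 2 x 2
eigensolve could replace the sweep):

    function s = subspace_trs(g, Hfun, D, W, Delta)
    % Solve min g'*s + s'*H*s/2 over s in span(W) with ||D*s|| <= Delta.
    % Hfun(v) returns H*v (here H stands for H_k + C_k); W is n-by-2.
    Q  = orth(D*W);                        % orthonormal basis of D*span(W)
    B  = D\Q;                              % s = B*t  =>  ||D*s|| = ||t||
    gt = B'*g;                             % reduced gradient (2-vector)
    Ht = B'*[Hfun(B(:,1)), Hfun(B(:,2))];  % reduced 2x2 Hessian
    Ht = (Ht + Ht')/2;
    best = 0; t = [0;0];
    if all(eig(Ht) > 0)                    % interior candidate
        ti = -Ht\gt;
        if norm(ti) <= Delta && qmodel(gt,Ht,ti) < best
            t = ti; best = qmodel(gt,Ht,ti);
        end
    end
    for th = linspace(0, 2*pi, 361)        % boundary candidates
        tb = Delta*[cos(th); sin(th)];
        if qmodel(gt,Ht,tb) < best, t = tb; best = qmodel(gt,Ht,tb); end
    end
    s = B*t;
    end
    function m = qmodel(g, H, t), m = g'*t + 0.5*t'*(H*t); end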
It is clear that including the scaled gradient direction D_k^{-2} g_k in S_k, and satisfying (AS.3), will
guarantee convergence to a point satisfying the first-order optimality conditions. Let us assume for now
that {x_k} converges to a first-order point x*. To guarantee that x* is also a second-order point, the
following conditions must be met.

Firstly, it is clear that a "sufficient negative curvature" condition must be carried over from the
unconstrained setting [14]. To this end, we can require that sufficient negative curvature of the matrix M^_k
be captured if it is indefinite, i.e., S_k must contain a direction w_k = D_k^{-1} w^_k such that

(4.2)   w^_k' M^_k w^_k <= beta_nc lambda_min(M^_k) ||w^_k||^2

for some fixed beta_nc in (0, 1].

Secondly, it is important that a solution to (4.1) lead to a sufficiently large step - the potential
difficulty is running into a (bound) constraint immediately. This difficulty can be avoided if the stepsize
sequence, along the trust region solution direction, is bounded away from zero. Subsequently, we define:

DEFINITION 4.1. A direction sequence {s_k} has large-step-size if the maximal strictly feasible stepsizes
along the s_k are bounded away from zero, i.e., lim inf_{k -> infinity} of the stepsize to the boundary of F
along s_k is positive.

If fast local convergence is desired, then the subspace S_k should also contain a sufficiently accurate
approximation to the Newton direction when M^_k is positive definite. An
inexact Newton step s_k^IN for problem (1.1) is defined as an approximate solution to

M^_k s^ = -g^_k,   s^_k^IN = D_k s_k^IN,

with accuracy measured by the residual r_k = M^_k s^_k^IN + g^_k: we require ||r_k||/||g^_k|| to be
sufficiently small.
Can we select two-dimensional subspaces satisfying all three properties and thus guarantee quadratic
(superlinear) convergence to a second-order point? The answer, in theory, is yes - the subspace adaptation
of the TIR algorithm (STIR) in FIG. 5 is an example of a subspace method capable of achieving the desired
properties.

To ensure convergence to a solution, the solution sequence of the subspace trust region subproblems
(4.1) needs to have large-step-size. Lemma 1 below indicates that this can be achieved if we set
S_k = span{w_k, z_k}, where {w_k} and {z_k} are two sequences of uniformly independent vectors, in the
sense that lim inf_{k -> infinity} ||z_k - w_k|| > 0 when normalized so that ||D_k w_k|| = ||D_k z_k|| = 1,
each with large-step-size.

LEMMA 1. Assume that {w_k} and {z_k} have large-step-size, with ||D_k w_k|| = ||D_k z_k|| = 1.
Moreover, lim inf_{k -> infinity} ||z_k - w_k|| > 0. Then the solution sequence {p_k} of the subproblem (4.1)
with S_k = span{w_k, z_k} has large-step-size.
Proof. The proof is very straightforward and is omitted here.
For the STIR method, a natural extension of the condition (AS.4) necessary for second-order optimality
is the following.

(AS.5)  Assume that p_k is a solution to the subspace subproblem (4.1). Then
        psi_k(s_k) <= beta_s psi_k(alpha_k[p_k]), where beta_s is a positive constant.

Theorem 2 below, with the proof provided in the Appendix, formalizes the convergence properties
of STIR.

THEOREM 2. Let the level set L = {x in F : f(x) <= f(x_0)} be compact and let f
be twice continuously differentiable on L. Let {x_k} be the sequence generated by the STIR algorithm in
FIG. 5. Then:
1. If (AS.3) is satisfied, then the Kuhn-Tucker condition is satisfied at every limit point.
The STIR Method

For k = 0, 1, 2, ...:
1. Compute f(x_k), g_k, and H_k; define the quadratic model
   psi_k(s) = g_k' s + (1/2) s' (H_k + C_k) s.
2. Compute a step s_k, with x_k + s_k in int(F), based on the subspace subproblem
   min { psi_k(s) : s in S_k, ||D_k s|| <= Delta_k },
   where the subspace S_k is set up as below.
3. Compute rho_k, the ratio of the actual reduction in f to the reduction predicted by psi_k.
4. If rho_k > mu then set x_{k+1} = x_k + s_k; otherwise set x_{k+1} = x_k.
5. Update Delta_k as specified in FIG. 6.

Determine Subspace S_k

[Assume that w-_k = D_k^{-1} w^_k has large-step-size, where w^_k captures the negative curvature of
M^_k whenever M^_k is indefinite; epsilon > 0 is a small positive constant. S_k always contains the
scaled gradient direction D_k^{-2} g_k.]

IF M^_k is positive definite
    S_k = span{ D_k^{-2} g_k, s_k^IN }
ELSE (M^_k is not positive definite)
    IF D_k^{-2} g_k and w-_k are sufficiently independent (relative to epsilon)
        S_k = span{ D_k^{-2} g_k, w-_k }
    ELSE
        S_k = span{ D_k^{-2} g_k, a safeguarded modification of w-_k }
    END
END

FIG. 5. The STIR Method for Minimization Subject to Bound Constraints
2. Assume that both (AS.3) and (AS.5) are satisfied and that w-_k in FIG. 5 contains sufficient negative
curvature information whenever M^_k is indefinite, i.e., (4.2) holds with a fixed beta_nc in (0, 1]. Then:
(a) If every limit point of {x_k} is nondegenerate, then there is a limit point x* at which
both the first and second-order necessary conditions are satisfied.
(b) If x* is an isolated nondegenerate limit point, then both the first and second-order
necessary conditions are satisfied at x*.
(c) If M^(x*) is nonsingular for some limit point x* of {x_k} and M^_k is
positive definite, then M^(x*) is positive definite, {x_k} converges to x*, all iterations are
eventually successful, and {Delta_k} is bounded away from zero.

The degeneracy definition is the same as in [1].

DEFINITION 4.2. A point x in F is nondegenerate if, for each index i, g_i(x) = 0 implies l_i < x_i < u_i.

We have established that in principle it is possible to replace the full-dimensional trust region
subproblem with a two-dimensional variation. However, the equally strong convergence properties of
STIR hinge on obtaining a (guaranteed) sufficient negative curvature direction with large-step-size. We
discuss this next.
5. Computing Negative Curvature Directions with Large-Step-Size. Is it possible, in principle,
to satisfy both the sufficient negative curvature requirement (4.2) and the large-step-size property? The
answer is yes: let u_k be a unit eigenvector of M^_k corresponding to the most negative eigenvalue, i.e.,
M^_k u_k = lambda_min(M^_k) u_k. It is easily verified that, for any convergent subsequence with
lim_{k -> infinity} lambda_min(M^_k) < 0, the sequence {D_k^{-1} u_k} has large-step-size.

However, it is not computationally feasible to compute the (exact) eigenvector u_k. Therefore,
approximations, and short cuts, are in order. Can we compute approximate eigenvectors with large-step-size?

A good approximation to an eigenvector corresponding to an extreme eigenvalue can usually be
obtained through a Lanczos process [15]. Using the Lanczos method for M^_k with an initial vector q^_1,
approximate eigenvectors at the j-th step are computed in the Krylov space

K(M^_k, q^_1, j) = span{ q^_1, M^_k q^_1, ..., M^_k^{j-1} q^_1 }.

In the context of our algorithm, scaled gradient vectors such as D_k^{-1} g_k are natural choices for the
initial vector when applying the Lanczos method.
Our key observation is the following. If a sequence of starting vectors {D_k^{-1} q^_1} has large-step-size,
then each sequence of directions drawn from D_k^{-1} K(M^_k, q^_1, j) retains this property.

Now assume that w^_k is the computed vector from the Lanczos method which contains the sufficient
negative curvature information with respect to M^_k. It can be verified, based on the three-term
recurrence relation, that the sequences {D_k^{-1} q^_j} all have large-step-size if the Lanczos vectors
{q^_1, ..., q^_j} retain orthogonality. Since w^_k is in the Krylov space K(M^_k, q^_1, j), it is clear that
{w-_k = D_k^{-1} w^_k} has large-step-size. In other
words, in order to generate a negative curvature direction sequence with large-step-size, orthogonality
needs to be maintained in the Lanczos process. Fortunately, as discussed in [16], it is quite reasonable
to assume that until all of the distinct eigenvalues of the original matrix have been approximated well,
orthogonality of the Lanczos vectors is well maintained. Since we are only interested in a direction with
sufficient negative curvature, we expect that it can be computed before loss of orthogonality occurs.
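In practice the Lanczos process can be delegated to a library routine. A minimal sketch (ours;
Mhat_apply and Dk are hypothetical placeholders for the product with M^_k and the current scaling):

    % Approximate the most negative eigenpair of the symmetric operator Mhat.
    Mfun = @(x) Mhat_apply(x);                % user-supplied product with Mhat_k
    opts = struct('issym', 1, 'tol', 1e-2);   % a loose tolerance suffices here
    [w, lam] = eigs(Mfun, n, 1, 'SA', opts);  % 'SA' = smallest algebraic eigenvalue
    if lam < 0
        wbar = Dk \ w;    % candidate negative curvature direction in x-space
    end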
A second (and cheaper) strategy is to employ a modified preconditioned conjugate gradient scheme,
e.g., MPCG in FIG.12. Unfortunately, this process is not guaranteed to generate sufficient negative
curvature; nonetheless, as indicated in [17], the MPCG output will satisfy the large-step-size property.
Finally we consider a modified Cholesky factorization, e.g., [18], to obtain a negative curvature
direction. Assume that M^_k is indefinite and d_k is obtained from the modified Cholesky method. We
demonstrate below that {d_k} has large-step-size under a nondegeneracy assumption.

The negative curvature direction d_k computed from the modified Cholesky method (see
[18], page 111) satisfies

L_k' (P_k d_k) = e_{j_k},

where L_k is a lower triangular matrix with unit diagonal, P_k is a permutation matrix, and e_{j_k} is the
j_k-th elementary vector, arising from the factorization P_k (M^_k + E_k) P_k' = L_k Lambda_k L_k', in which
E_k is a bounded and non-negative diagonal matrix. Without loss of generality, we assume that P_k = I.

We argue, by contradiction, that {d_k} has the large-step-size property. Assume that {d_k} does not
have this property. From L_k' d_k = e_{j_k} and the fact that L_k is lower triangular with unit diagonal, it is
clear that the j_k-th component of d_k equals one and the components below it vanish.
Moreover, from the failure of large-step-size and the definition (2.4) of M^_k, the first j_k components of
{D_k d_k} remain bounded; this implies that {v_{j_k}} converges to zero.

From the modified Cholesky factorization, the leading j_k x j_k submatrix of M^_k is indefinite, but its
leading (j_k - 1) x (j_k - 1) submatrix is positive definite. But this is impossible for sufficiently large k
because, again using the definition (2.4) of M^_k, the leading j_k x j_k submatrix converges to a matrix of
the form

[ M^*        0       ]
[ 0      |g_{j_k}|   ],

with |g_{j_k}| positive (because of the nondegeneracy assumption). Therefore, we conclude that {d_k} has
large-step-size.
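As a simplified stand-in for the modified Cholesky route, a direction of negative curvature can be
extracted from Matlab's symmetric indefinite factorization by exactly the algebra used above (a sketch,
ours; the eig call is for clarity - in practice one scans the 1 x 1 and 2 x 2 blocks of the block diagonal):

    function d = neg_curvature_ldl(M)
    % Extract d with d'*M*d < 0 from an LDL' factorization of symmetric M.
    [L, Dm, P] = ldl(M);            % P'*M*P = L*Dm*L', Dm block diagonal
    [V, E] = eig(full(Dm));
    [emin, j] = min(diag(E));
    if emin >= 0, d = []; return; end
    v = V(:, j);                    % v'*Dm*v = emin < 0
    d = P * (L' \ v);               % then d'*M*d = v'*Dm*v < 0
    end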
6. Computational Experience. We demonstrate the computational performance of our STIR method
given in FIG. 5. Below we report our experience with the modified Cholesky and the conjugate gradient
implementations, and we examine the sensitivity of the STIR method to the starting point. Finally,
some limited comparisons with SBMIN of LANCELOT [8] are also made.
In the implementation of STIR, we compute s k using a reflective technique as shown in FIG.2. The
exact trust region updating procedure is given below in FIG.6.
Updating Trust Region Size Delta_k

1. If rho_k <= 0, then set Delta_{k+1} to a small fraction of Delta_k (shrink sharply).
2. If rho_k in (0, mu], then set Delta_{k+1} to a fraction of Delta_k (shrink).
3. If rho_k in (mu, eta), then set Delta_{k+1} = Delta_k.
4. If rho_k >= eta, then
   if ||D_k s_k|| is close to Delta_k, increase: Delta_{k+1} >= Delta_k;
   otherwise set Delta_{k+1} = Delta_k.

FIG. 6. Updating Trust Region Size
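The numerical constants of FIG. 6 did not survive in our copy; the sketch below fills them in with
typical trust region values (mu = 0.25, eta = 0.75, and the shrink/expand factors are our assumptions):

    % Sketch of the Delta_k update with assumed constants.
    % rho, Delta, D, s are taken from the current iteration.
    mu = 0.25; eta = 0.75;
    if rho <= 0
        Delta = Delta/4;                  % failed step: shrink sharply
    elseif rho <= mu
        Delta = Delta/2;                  % poor agreement: shrink
    elseif rho < eta
        % moderate agreement: keep Delta
    else
        if norm(D*s) >= 0.9*Delta         % good step that reached the boundary
            Delta = 2*Delta;              % expand
        end
    end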
Our experiments were carried out on a Sun Sparc workstation using the Matlab environment.
The stopping criteria used are as follows. We stop if either the relative change in the function value
falls below a small tolerance, or the scaled gradient D_k^{-2} g_k becomes sufficiently small in norm,
or no negative curvature has been detected for M^_k and the inexact Newton residual tolerance is
satisfied. We also impose an upper bound of 600 on the number of iterations.
We first report the results of the STIR method using the modified Cholesky factorization. Table 5
lists the number of iterations required for some standard test problems (for details of these problems
see [12]). (For all the results in this paper, the number of iterations equals the number of objective
function evaluations.) The problem sizes vary from 100 to 10,000. The results in Table 5 indicate that, for
these test problems at least, the number of iterations increases only slightly, if at all, with the problem
size. Moreover, in comparison to the unconstrained problems, the presence of the bound restrictions does
not seem to increase the number of iterations. This is depicted pictorially in FIG. 7. In this graph, the
problem size is plotted versus iteration count. For each problem, the corresponding points have been
connected to show how the iteration count relates to the problem size.
Our second set of results are for the STIR algorithm but using a conjugate gradient implementation.
We use the algorithm MPCG in FIG.12 to find the directions needed to form the subspace S k . The
stopping condition applied to the relative residual in MPCG is 0.005. The results are shown in
Table 6 and FIG. 8. Again, for these problems the iteration counts are low and steady. The exception
is for the problem VAR C with 10,000 variables, where the iteration count jumps to 86. This is one of
several degenerate problems included in this test set.

Problem          100    200    500    1000   10000
GENROSE C         11     11
GENSING U         24     25     25     26     27
GENSING C
DEGENSING C       28     28     28     28     29
GENWOOD
BROYDEN2A C       14     19     17     19     19
CRAGGLEVY U
VAR C
TABLE 5. STIR method with exact Newton steps: number of iterations

FIG. 7. STIR performance with exact Newton steps (problem size versus iteration count)

FIG. 8. STIR method with inexact Newton steps (problem size versus iteration count)

With a tighter bound eta on the relative residual in
MPCG, we could decrease the number of minimization iterations for this problem (note that the STIR
with exact Newton steps only takes 38 iterations). However, this change would also increase the amount
of computation (conjugate gradient iterations).
Next we include some results which indicate that our STIR method is fairly insensitive to the starting
point. The results in Table 7 were obtained using exact Newton steps on problems of dimension 1000.
The results in Table 8 were obtained using the conjugate gradient implementation, also on problems with
1000 variables. The starting points are as follows: original is the suggested starting point according
to [12]; upper starts all variables at upper bounds; lower starts all variables at the lower bounds; middle
starts at the midpoint between bounds; zero starts each variable at zero (the origin); upper-lower starts the
odd variables at the upper and the even variables at the lower bounds; lower-upper is the reverse of this.
For all of these, we perturb the starting point slightly if necessary to be strictly feasible. Note that for the
problem BROWN3 C, the iteration count is not shown starting at middle and at origin as the gradient is
undefined at both these starting points. These results are also shown graphically in FIG. 9 and FIG. 10.
From these graphs it is clear that both implementations of STIR are fairly robust when it comes to starting
points. This is in contrast to active set methods, where the starting point can have a more dramatic effect
on the iteration count.

Problem          100    200    500    1000   10000
GENROSE U         21     21     21
GENSING U         23     23     24     24     25
GENSING C
CHAINSING U       21     21     21
CRAGGLEVY U       26     26
CRAGGLEVY C       26     26     26     26     27
TABLE 6. STIR method with inexact Newton steps, ||r||/||g|| <= 0.005: number of iterations

                        Starting Point
Problem        original  upper  lower  middle  zero  up-low  low-up
GENROSE C         11       27     33     15     16     43      27
CRAGGLEVY C       26       26            34     37
TABLE 7. STIR method with exact Newton steps, n = 1000: number of iterations
Last we contrast the performance of the STIR method using the conjugate gradient option with the
SBMIN algorithm, an active set method, in the LANCELOT software package [8]. In particular, we
choose problems where negative curvature is present or where it appears that the "active set" at the
solution may be difficult to find. We expect our STIR method to outperform an active set method in these
situations; indeed, we have found this to be the case. For these problems, we use the default settings for
LANCELOT and adjusted our STIR stopping conditions to be comparable if not more stringent.
First consider a constrained convex quadratic problem. The results, given in Table 9, show that our
proposed STIR method is markedly superior (by an order of magnitude) to SBMIN on this problem
("c.g. it" denotes the total number of conjugate gradient iterations). SBMIN takes many iterations on this
problem when the starting point is near some of the bounds - the method mis-identifies the correct active
set at the solution and takes many iterations to recover. Our proposed STIR method, a strictly interior
method, moves directly to the solution without faltering when started at the same point.
                        Starting Point
Problem        original  upper  lower  middle  zero  up-low  low-up
GENSING C         28
DEGENSING C       33       43     37     42     37     37      44
GENWOOD C         28
TABLE 8. STIR method with inexact Newton steps, n = 1000: number of iterations

FIG. 9. STIR method with exact Newton steps at varied starting points

FIG. 10. STIR method with inexact Newton steps at varied starting points

              inexact STIR            SBMIN
            iteration  c.g. it   iteration  c.g. it
TABLE 9. STIR with inexact Newton steps vs. LANCELOT SBMIN on a convex quadratic: number of iterations

                 inexact STIR              SBMIN
Problem        100   1000   10000     100   1000   10000
AUGMLAGN U      34
GENWOOD U       62     67     63      439    952    554
GENWOOD NC
TABLE 10. STIR with inexact Newton steps vs. LANCELOT SBMIN when negative curvature exists: number of iterations

Table 10 summarizes the performances of STIR and SBMIN on a set of constrained problems
exhibiting negative curvature. (Again the problems are from [12], except that the last two have been
constrained differently to display negative curvature.) STIR is significantly better on these problems -
this is probably due to the fact that negative curvature is better exploited in our subspace trust region
approach than in
the Steihaug trust region method, which SBMIN employs. This is consistent with the results presented
in x3, e.g., see Table 4.
7. Conclusion. Based on the trust-region interior reflective (TIR) method in [1], we have proposed
a subspace TIR method (STIR) suitable for large-scale minimization with bound constraints on the
variables. In particular, we consider a two-dimensional STIR in which a subspace is formed from the
scaled gradient and (inexact or exact) Newton steps or a negative curvature direction.
We have designed and reported on a variety of computational experiments. The results strongly
support the different components of our approach: the "subspace idea", the use of our novel affine
scaling matrix, the modified Cholesky factorization and conjugate gradient variations, and the "reflection
technique". Moreover, preliminary experimental comparisons with code SBMIN, from LANCELOT [8],
indicate that our proposed STIR method can significantly outperform an active-set approach for some
large-scale problems.
REFERENCES

- An interior trust region approach for nonlinear minimization subject to bounds.
- On the convergence of reflective Newton methods for large-scale nonlinear minimization subject to bounds.
- A reflective Newton method for minimizing a quadratic function subject to bounds on the variables.
- Minimization of a large scale quadratic function subject to an ellipsoidal constraint.
- The conjugate gradient method and trust regions in large scale optimization.
- Iterative solution of problems of linear and quadratic programming.
- LANCELOT: A Fortran Package for Large-Scale Nonlinear Optimization (Release A).
- The molecule problem: Determining conformation from pairwise distances.
- A family of trust-region-based algorithms for unconstrained minimization with strong global convergence properties.
- Testing a class of methods for solving minimization problems with simple bounds on the variables.
- Approximate solution of the trust region problem by minimization over two-dimensional subspaces.
- Matrix Computations.
- Lanczos Algorithms for Large Symmetric Eigenvalue Computations.
- Inexact Reflective Newton Methods for Large-Scale Optimization Subject to Bound Constraints.
- Practical Optimization.
PIVOTED CAUCHY-LIKE PRECONDITIONERS FOR REGULARIZED SOLUTION OF ILL-POSED PROBLEMS

Abstract. Many ill-posed problems are solved using a discretization that results in a least squares
problem or a linear system involving a Toeplitz matrix. The exact solution to such problems is often
hopelessly contaminated by noise, since the discretized problem is quite ill conditioned, and noise
components in the approximate null-space dominate the solution vector. Therefore we seek an approximate
solution that does not have large components in these directions. We use a preconditioned conjugate
gradient algorithm to compute such a regularized solution. A unitary change of coordinates transforms
the Toeplitz matrix to a Cauchy-like matrix, and we choose our preconditioner to be a low rank
Cauchy-like matrix determined in the course of Gu's fast modified complete pivoting algorithm. We show
that if the kernel of the ill-posed problem is smooth, then this preconditioner has desirable properties:
the largest singular values of the preconditioned matrix are clustered around one, the smallest singular
values, corresponding to the lower subspace, remain small, and the upper and lower spaces are relatively
unmixed. The preconditioned algorithm costs only O(n lg n) operations per iteration for a problem with
n variables. The effectiveness of the preconditioner for filtering noise is demonstrated on three examples.

1. Introduction. In fields such as seismography, tomography, and signal pro-
cessing, the process describing the acquisition of data can often be described by an
integral equation of the first kind

integral from beta_lo to beta_up of t(alpha, beta) f-(beta) d beta = g-(alpha),

where t denotes the kernel, f- the unknown input function, and g- the output. When
it is appropriately discretized, the equation becomes a system of n linear equations of
the form

T f- = g-.

In applications, properties of the kernel and the discretization process often cause T
to have a Toeplitz structure; that is, T_{ij} = t_{i-j}, so that the entries of T are
constant along diagonals.
The discrete inverse problem is to recover f-, given g- and T. However, the continuous
problem is generally ill-posed: i.e., small changes in g- cause arbitrarily large
changes in f-. This is reflected in the discrete problem by ill-conditioning in the matrix
T. The recovery of f- then becomes a delicate matter since the recorded data will
This work was supported by the National Science Foundation under Grant CCR 95-03126.
+ Applied Mathematics Program, University of Maryland, College Park, MD 20742.
++ Department of Computer Science and Institute for Advanced Computer Studies, University of
Maryland, College Park, MD 20742 (oleary@cs.umd.edu).
likely have been contaminated by noise e. In this case, we have measured g rather
than g-, where

(1)   g = g- + e.
Due to the ill-conditioning of T and the presence of noise, exact solution of the linear
system will not lead to a reasonable approximation of -
f . Rather, regularization is
needed in order to compute an approximate solution f . Regularization can be thought
of as exchanging the original, ill-posed problem for a more well-posed problem whose
solution approximates the true solution. Many regularization methods, both direct
and iterative, have been discussed in the literature; see, for example, [12, 15, 9, 5]. In
this paper we will primarily be concerned with regularization via conjugate gradient
iterations [7, 22, 29], where the regularization parameter is the number of iterations.
Toeplitz matrices have several properties convenient for iterative methods like
conjugate gradients: multiplication of a Toeplitz matrix times a vector can be done in
O(n lg n) operations, and circulant preconditioners can be quite efficient [25, 3]. There
are some difficulties, though. The inverse of a Toeplitz matrix does not generally have
Toeplitz structure, and the fast factorization algorithms for Toeplitz matrices can
require as much as O(n^3) flops if pivoting is used to improve stability; see [27, 11, 4],
for example.
To overcome these difficulties, we make use of the fact that Toeplitz matrices are
related to Cauchy-like matrices by fast orthogonal transformations [17, 8, 10]. Cauchy-
like matrices, discussed in detail in x2, permit fast matrix-vector multiplication. But,
in contrast to Toeplitz matrices, the inverse of a Cauchy-like matrix is Cauchy-like,
and complete pivoting can be incorporated in its LDU factorization at a total cost of O(n^2) operations.
The focus of this paper is the development of a Cauchy-like preconditioner that
can be used to accelerate convergence of the conjugate gradient iteration to a filtered
approximate solution of a problem involving a Toeplitz matrix. The regularizing
properties of conjugate gradients and our choice of preconditioner are discussed in x3.
Each iteration of our algorithm takes O(n lg n) operations, and computational issues
are discussed in x4. Section 5 contains numerical results and x6 presents conclusions
and future work.
2. Transformation from Toeplitz to Cauchy-like structure. A Cauchy-
like, or generalized Cauchy, matrix C has the form

(2)   C = [ a_i' b_j / (omega_i - theta_j) ]_{1<=i,j<=n},

where the a_i and b_j are vectors of length l. It can also be defined as the unique
solution of the displacement equation

(3)   Omega C - C Theta = A B',

where Omega = diag(omega_1, ..., omega_n), Theta = diag(theta_1, ..., theta_n),
A = [a_1, ..., a_n]', and B = [b_1, ..., b_n]'. The pair (A, B) is the generator of C with respect
to Omega and Theta, and l <= n is called
the displacement rank. For the matrices and displacement equations of interest here, l = 2.
Property 1. Row and column permutations of Cauchy-like matrices are Cauchy-
like, as are leading principal submatrices.
This property allows pivoting in fast algorithms for factoring Cauchy-like matrices
[17, 8].
Property 2. The inverse of a Cauchy-like matrix is Cauchy-like:

(4)   C^{-1} = [ x_i' w_j / (theta_i - omega_j) ]_{1<=i,j<=n}.

Heinig [17] gives an O(n lg^2 n) algorithm to compute X (with rows x_i') and W
(with rows w_i') from A, B, Theta, and Omega, and explains how, using the FFT, a system
involving a Cauchy-like matrix can be solved in O(n lg^2 n) operations. However, the algorithm
is very fragile. It can be unstable for large values of n and, even when used on a well
conditioned matrix, may require pivoting to maintain stability [18, 1]. Alternatively,
X and W can be determined from the relations

(5)   C X = A,   C' W = -B.
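Continuing the numerical sketch above, Property 2 and the relations (5) can be verified directly; here
X and W are obtained by dense solves rather than by the fast algorithm of [17]:

    X = C \ A;                                % relations (5): C*X = A
    W = -(C.') \ B;                           %               C.'*W = -B
    Cinv = (X*W.') ./ (theta - omega.');      % entries x_i.'*w_j/(theta_i - omega_j)
    err_inv = norm(Cinv - inv(C))             % small for well conditioned C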
The third important property is that Toeplitz matrices also satisfy certain displacement
equations [21, 8] which allow them to be transformed via fast Fourier
transforms into Cauchy-like matrices [17, 8]:

Property 3. Every Toeplitz matrix T satisfies an equation of the form

(6)   Z_1 T - T Z_{-1} = A B',

where Z_delta denotes the delta-circulant down-shift matrix. The Toeplitz matrix T is
orthogonally related to a Cauchy-like matrix

C~ = F T (S_0 F)^H

that satisfies the displacement equation (3) with Omega = S_1 and Theta = S_{-1}, where

S_1 = diag(1, e^{2 pi i/n}, ..., e^{2 pi i (n-1)/n}),
S_{-1} = diag(e^{i pi/n}, e^{3 i pi/n}, ..., e^{(2n-1) i pi/n}),
S_0 = diag(1, e^{i pi/n}, ..., e^{(n-1) i pi/n}),

and F is the normalized inverse discrete Fourier transform matrix defined by

F = (1/sqrt(n)) [ exp(2 pi i (j-1)(k-1)/n) ]_{1<=j,k<=n}.
Gohberg, Kailath, and Olshevsky [8] suggest a stable O(l n^2) partial pivoting
algorithm to factor C~. Sweet and Brent [26] show, however, that element
growth in this algorithm depends not only on the magnitude of L and U, but on the
generator for the Cauchy-like matrix. For our test matrices, partial pivoting alone
did not provide the rank revealing information that we need.

Gu [10] presents an algorithm that can perform a fast O(l n^2) variation of LU
decomposition with complete pivoting. Recall that in complete pivoting, at every
elimination step one chooses the largest element in the current submatrix as the
pivot in order to reduce element growth. Gu proposes instead that one find an entry
sufficiently large in magnitude by considering the largest 2-norm column of the generator
corresponding to the part that remains to be factored at each step. This algorithm
computes the factorization [10, Alg. 2] using only the readily determined
generators (see x4), and Gu shows that it is efficient and numerically stable, provided
that element growth in the computed factorization is not large. For our purposes it
was convenient to rescale the factors to obtain the equivalent factorization

P C~ Q = L D U,

where P and Q are permutation matrices, L and U are unit triangular, and D is diagonal.
3. Regularization and preconditioning. If we wanted to solve the linear system
T f = g exactly, we would be finished: using the transformation to Cauchy-like
form and the fast factorization algorithms described above, computing this solution
would be an easy task. But the solution we seek is an approximate one, having noise
filtering properties, so we choose to use an iterative method called CGLS which, in
conjunction with an appropriate preconditioner, produces suitably filtered solutions.
Three assumptions will guide our analysis:
1. The matrix T has been normalized so that its largest singular value is of order
1.
2. The uncontaminated data vector g- satisfies the discrete Picard condition; i.e.,
the spectral coefficients of g- decay in absolute value like the singular values
[30, 14].
3. The additive noise is zero-mean white Gaussian. In this case, the components
of the error e are independent random variables, normally distributed with
mean zero and variance epsilon^2.
We need to define the signal and noise subspaces. Using (1), let T = U Sigma V' be
the singular value decomposition of T, and expand the data and the noise in the basis
created by the columns of U:

g- = sum_{i=1}^{n} gamma-_i u_i,   e = sum_{i=1}^{n} eta_i u_i.

Under the white noise assumption, the coefficients eta_i
are roughly constant in size, while the discrete Picard condition tells us that the gamma-_i
go to zero at least as fast as the singular values sigma_i. Thus, components for which gamma-_i
is of the same order as eta_i are obscured by noise. Let m- be such that |gamma-_i| >> |eta_i| for
i <= m-; we say that the last n - m- columns of U span the noise subspace, while the other columns span the
signal subspace. The basis for the signal subspace is further partitioned into the first
m columns and the remaining m- - m, which correspond to a transition subspace that
is generally difficult to resolve unless there is a gap in the singular value spectrum.
3.1. Regularization by preconditioned conjugate gradients. The standard
conjugate gradient (CG) method [19] is an iterative method for solving systems
of linear equations for which the matrix is symmetric positive definite. If the matrix
is not symmetric positive definite, one can use a variant of standard CG which
solves the normal equations in factored form. We refer to the resulting algorithm as
CGLS [19]. If the discrete Picard condition holds, then CGLS acts as an iterative
regularization method with the iteration index taking the role of the regularization
parameter [7, 13, 15]. Convergence is governed by the spread and clustering of the
singular values [28]. Therefore, preconditioning is often applied in an effort to cluster
the singular values, thus speeding convergence.
In the context of an ill-conditioned matrix T , we require a preconditioner for
CGLS which clusters the largest m singular values while leaving the small singular
values, and with them, the noise subspace, relatively unchanged. In this case, the first
few iterations of CGLS will quickly capture the solution lying within the subspace
spanned by the first m columns of V . A modest number of subsequent iterations will
provide improvement over the transition subspace, without significant contamination
from the noise subspace.
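For reference, a bare-bones left-preconditioned CGLS (our own sketch; in the fast implementation the
four function handles would be the O(n lg n) products of x4):

    function y = cgls(Cfun, CTfun, Minvfun, MinvTfun, b, k)
    % k steps of CGLS on (Minv*C)'*(Minv*C)*y = (Minv*C)'*(Minv*b).
    % Cfun/CTfun apply C and C'; Minvfun/MinvTfun apply inv(M) and inv(M)'.
    r  = Minvfun(b);            % residual of Minv*C*y = Minv*b, with y0 = 0
    g  = CTfun(MinvTfun(r));    % gradient of the least squares functional
    p  = g;  y = zeros(size(g));  gg = g'*g;
    for it = 1:k
        q     = Minvfun(Cfun(p));
        alpha = gg/(q'*q);
        y     = y + alpha*p;
        r     = r - alpha*q;
        g     = CTfun(MinvTfun(r));
        ggnew = g'*g;
        p     = g + (ggnew/gg)*p;
        gg    = ggnew;
    end
    end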
3.2. The preconditioner. Given the Toeplitz matrix T, let C~ = F T (S_0 F)^H be
its corresponding Cauchy-like matrix. Solving T f = g is then equivalent to solving

C~ y~ = g~,   where g~ = F g and f = (S_0 F)^H y~.

Note that since F and S_0 are unitary matrices, C~^H C~ = (S_0 F) T^H T (S_0 F)^H;
that is, T and C~ have the same singular values, and there is no mixing of
signal and noise subspaces.
A factorization of C~ using a modified complete pivoting strategy may lead to an
interchange of rows (specified by a permutation matrix P) and columns (specified by
a permutation matrix Q). Setting C = P C~ Q, the problem we wish to solve is

C y = P g~,   y~ = Q y.

We choose a preconditioner M for the left, so that we solve M^{-1} C y = M^{-1} P g~,
and apply CGLS to the corresponding normal equations

(M^{-1} C)^H (M^{-1} C) y = (M^{-1} C)^H M^{-1} P g~.
Our choice of preconditioner M is derived from the leading m x m submatrix of
Gu's modified complete pivoting LDU factorization of the matrix C as follows. Let
C = L D U, and write this equation in block form, where the upper left blocks are
m x m:

[ C_1  C_2 ]   [ L_1  0   ] [ D_1  0   ] [ U_1  U_2 ]
[ C_3  C_4 ] = [ L_2  L_3 ] [ 0    D_2 ] [ 0    U_3 ].

Here L_1, L_3 are lower triangular, U_1, U_3 are upper triangular, and D_1 and D_2 are
diagonal. We choose as our preconditioner the matrix

    [ L_1  0 ] [ D_1  0 ] [ U_1  U_2 ]
M = [ L_2  I ] [ 0    I ] [ 0    I   ].
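Applying M^{-1} then costs two triangular solves of order m plus a correction with the off-diagonal
blocks; a dense sketch (ours; in the fast version the products with L_2 and U_2 are not formed explicitly):

    function z = apply_Minv(L1, L2, D1, U1, U2, r, m)
    % Solve M*z = r where M = [L1 0; L2 I]*[D1 0; 0 I]*[U1 U2; 0 I].
    r1 = r(1:m); r2 = r(m+1:end);
    a1 = L1 \ r1;                 % forward solve, order m
    a2 = r2 - L2*a1;              % correction with the (2,1) block
    b1 = D1 \ a1;
    z2 = a2;
    z1 = U1 \ (b1 - U2*z2);       % back solve, order m
    z  = [z1; z2];
    end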
3.3. Properties of the preconditioner. We begin with some theorems about
the clustering of the singular values of M^{-1}C. It is useful to decompose the matrix
(M^{-1}C)(M^{-1}C)^H into the matrix sum

(11)   (M^{-1}C)(M^{-1}C)^H = E_1 + E_2,   E_i = X_i X_i^H,

where X_1 and X_2 contain the first m and the last n - m columns of M^{-1}C, respectively,
padded with zero columns, using the block partitioning of the previous section.
Let s_i be the sum of the absolute values of the entries in row i of E_2,
and let s- be the largest of these quantities. The
case of interest to us is when these quantities are reasonably small.

We denote the k-th largest singular value of a matrix Z by sigma_k(Z), and the k-th
largest eigenvalue by lambda_k(Z).

THEOREM 3.1. The m largest singular values of M^{-1}C lie
in the interval [1, (1 + s-)^{1/2}].

Proof: The upper bound can be obtained by applying Gershgorin's theorem
[24][IV.2.1] to bound the eigenvalues of the matrix (M^{-1}C)^H (M^{-1}C), and then taking
square roots. The lower bound is somewhat more interesting.

The matrices E_1 and E_2 are Hermitian positive semidefinite, and from the representation
E_i = X_i X_i^H it is clear that they have rank at most m and n - m, respectively.
By Corollary IV.4.9 [24], we know that

(12)   lambda_k(E_1 + E_2) >= lambda_k(E_1) + lambda_n(E_2) >= lambda_k(E_1).

We need to show that lambda_m(E_1) >= 1. If Y_1 and Y_2 are two n x n matrices and the rank
of Y_2 is n - m, then a theorem of Weyl [20, Thm. 3.3.16] implies
sigma_n(Y_1 + Y_2) <= sigma_m(Y_1) + sigma_{n-m+1}(Y_2) = sigma_m(Y_1).

Now set Y_1 = X_1 and Y_2 = I - X_1. Since the first m columns of C coincide with the first
m columns of M, the first m columns of M^{-1}C are the first m unit vectors, so Y_2 has rank
n - m. The eigenvalues of E_1 are the squares of the singular values of Y_1, and Y_1 + Y_2
is the n x n identity matrix, so by Weyl's result we obtain sigma_m(Y_1) >= 1. Thus,
our conclusion follows from (12). 2
We now study the extent to which preconditioning by M mixes the signal and
noise subspaces.

THEOREM 3.2. Let k be the dimension of the noise subspace, and let V in C^{n x m}
contain the leading m right singular vectors of M^{-1}C. Then the portion of the noise
subspace of C captured by the range of V is bounded in norm by sigma_{n-k+1}(M^{-1}C).

Proof: Using the singular value decompositions of C and M^{-1}C, and the fact that
the matrices of singular vectors have orthonormal columns, it follows that the quantity
in question is bounded by sigma_{n-k+1}(M^{-1}C).

Next we show that sigma^_j <= sigma_j for the sigma_j corresponding to the noise subspace, and thus
sigma_{n-k+1}(M^{-1}C) is small. Thus, if C_1 is well-conditioned, then we are guaranteed that the
signal and noise subspaces remain unmixed. 2

THEOREM 3.3. The (m+i)-th singular value of each of the matrices C and M^{-1}C
lies in the interval [0, lambda_i(E_2)^{1/2}], where E_2 denotes the second term of the
corresponding decomposition (11).

Proof: A theorem due to Weyl for Hermitian matrices Y_1 and Y_2 states that

lambda_{i+j-1}(Y_1 + Y_2) <= lambda_i(Y_1) + lambda_j(Y_2).

Now from the decomposition in Equation (11), we see that lambda_{m+1}(E_1) = 0, and thus

lambda_{m+i}((M^{-1}C)(M^{-1}C)^H) <= lambda_{m+1}(E_1) + lambda_i(E_2) = lambda_i(E_2).

Also, C C^H admits an analogous rank-m plus rank-(n-m) decomposition, so we likewise
obtain the bound for C. The proof is completed by taking square roots. 2

These theorems show that the preconditioner will be effective if C_1 is well-conditioned
and if the row sums of E_2 are small. We now discuss to
what extent these conditions hold for integral equation discretizations.
Property 4. Let C~ be a Cauchy-like matrix corresponding to a real Toeplitz
matrix T that results from discretization of a smooth kernel t, normalized so that
the maximum element of T is one. Then for n sufficiently large, there exist epsilon << 1
and m << n such that all elements of C~ are less than epsilon in magnitude except for those
located in four corner blocks of total dimension m x m.
To understand why this is true, recall that if A~ and B~ are the generators of C~,
where A~ B~' = S_1 C~ - C~ S_{-1}, the magnitude of the (k, j)-entry of C~ is

|a~_k' b~_j| / |omega_k - theta_j|.

Thus the largest entries in C~ appear where the numerator is large or the denominator is small.
FIG. 1. Plots revealing the locations of the elements of the matrix [1/|omega_k - theta_j|] that exceed
several tolerance levels.
The denominator of C~_kj is |omega_k - theta_j|, which is bounded above
by 2. Its smallest values are attained for |k - j| near 0 or n, but there are very few
small values. In fact, direct computation shows that for n >= 100, at least 95% of the
entries in the first row have denominators within an order of magnitude of the maximum value 2,
and the other rows have even more in this range. Figure 1 plots the locations of the values of the matrix
[1/|omega_k - theta_j|] above given tolerance levels. As expected, there are very few large values,
and these occur only near the diagonal and the corners of the matrix.
Now consider the numerators. The formulas for A and B are determined from
direct computation in (6). The first column of A is the first unit vector, and the
second column is given by

(13)   a = ( 2 t_0, t_1 + t_{1-n}, t_2 + t_{2-n}, ..., t_{n-1} + t_{-1} )'.

The first column of B is

(14)   b = ( t_{n-1} - t_{-1}, t_{n-2} - t_{-2}, ..., t_1 - t_{1-n}, 0 )',

and the second column is the last unit vector. The generators for C~ are then A~ = F A, and B~ is obtained
from F S_0 B by conjugation, with F and S_0 as described in Property 3. Therefore, for most entries the
numerator |a~_k' b~_j| is governed by |alpha_k eta_j|, where alpha_k
is the k-th entry in the second column of A~ and eta_j is the j-th entry in the
first column of F S_0 B. Thus it is the normalized inverse Fourier coefficients of the
second column of A and the first column of S_0 B which determine the magnitude of the
numerators, and if t is smooth, these will be large only for small indices j and k.
Therefore, |C~_kj| < epsilon
away from the corners. Thus C~ can be permuted to contain the large elements in
the upper left block, and any pivoting strategy that produces such a permutation will
give a suitable preconditioner for our scheme.
We have observed that if Gu's algorithm is applied to a matrix with this structure,
then C 1 will contain the four corner blocks. The interested reader is referred to [10]
for details on the complete pivoting strategy, but the key fact is that Gu makes his
pivoting decisions based on the size of elements in the generator corresponding
to the block that remains to be factored. The resulting Cauchy-like preconditioner
C 1 for the matrix C then has the properties that the first m singular values of the
preconditioned matrix are clustered, and that the invariant subspace corresponding
to small singular values of C is not much perturbed. Thus we expect that the initial
iterations of CGLS will produce a solution that is a good approximation to the noise-free
solution.
4. Algorithmic issues. Our algorithm is as follows:

Algorithm 1: Solving T f ~ g

1. Compute the generators A~ and B~ for the matrix C~ = F T (S_0 F)^H using (13) and (14).
2. Determine an index m to define the size of the partial factorization of C~, and factor P C~ Q = L D U.
3. Set C = P C~ Q and g~ = F g.
4. Determine the m x m leading principal submatrix C_1 of C and set

       [ L_1  0 ] [ D_1  0 ] [ U_1  U_2 ]
   M = [ L_2  I ] [ 0    I ] [ 0    I   ].

5. Compute an approximate solution y to M^{-1} C y = M^{-1} P g~ using a few steps of CGLS.
6. The approximate solution in the original coordinate system is f = (S_0 F)^H Q y.
When to stop the CGLS iteration in order to get the best approximate solution
is a well-studied but open question (for instance, see [16] and the references therein).
We do not solve this problem, but we consider the other algorithmic issues in the
following subsections.
4.1. Determining the size of C_1. The choice of the parameter m determines
the number of clustered singular values in the preconditioned system. It influences
the amount of work per iteration but, perhaps more importantly, the mixing of signal
and noise subspaces. We use a simple heuristic in our numerical experiments: we
compute the Fourier transform of the data vector g and determine the index m at
which the Fourier coefficients start to level off. This is presumed to be the noise level,
and the factorization is truncated there.
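A simple version of this heuristic (ours; the factor 5 and the tail length are assumptions):

    ghat = fft(g);                             % Fourier coefficients of the data
    mag  = abs(ghat(1:floor(end/2)));          % low-to-high frequency ordering
    noisefloor = median(mag(end-9:end));       % estimate noise level from the tail
    m = find(mag > 5*noisefloor, 1, 'last');   % index where coefficients level off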
4.2. Computing the preconditioner. Since C~ satisfies the displacement equation
(3), so does its permuted, truncated version:

Omega_1 C_1 - C_1 Theta_1 = A_1 B_1',

where Omega_1 and Theta_1 are the leading principal m x m submatrices of P Omega P' and Q' Theta Q,
respectively, and A_1 and B_1 contain the first m rows of P A~ and Q' B~, respectively.
Thus the matrix C_1^{-1} has entries

C_1^{-1} = [ x_i' w_j / (theta~_i - omega~_j) ]_{1<=i,j<=m},

where omega~_i and theta~_j are the elements of Omega_1 and Theta_1, respectively,
and, from (5), the vectors x_i' and w_j' are the rows of X_1 and W_1 defined as

C_1 X_1 = A_1,   C_1' W_1 = -B_1.

Computing X_1 and W_1 costs O(m^2) operations, given the factorization of C_1 and the
matrices A_1 and B_1.
4.3. Applying the preconditioner. Let r be a vector of length m, and assume
that no pivoting was done when C~ was factored. Heinig [17] states that C_1^{-1} r may
be written as

C_1^{-1} r = sum_{j=1}^{l} diag((X_1)_j) C_0 ((W_1)_j . r),

where (X_1)_j is the jth column of X_1, (W_1)_j is the jth column of W_1, and C_0 is the
Cauchy matrix C_0 = [ 1/(theta~_i - omega~_j) ]_{1<=i,j<=m}. The notation . denotes the
componentwise product of two vectors.

Fast multiplication by the matrix C_0 requires finding the coefficients of a polynomial
whose roots are the elements of Theta_1 [6], and this process can be unstable.
To avoid this difficulty, realizing that the elements of S_{-1} and S_1 are roots of unity,
we extend C_0 to a matrix of size n x n satisfying the displacement equation (2) with
Omega = S_1 and Theta = S_{-1}, and we develop a mathematically equivalent algorithm for
computing C_1^{-1} r:

Algorithm 2: Forming z = C_1^{-1} r

For each generator column j = 1, ..., l do
1. Compute r- = (W_1)_j . r.
2. Extend r- by zeros so that r- is of length n.
3. Set r- = C_0 r-.
4. Truncate r- to length m.
5. z = z + (X_1)_j . r-.

The product C_1^{-T} r can be computed similarly.
If pivoting was done during factorization, the vector r- should be multiplied by Q
after Step 2 and by P after Step 4.

This formulation allows C_1^{-1} r to be computed in O(n lg n) operations in a stable
manner, using an observation of Finck, Heinig, and Rost [6] that any Cauchy-like
matrix can be factored as

C_0 = V_Omega^{-1} H V_Theta^{-T},

where V_Omega and V_Theta are the Vandermonde matrices whose second columns contain
the diagonal elements of Omega and Theta, respectively. The matrix H is a Hankel matrix,
i.e., one in which the elements on the antidiagonals are constant; its first row is equal
to the coefficients of the polynomial whose roots are the diagonal elements of Theta, except
for the leading one.

FIG. 2. Uncontaminated data vector (left) and exact solution vector (right) for Example 1.

TABLE 1. Minimum relative errors achieved for various values of m, Example 1.
3,\Omega and \Theta contain roots of unity, products of the matrix C 0
with a vector are very simple to compute:
has a single non-zero diagonal extending south-west to
north-east.
the normalized, discrete, inverse Fourier Transform matrix.
is the matrix product FS 0 , where the diagonal matrix S 0 is defined in
Property 3.
Thus products C 0 - r can be computed stably in O(n lg n) operations. Since at
most, the preconditioner can be applied to a vector in O(n lg n) operations, provided
one knows X and W . This is the same order as the number of operations to apply
C to a vector, since
the product of a Toeplitz matrix with a
vector can be computed in O(n lg n) operations by embedding the matrix in a circulant
matrix [2]. Thus, each iteration of CGLS costs O(n lg n) operations.
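The circulant embedding is easily coded; the following sketch (ours) multiplies T, given its first column
c and first row r (both column vectors, with c(1) = r(1)), by a vector in O(n lg n) operations:

    function y = toepmul(c, r, x)
    % Compute T*x, T = toeplitz(c, r), via a 2n-point circulant embedding.
    n   = length(x);
    col = [c; 0; r(end:-1:2)];          % first column of the 2n-by-2n circulant
    y   = ifft(fft(col) .* fft([x; zeros(n,1)]));
    y   = y(1:n);
    if isreal(c) && isreal(r) && isreal(x), y = real(y); end
    end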
FIG. 3. Fourier coefficients of the noisy data for Example 1.

FIG. 4. Relative error in computed solution for Example 1.

FIG. 5. Singular values of C (solid line) and M^{-1}C (x's) for Example 1, m = 43.

5. Numerical results. In this section we summarize results of our algorithm
on three test problems, using Matlab and IEEE double precision floating point arithmetic.
Our measure of success in filtering noise is the relative error: the 2-norm of
the difference between the computed estimate f and the vector f- corresponding to
zero noise, divided by the 2-norm of f-. In each case, we apply the CGLS iteration
with a Cauchy-like preconditioner of size m; the value m = 0 corresponds to no
preconditioning.
5.1. Signal processing example. As mentioned in the introduction, Toeplitz
matrices often arise in signal processing (one-dimensional image reconstruction problems).
As an example, we consider the 100 x 100 Toeplitz matrix T defined in Example 4 of [2].
The authors note that such matrices
may occur in image restoration contexts as "prototype problems" and are used to
model certain degradations in the recorded image.
The condition number of T is approximately 2.4 x 10^6. We wish to solve the
equation T f = g, where g denotes the noisy data vector for which the noise level
||e||_2 / ||g-||_2 is about 10^{-3}. The uncorrupted data g- and the exact numerical
solution f- are displayed in Figure 2.(1) The Fourier coefficients of g are shown in Figure 3. Using
these coefficients, an appropriate cutoff value m was determined as explained in x4.1.

The solid line in Figure 4 shows the convergence of CGLS on the unpreconditioned
Toeplitz system, where the ring on the line indicates the iteration at which the minimal
value of the relative error, 2.13 x 10^{-1}, was achieved. Convergence of CGLS on the
preconditioned system involving the Cauchy-like matrix is also shown in Figure 4 for
two different values of m. Table 1 gives an idea of the sensitivity of the algorithm
to the choice of m, with m = 43 being optimal in the sense of achieving minimal
relative error among all choices of preconditioner. The number of iterations for the
preconditioned system is substantially less than for the unpreconditioned one.

The singular values of T and of the preconditioned matrix M^{-1}C for m = 43 are
shown in Figure 5. As predicted by the theory in x3.3, the first 43 singular
values of M^{-1}C are clustered very tightly around one, and the smallest singular values
have been left virtually untouched.

(1) We first determined f- using Matlab's square function, then computed g- = T f-.
5.2. Phillips test problem. Next we consider the discretized version of the
well-known first-kind Fredholm integral equation studied by D. L. Phillips [23].

FIG. 6. Uncontaminated data vector (left) and exact solution vector (right) for Example 2.

FIG. 7. Fourier coefficients of the noisy data for Example 2, two different scales.

m     minimum rel. error    achieved at iteration
48    4.68 x 10^{-2}        26
TABLE 2. Minimum relative errors achieved for various values of m, Example 2.

FIG. 8. Relative error in computed solution for Example 2.

FIG. 9. Singular values of C (solid line) and M^{-1}C (x's) for Example 2.

FIG. 10. Uncontaminated data vector (left) and exact solution vector (right) for Example 3.

The kernel of the integral equation is given by t(alpha, beta) = theta(alpha - beta), where theta is
defined by

theta(x) = 1 + cos(pi x / 3) for |x| <= 3,   theta(x) = 0 otherwise,
and the limits of integration are -6 and 6. We used Hansen's Matlab Regularization
Toolbox, described in [15],to generate the corresponding 400 \Theta 400 symmetric Toeplitz
matrix whose condition number was approximately 1 \Theta 10 8 . In this code, the integral
equation is discretized by the Galerkin method with orthonormal box functions. The
uncorrupted data vector is shown in Figure 6 2 . The noise level was 1 \Theta 10 \Gamma2 for this
problem.
It was difficult to determine the appropriate cutoff value m, as Figure 7 indicates, but Table 2 and Figure 8 show that the savings in the number of iterations to convergence can be substantial. In addition, for several values of m, the minimum relative error is somewhat lower than the minimum obtained for the unpreconditioned problem. For example, after 293 iterations, CGLS on the unpreconditioned problem achieved a minimum relative error of 5.71 × 10^{-2}. For a well-chosen m, however, a minimum relative error of 3.05 × 10^{-2} was reached in only 9 iterations.
Figure 9 illustrates that, as in Example 1, the first m singular values of the preconditioned matrix are clustered around one and the singular values corresponding to the noise subspace remain almost unchanged.
5.3. Non-symmetric example. Finally, since both previous examples involve symmetric Toeplitz matrices, for our third example we chose to work with a 100 × 100 nonsymmetric Toeplitz matrix T.
² We set ḡ using the Toolbox; f̄ was taken to be the exact numerical solution of the problem.
Fig. 11. Fourier coefficients of the noisy data for Example 3.
m    minimum rel. error achieved    at iter.
Table 3
Minimum relative errors achieved for various values of m, Example 3.
Fig. 12. Relative error in computed solution for Example 3.
Fig. 13. Singular values of C (solid line) and M^{-1}C (×'s) for Example 3.
The condition number of T is approximately 5.31 × 10^{11}, making it the worst conditioned of the three matrices. We first defined the exact solution f̄ shown in Figure 10. The uncorrupted data was obtained by calculating ḡ = T f̄, and is also shown in Figure 10. White noise was added to ḡ to obtain the noisy data whose Fourier coefficients are shown in Figure 11, where the noise level was determined to be 1 × 10^{-3}.
As Figure 12 indicates, the minimum relative error obtained with no preconditioning was 2.13 × 10^{-1} in 76 iterations. For values of m close to 40, however, the preconditioned system converges in fewer than 10 iterations to the same or better minimum relative error. We also observe from Figure 13 that in addition to clustering the first m singular values around one, preconditioning has the benefit of reducing the condition number.
6. Conclusions. We have developed an efficient algorithm for computing regularized
solutions to ill-posed problems with Toeplitz structure. This algorithm makes
use of an orthogonal transformation to a Cauchy-like system and iterates using the
CGLS algorithm preconditioned by a rank-m partial factorization with pivoting. By
exploiting properties of the transformation, we showed that each iteration of CGLS
costs only O(n lg n) operations for a system of n variables.
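The two computational kernels behind this operation count can be sketched in a few lines of Python; this is a generic CGLS with the standard FFT-based Toeplitz product, not the authors' code, and the application of the preconditioner M is left abstract.

import numpy as np

def toeplitz_matvec(col, row, x):
    # O(n log n) Toeplitz product via embedding into a 2n x 2n circulant.
    n = len(x)
    c = np.concatenate([col, [0.0], row[:0:-1]])   # circulant's first column
    y = np.fft.ifft(np.fft.fft(c) *
                    np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return np.real(y[:n])

def cgls(matvec, rmatvec, b, n, iters):
    # CG applied implicitly to the normal equations A^T A x = A^T b.
    x = np.zeros(n)
    r = b - matvec(x)
    s = rmatvec(r)
    p, gamma = s.copy(), s @ s
    for _ in range(iters):
        q = matvec(p)
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = rmatvec(r)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

Right preconditioning with M then amounts to passing matvec = lambda y: A(solve(M, y)) and rmatvec = lambda r: solveT(M, At(r)) and recovering the solution as solve(M, y) afterwards.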
Our theory predicts that for banded Toeplitz matrices we can expect the preconditioner
determined in the course of Gu's fast modified complete pivoting algorithm
to cluster the largest singular values of the preconditioned matrix around one, keep
the smallest singular values small, and not mix the signal and noise subspaces. Thus
CGLS produces a good approximate solution within a small number of iterations.
Our results illustrate the effectiveness of our preconditioner for an optimal value of
m, and for values in a neighborhood of the optimal one. Hence, our algorithm is both
efficient and practical.
Determining the optimal value of m can be difficult, and it appears better to
underestimate the value rather than to overestimate it. Advances in computing truly
rank-revealing factorizations of Cauchy-like matrices will yield corresponding advances
in our algorithm.
Similar ideas are valid for preconditioners of the block-diagonal form M = diag(C₁, C₂), where C₁ and C₂ are both Cauchy-like. In practice, C₂ can be determined by computing a partial factorization of the trailing submatrix of C remaining after C₁ is removed. This method saves time in the precomputation of M, but more iterations may be required for convergence.
In future work, we plan to study the use of Cauchy-like preconditioners for two
dimensional problems, in which T is block Toeplitz with Toeplitz blocks, and for other
matrices related to Cauchy-like matrices.
--R
Personal Communication.
Circulant preconditioned Toeplitz least squares itera- tions
An optimal circulant preconditioner for Toeplitz systems
A lookahead Levinson algorithm for general Toeplitz systems
Regularization by truncated total least squares
An inversion formula and fast algorithms for Cauchy-Vandermonde matrices
Equivalence of regularization and truncated iteration in the solution of ill-posed problems
Fast Gaussian elimination with partial pivoting of matrices with displacement structure
Theory of Tikhonov Regularization for Fredholm Equations of the First Kind
Stable and efficient algorithms for structured systems of linear equations
Regularization with differential operators
Preconditioned iterative regularization for ill-posed problems
The discrete Picard condition for discrete ill-posed problems
The use of the L-curve in the regularization of discrete ill-posed problems
Inversion of generalized Cauchy matrices and other classes of structured matrices
Methods of conjugate gradients for solving linear systems
Topics in Matrix Analysis
Displacement ranks of matrices and linear equations
Solution of systems of linear equations by minimized iterations
A technique for the numerical solution of certain integral equations of the first kind
Matrix Perturbation Theory
A proposal for Toeplitz matrix calculations
analysis of a fast partial pivoting method for structured matrices
The use of pivoting to improve the numerical performance of Toeplitz matrix algorithms
The rate of convergence of conjugate gradients
Pitfalls in the numerical solution of linear ill-posed problems
--TR | cauchy-like;toeplitz;conjugate gradient;least squares;preconditioner;regularization;ill-posed problems |
333696 | Learning and Design of Principal Curves. | AbstractPrincipal curves have been defined as self-consistent smooth curves which pass through the middle of a d-dimensional probability distribution or data cloud. They give a summary of the data and also serve as an efficient feature extraction tool. We take a new approach by defining principal curves as continuous curves of a given length which minimize the expected squared distance between the curve and points of the space randomly chosen according to a given distribution. The new definition makes it possible to theoretically analyze principal curve learning from training data and it also leads to a new practical construction. Our theoretical learning scheme chooses a curve from a class of polygonal lines with $k$ segments and with a given total length to minimize the average squared distance over $n$ training points drawn independently. Convergence properties of this learning scheme are analyzed and a practical version of this theoretical algorithm is implemented. In each iteration of the algorithm, a new vertex is added to the polygonal line and the positions of the vertices are updated so that they minimize a penalized squared distance criterion. Simulation results demonstrate that the new algorithm compares favorably with previous methods, both in terms of performance and computational complexity, and is more robust to varying data models. | Introduction
Principal component analysis is perhaps the best-known technique in multivariate analysis
and is used in dimension reduction, feature extraction, and in image coding and enhancement.
Consider a d-dimensional random vector X with finite second moments.
The first principal component line for X is a straight line which has the property that the
expected value of the squared Euclidean distance from X to the first principal component
line is minimum among all straight lines. This property makes the first principal component
a concise one-dimensional approximation to the distribution of X, and the projection of
X to this line gives the best linear summary of the data. For elliptical distributions the
first principal component is also self consistent, i.e., any point of the line is the conditional
expectation of X over those points of the space which project to this point.
Hastie [1] and Hastie and Stuetzle [2] (hereafter HS) generalized the self consistency property of principal components and introduced the notion of principal curves. Let f(t) = (f₁(t), …, f_d(t)) be a smooth (infinitely differentiable) curve in R^d parametrized by t ∈ R, and for any x ∈ R^d let t_f(x) denote the parameter value t for which the distance between x and f(t) is minimized (see Figure 1). More formally, the projection index t_f(x) is defined by
t_f(x) = sup{ t : ‖x − f(t)‖ = inf_τ ‖x − f(τ)‖ },    (1)
where ‖·‖ denotes the Euclidean norm in R^d.
Figure 1: Projecting points to a curve.
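For a densely sampled or polygonal curve the projection index can be approximated numerically; the short Python sketch below (ours, with illustrative names) evaluates the curve on a grid and takes the largest minimizing parameter, matching the sup in (1).

import numpy as np

def projection_index(x, f, ts):
    pts = np.array([f(t) for t in ts])       # curve samples, shape (m, d)
    d2 = np.sum((pts - x) ** 2, axis=1)      # squared distances to x
    return ts[np.flatnonzero(d2 == d2.min()).max()]   # largest minimizer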
By the HS definition, the smooth curve f(t) is a principal curve if
(i) f does not intersect itself,
(ii) f has finite length inside any finite ball of R^d, and
(iii) f is self-consistent, i.e., f(t) = E[X | t_f(X) = t].
Intuitively speaking, self-consistency means that each point of f is the average (under the
distribution of X) of points that project there. Thus, principal curves are smooth self-consistent
curves which pass through the "middle" of the distribution and provide a good
one-dimensional nonlinear summary of the data.
Based on the self consistency property, HS developed an algorithm for constructing principal
curves. Similar in spirit to the Generalized Lloyd Algorithm (GLA) of vector quantizer
design [3], the HS algorithm iterates between a projection step and an expectation step.
When the probability density of X is known, the HS algorithm for constructing principal curves is the following.
Step 0 Let f^(0)(t) be the first principal component line for X. Set j = 0.
Step 1 Set f^(j+1)(t) = E[X | t_{f^(j)}(X) = t].
Step 2 Set t_{f^(j+1)}(x) = sup{ t : ‖x − f^(j+1)(t)‖ = inf_τ ‖x − f^(j+1)(τ)‖ } for all x ∈ R^d.
Step 3 Evaluate the expected squared distance Δ(f^(j+1)) between X and f^(j+1). Stop if its change from Δ(f^(j)) is less than a certain threshold. Otherwise, increase j by 1 and go to Step 1.
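For a finite data set, Step 1 has to be estimated from samples; the following Python sketch shows one expectation step with a Gaussian kernel smoother standing in for the conditional expectation (the HS implementation uses locally weighted running lines or smoothing splines instead, and all names here are illustrative).

import numpy as np

def expectation_step(X, t_proj, t_grid, bw):
    # Kernel-weighted average of the points whose projection index is near t.
    w = np.exp(-0.5 * ((t_grid[:, None] - t_proj[None, :]) / bw) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ X        # new curve points f(t) on the grid, shape (m, d)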
In practice, the distribution of X is often unknown, but a data set consisting of n samples
of the underlying distribution is known instead. In the HS algorithm for data sets, the
expectation in Step 1 is replaced by a smoother (locally weighted running lines [4]) or a
nonparametric regression estimate (cubic smoothing splines). HS provide simulation examples
to illustrate the behavior of the algorithm, and describe an application in the Stanford
Linear Collider Project.
Alternative definitions and methods for estimating principal curves have been given subsequent
to Hastie and Stuetzle's groundbreaking work. Banfield and Raftery [5] modeled
the outlines of ice floes in satellite images by closed principal curves and they developed a
robust method which reduces the bias and variance in the estimation process. Their method
of clustering about principal curves led to a fully automatic method for identifying ice floes
and their outlines. On the theoretical side, Tibshirani [6] introduced a semiparametric model
for principal curves and proposed a method for estimating principal curves using the EM
algorithm. Close connections between principal curves and Kohonen's self-organizing maps
were pointed out by Mulier and Cherkassky [7]. Recently, Delicado [8] proposed yet another
definition based on a property of the first principal components of multivariate normal
distributions.
There remains an unsatisfactory aspect of the definition of principal curves in the original
HS paper as well as in subsequent works. Although principal curves have been defined to
be nonparametric, their existence for a given distribution or probability density is an open
question, except for very special cases such as elliptical distributions. This also makes it
very difficult to theoretically analyze any estimation scheme for principal curves.
In this paper we propose a new definition of principal curves to resolve this problem.
In the new definition, a principal curve is a continuous curve of a given length L which
minimizes the expected squared distance between X and the curve. In Section 2 (Lemma 1)
we prove that for any X with finite second moments there always exists a principal curve
in the new sense. We also discuss connections between the newly defined principal curves
and optimal vector quantizers. Then we propose a theoretical learning scheme in which the
model classes are polygonal lines with k-segments and with a given length, and the algorithm
chooses a curve from this class which minimizes the average squared distance over n training
points. In Theorem 1 we prove that with k suitably chosen as a function of n, the expected
squared distance of the curve trained on n data points converges to the expected squared
distance of the principal curve at a rate O(n 1=3 ) as n !1.
Two main features distinguish this learning scheme from the HS algorithm. First, the
polygonal line estimate of the principal curve is determined via minimizing a data dependent
criterion directly related to the definition of principal curves. This facilitates the theoretical
analysis of the performance. Second, the complexity of the resulting polygonal line is
determined by the number of segments k, which is typically much less than n for the optimal
choice of k.¹ This agrees with our mental image that principal curves should provide a concise summary of the data. On the other hand, for n data points the HS algorithm with scatterplot smoothing produces polygonal lines with n segments.
Though amenable to analysis, our theoretical algorithm is computationally burdensome
for implementation. In Section 3 we develop a suboptimal algorithm for learning principal
curves. The practical algorithm produces polygonal line approximations to the principal
curve just as the theoretical method does, but global optimization is replaced by a less
complex iterative descent method. In Section 4 we give simulation results and compare our
algorithm with previous work. In general, on examples considered by HS, the performance
of the new algorithm is comparable with the HS algorithm, while it proves to be more robust
to changes in the data generating model.
¹ We note here that the choice of k can be made automatic in principle by using the method of structural
risk minimization [9].
Learning Principal Curves with a Length Constraint
A curve in d-dimensional Euclidean space is a continuous function f : I → R^d, where I is a closed interval of the real line. Let the expected squared distance between X and f be defined by
Δ(f) = E[ inf_t ‖X − f(t)‖² ] = E‖X − f(t_f(X))‖²,
where the projection index t_f(x) is given in (1). Let f be a smooth (infinitely differentiable) curve, and for λ ∈ R consider the perturbation f + λg of f by a smooth curve g such that sup_t ‖g(t)‖ ≤ 1 and sup_t ‖g′(t)‖ ≤ 1. HS proved that f is a principal curve if and only if f is a critical point of the distance function in the sense that for all such g,
∂Δ(f + λg)/∂λ |_{λ=0} = 0.
It is not hard to see that an analogous result holds for principal component lines if the
perturbation g is a straight line. In this sense the HS principal curve definition is a natural
generalization of principal components. Also, it is easy to check that principal components
are in fact principal curves if the distribution of X is elliptical.
An unfortunate property of the HS definition is that in general it is not known if principal curves exist for a given source density. To resolve this problem we go back to the defining property of the first principal component. A straight line s(t) is the first principal component if and only if
E[ min_t ‖X − s(t)‖² ] ≤ E[ min_t ‖X − s̄(t)‖² ]
for any other straight line s̄(t). We wish to generalize this property of the first principal
component and define principal curves so that they minimize the expected squared distance
over a class of curves rather than only being critical points of the distance function. To do
this it is necessary to constrain the length² of the curve, since otherwise for any X with a density and any ε > 0 there exists a smooth curve f such that Δ(f) ≤ ε, and thus a
minimizing f has infinite length. On the other hand, if the distribution of X is concentrated
on a polygonal line and is uniform there, the infimum of the distances \Delta(f ) is 0 over the
class of smooth curves, but no smooth curve can achieve this infimum. For this reason, we
2 For the definition of length for nondifferentiable curves see Appendix A where some basic facts concerning
curves in R d have been collected from [10].
relax the requirement that f should be differentiable but instead we constrain the length of
f . Note that by the definition of curves, f is still continuous. We give the following new
definition of principal curves.
Definition. A curve f* is called a principal curve of length L for X if f* minimizes Δ(f)
over all curves of length less than or equal to L.
A useful advantage of the new definition is that principal curves of length L always exist
if X has finite second moments, as the next result shows.
Lemma 1. Assume that E‖X‖² < ∞. Then for any L > 0 there exists a curve f* with l(f*) ≤ L such that
Δ(f*) = inf{ Δ(f) : l(f) ≤ L }.
The proof of the lemma is given in Appendix A.
Note that we have dropped the requirement of the HS definition that principal curves
be non-intersecting. In fact, Lemma 1 would not hold for non-intersecting curves of length
L without further restricting the distribution of X, since there are distributions for which
the minimum of \Delta(f ) is achieved only by an intersecting curve even though non-intersecting
curves can arbitrarily approach this minimum.
Remark: Connection with vector quantization Our new definition of principal curves
has been inspired by the notion of an optimal vector quantizer. The points y₁, …, y_k ∈ R^d are called the codepoints of an optimal k-point vector quantizer if
E[ min_{1≤i≤k} ‖X − y_i‖² ] ≤ E[ min_{1≤i≤k} ‖X − y′_i‖² ]
for any other collection of k points y′₁, …, y′_k. In other words, the points y₁, …, y_k give
the best k-point representation of X in the mean squared sense. Optimal vector quantizers
are of great interest in lossy data compression, speech and image coding [11], and clustering
[12]. There is a strong connection between the definition of optimal vector quantizers and our
definition of a principal curve. Both minimize the same expected squared distance criterion,
while the vector quantizer is constrained to have at most k points, and we constrain the
length of a principal curve. This connection is further illuminated by a recent work of
Tarpey et al. [13] who define k points y₁, …, y_k to be self consistent if y_i = E[X | X ∈ V_i], where V₁, …, V_k are the Voronoi regions associated with y₁, …, y_k. Thus our principal curves correspond to optimal vector quantizers
("principal points" by the terminology of [13]) while the HS principal curves correspond to
self consistent points.
While principal curves of a given length always exist, it appears difficult to demonstrate
concrete examples, unless the distribution of X is discrete or it is concentrated on a curve.
The same problem occurs in the theory of optimal vector quantizers, where except for the
scalar case the structure of optimal quantizers is unknown for even the most common
multivariate densities (e.g., see [11]).
Suppose now that X₁, …, X_n, n independent copies of X, are given. These are called the
training data and they are assumed to be independent of X. The goal is to use the training
data to construct a curve of length at most L whose expected squared loss is close to that
of a principal curve for X.
Our method is based on a common model in statistical learning theory (e.g., see [9]).
We consider classes S₁, S₂, … of curves of increasing complexity. Given n data points
drawn independently from the distribution of X, we choose a curve as the estimator of the
principal curve from the kth model class S k by minimizing the empirical error. By choosing
the complexity of the model class appropriately as the size of the training data grows, the
chosen curve represents the principal curve with increasing accuracy.
We assume that the distribution of X is concentrated on a closed and bounded convex
set K ⊂ R^d. A basic property of convex sets in R^d shows that there exists a principal curve
of length L inside K (see Lemma 2 in Appendix A), and so we will only consider curves in
K.
Let S denote the family of curves taking values in K and having length not greater than
L. For k ≥ 1 let S_k be the set of polygonal curves (broken lines) in K which have k segments and whose lengths do not exceed L. Note that S_k ⊂ S for all k. Let
Δ(x, f) = min_t ‖x − f(t)‖²    (3)
denote the squared distance between a point x ∈ R^d and the curve f. For any f ∈ S the empirical squared error of f on the training data is the sample average
Δ_n(f) = (1/n) Σ_{i=1}^n Δ(X_i, f),
where we have suppressed in the notation the dependence of Δ_n(f) on the training data. Let
our theoretical algorithm choose an f_{k,n} ∈ S_k which minimizes the empirical error, i.e.,
f_{k,n} = arg min_{f ∈ S_k} Δ_n(f).    (5)
We measure the efficiency of f_{k,n} in estimating f* by the difference J(f_{k,n}) between the expected squared loss of f_{k,n} and the optimal expected squared loss achieved by f*, i.e., we let
J(f_{k,n}) = Δ(f_{k,n}) − Δ(f*).
Our main result in this section proves that as the number of data points n tends to infinity, and k is chosen to be proportional to n^{1/3}, then J(f_{k,n}) tends to zero at a rate J(f_{k,n}) = O(n^{-1/3}).
Theorem 1 Assume that P{X ∈ K} = 1 for some bounded and closed convex set K, let n be the number of training points, and let k be chosen to be proportional to n^{1/3}. Then the expected squared loss of the empirically optimal polygonal line with k segments and length at most L converges, as n → ∞, to the squared loss of the principal curve of length L at a rate
Δ(f_{k,n}) − Δ(f*) = O(n^{-1/3}).
The proof of the theorem is given in Appendix B. To establish the result we use techniques
from statistical learning theory (e.g., see [14]). First, the approximating capability of the
class of curves S k is considered, and then the estimation (generalization) error is bounded via
covering the class of curves S k with ffl accuracy (in the squared distance sense) by a discrete
set of curves. When these two bounds are combined one obtains a bound of the form
Δ(f_{k,n}) − Δ(f*) ≤ C(L, D, d) ( 1/k + √(k/n) ),
where the term C(L, D, d) depends only on the dimension d, the length L, and the diameter D of the support of X, but is independent of k and n. The two error terms are balanced by choosing k to be proportional to n^{1/3}, which gives the convergence rate of Theorem 1.
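The balancing computation itself is elementary; under the displayed bound, with constants c₁ and c₂,

\[
\frac{d}{dk}\left(\frac{c_1}{k} + c_2\sqrt{\frac{k}{n}}\right)
  = -\frac{c_1}{k^2} + \frac{c_2}{2\sqrt{kn}} = 0
  \;\iff\; k^{3/2} = \frac{2c_1}{c_2}\sqrt{n}
  \;\iff\; k \propto n^{1/3},
\]

and substituting k ∝ n^{1/3} back into the bound makes both terms of order n^{-1/3}.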
Note that although the constant hidden in the O notation depends on the dimension d,
the exponent of n is dimension-free. This is not surprising in view of the fact that the class of
curves S is equivalent in a certain sense to the class of Lipschitz functions f : [0, 1] → R^d satisfying ‖f(s) − f(t)‖ ≤ L|s − t| (see Appendix A). It is known that the ε-entropy, defined by the logarithm of the ε covering number, is roughly proportional to 1/ε for such function classes [15]. Using this result, the convergence rate O(n^{-1/3}) can be obtained by considering ε-covers
of S directly (without using the model classes S k ) and picking the empirically optimal curve
in this cover. However, the use of the classes S k has the advantage that they are directly
related to the practical implementation of the algorithm given in the next section.
3 A Polygonal Line Algorithm
Given a set of data points X_n = {X₁, …, X_n} ⊂ R^d, the task of finding a polygonal line with k segments and length L which minimizes (1/n) Σ_{i=1}^n Δ(X_i, f) is computationally difficult. We
propose a suboptimal method with reasonable complexity. The basic idea is to start with
a straight line segment f_{1,n} (k = 1), and in each iteration of the algorithm we increase the
number of segments k by adding a new vertex to the polygonal line f k;n produced by the
previous iteration. After adding a new vertex, the positions of all vertices are updated in an
inner loop.
Figure 2: The curves f_{k,n} produced by the polygonal line algorithm for a sample of n data points. The data was generated by adding independent Gaussian errors to both coordinates of a point chosen randomly on a half circle. (a) f_{1,n}, (b) f_{2,n}, (c) f_{4,n}, (d) f_{11,n} (the output of the algorithm).
The inner loop consists of a projection step and an optimization step. In the projection
step the data points are partitioned into "nearest neighbor regions" according to which
segment or vertex they project. In the optimization step the new position of a vertex v i is
determined by minimizing an average squared distance criterion penalized by a measure of
the local curvature, while all other vertices are kept fixed. These two steps are iterated so
that the optimization step is applied to each vertex v_i, i = 1, …, k+1, in a cyclic fashion
(so that after v k+1 , the procedure starts again with v 1 ), until convergence is achieved and
f k;n is produced. Then a new vertex is added.
The algorithm stops when k exceeds a threshold c(n, Δ). This stopping criterion is based on a heuristic model complexity measure, determined by the number of segments k, the number of data points n, and the average squared distance Δ_n(f_{k,n}).
The flow-chart of the algorithm is given in Figure 3. The evolution of the curve produced
by the algorithm is illustrated in Figure 2. We note here that the objective function to be
minimized in the vertex optimization procedure is based partly on heuristic considerations.
As explained in Section 3.3, the algorithm in this step searches for a (local) minimum of the average squared distance penalized by the local curvature. The heuristic lies in the data-dependent form of the penalty factor. Similarly to the HS algorithm, we have no formal
proof that the practical algorithm will converge, but in practice, after extensive testing, it is
observed to converge.
Figure 3: The flow chart of the polygonal line algorithm (initialization, projection, vertex optimization, convergence test, and addition of a new vertex).
3.1 The Initialization Step
To obtain f 1;n , we take the shortest segment of the first principal component line which
contains all of the projected data points. To keep the computational complexity low, we
compute the first principal component of a constant number of points randomly chosen from
the n data points. This choice suffices for our purposes since the algorithm needs only a
reasonable approximation of the first principal component.
3.2 The Projection Step
Let f denote a polygonal line with vertices v₁, …, v_{k+1} and closed line segments s₁, …, s_k, such that s_i connects vertices v_i and v_{i+1}. In this step the data set X_n is partitioned into (at most) 2k + 1 disjoint sets, the nearest neighbor regions of the vertices and segments of f, in the following manner. For any x ∈ R^d let Δ(x, s_i) be the squared distance from x to s_i (see definition (3)), and let Δ(x, v_i) = ‖x − v_i‖². Then we let
V_i = { x ∈ X_n : Δ(x, v_i) = Δ(x, f), Δ(x, v_i) < Δ(x, v_j), j = 1, …, i − 1 }.
Upon setting V = V₁ ∪ ⋯ ∪ V_{k+1}, the sets S_i are defined by
S_i = { x ∈ X_n : x ∉ V, Δ(x, s_i) = Δ(x, f), Δ(x, s_i) < Δ(x, s_j), j = 1, …, i − 1 }.
The resulting partition is illustrated in Figure 4.
Figure 4: The nearest neighbor partition of R² induced by the vertices and segments of f.
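The projection step reduces to point-to-segment distances; a compact Python sketch, with our own bookkeeping and ignoring the tie-breaking by smallest index, is:

import numpy as np

def seg_dist2(x, a, b):
    # Squared distance from x to the closed segment [a, b].
    ab = b - a
    t = np.clip(np.dot(x - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.sum((x - (a + t * ab)) ** 2))

def partition(X, V):
    labels = []
    for x in X:
        dv = [float(np.sum((x - v) ** 2)) for v in V]
        ds = [seg_dist2(x, V[i], V[i + 1]) for i in range(len(V) - 1)]
        iv, js = int(np.argmin(dv)), int(np.argmin(ds))
        labels.append(('V', iv) if dv[iv] <= ds[js] else ('S', js))
    return labels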
3.3 The Vertex Optimization Step
In this step the new position of a vertex v_i is determined. In the theoretical algorithm the average squared distance Δ_n(f) is minimized subject to the constraint that f is a polygonal line with k segments and length not exceeding L. One could use a Lagrangian formulation and attempt to find a new position for v_i (while all other vertices are fixed) such that the penalized squared error Δ_n(f) + λ l(f) is minimum. However, we have observed that this approach is very sensitive to the choice of λ. On the other hand, most principal
curve applications require a smooth curve solution. To avoid over-fitting, HS used scatterplot
or spline smoothing. We chose to penalize the local curvature to obtain smoother curves.
Due to the fact that only one vertex is moved at a time, penalizing the curvature will also
implicitly penalize the length of the curve. After considering several possibilities, we found
that the following measures of local curvature work especially well. At inner vertices v_i we penalize the sum of the cosines of the three angles at vertices v_{i−1}, v_i, and v_{i+1}. At the endpoints and at their immediate neighbors, the penalty on a nonexistent angle is replaced by the squared length of the first (or last) segment. Formally, let γ_i denote the angle at vertex v_i, let π(v_i) = 1 + cos γ_i, and let μ(v_i) denote the squared length of the segment at an endpoint. Then the penalty P(v_i) at vertex v_i is the sum of these terms over v_{i−1}, v_i, and v_{i+1}.
The local measure of the average squared distance is calculated from the data points which project to v_i or to the line segment(s) starting at v_i (see the Projection Step). Accordingly, the local average squared distance is defined as a function of v_i by averaging the squared distances of these points. We use a gradient (steepest descent) method to minimize the resulting penalized local average squared distance. This part of the algorithm is modular, i.e., the simple procedure we are using can be substituted with a more sophisticated nonlinear programming procedure at the expense of increased computational complexity.
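A minimal version of the penalized local criterion, with the endpoint cases and the exact weighting of the angle terms omitted (the paper's precise formula is not reproduced here), could read:

import numpy as np

def one_plus_cos(vm, v, vp):
    # 1 + cos of the angle at v; zero exactly when vm, v, vp are collinear.
    a, b = vm - v, vp - v
    return 1.0 + float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def local_objective(v, vm, vp, pts, lam):
    # pts: the data points assigned to v in the projection step.
    fit = float(np.mean(np.sum((pts - v) ** 2, axis=1))) if len(pts) else 0.0
    return fit + lam * one_plus_cos(vm, v, vp)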
One important issue is the amount of smoothing required for a given data set. In the HS
algorithm one needs to determine the penalty coefficient of the spline smoother, or the span
of the scatterplot smoother. In our algorithm, the corresponding parameter is the curvature
penalty factor λ_p. If some a priori knowledge about the distribution is available, one can
use it to determine the smoothing parameter. However in the absence of such knowledge,
the coefficient should be data-dependent. Intuitively, λ_p should increase with the number of segments and the size of the average squared error, and it should decrease with the data size. Based on heuristic considerations and after carrying out practical experiments, we set λ_p to grow with k and with Δ_n(f_{k,n}) and to decay with n; the proportionality constant λ′_p is a parameter of the algorithm.
3.4 Adding a New Vertex
We start with the optimized f_{k,n} and choose the segment that has the largest number of data points projecting to it. The midpoint of this segment is selected as the new vertex. Formally, let I = arg max_i |S_i|. Then the new vertex is v_new = (v_I + v_{I+1})/2.
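In code this amounts to a midpoint split of the most heavily loaded segment (again a sketch with illustrative names):

import numpy as np

def add_vertex(V, seg_counts):
    i = int(np.argmax(seg_counts))            # segment with most projections
    return np.insert(V, i + 1, 0.5 * (V[i] + V[i + 1]), axis=0)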
3.5 Stopping Condition
According to the theoretical results of Section 2, the number of segments k should be proportional to n^{1/3} to achieve the O(n^{-1/3}) convergence rate for the expected squared distance. Though the theoretical bounds are not tight enough to determine the optimal number of segments for a given data size, we found that k ∼ n^{1/3} also works in practice. To achieve robustness we need to make k sensitive to the average squared distance. The stopping condition blends these two considerations: the algorithm stops when k exceeds the threshold c(n, Δ_n(f_{k,n})).
3.6 Computational Complexity
The complexity of the inner loop is dominated by the complexity of the projection step, which
is O(nk). Increasing the number of segments by one at a time (as described in Section 3.4),
the complexity of the algorithm to obtain f_{k,n} is O(nk²). Using the stopping condition of Section 3.5, the computational complexity of the algorithm becomes O(n^{5/3}). This is slightly better than the O(n²) complexity of the HS algorithm.
The complexity can be dramatically decreased in certain situations. One possibility is to
add more than one vertex at a time. For example, if instead of adding only one vertex, a new
vertex is placed at the midpoint of every segment, then we can reduce the computational
complexity for producing f k;n to O(nk log k). One can also set k to be a constant if the
data size is large, since increasing k beyond a certain threshold brings only diminishing
returns. These simplifications work well in certain situations, but the original algorithm is
more robust.
4 Experimental Results
We have extensively tested our algorithm on two-dimensional data sets. In most experiments
the data was generated by a commonly used (see, e.g., [2], [6], [7]) additive model X = Y + e,
where Y is uniformly distributed on a smooth planar curve (hereafter called the generating
curve) and e is bivariate additive noise which is independent of Y.
Since the "true" principal curve is not known (note that the generating curve in the model
X = Y + e is in general not a principal curve either in the HS sense or in our definition),
it is hard to give an objective measure of performance. For this reason, in what follows, the
performance is judged subjectively, mainly on the basis of how closely the resulting curve
follows the shape of the generating curve.
In general, in simulation examples considered by HS the performance of the new algorithm
is comparable with the HS algorithm. Due to the data-dependence of the curvature penalty
factor and the stopping condition, our algorithm turns out to be more robust to alterations
in the data generating model, as well as to changes in the parameters of the particular model.
We use varying generating shapes, noise parameters, and data sizes to demonstrate the
robustness of the polygonal line algorithm. All plots show the generating curve (Generator
Curve), the curve produced by our polygonal line algorithm (Principal Curve), and the curve
produced by the HS algorithm with spline smoothing (HS Principal Curve), which we have
found to perform better than the HS algorithm using scatterplot smoothing. For closed
generating curves we also include the curve produced by the Banfield and Raftery (BR)
algorithm [5], which extends the HS algorithm to closed curves (BR Principal Curve). The
two coefficients of the polygonal line algorithm are set in all experiments to the same constant values. All plots have been normalized to fit in a 2 × 2 square. The
parameters given below refer to values before this normalization.
In Figure 5 the generating curve is a circle, and e is zero-mean bivariate uncorrelated Gaussian noise with equal variances E(e₁²) = E(e₂²). The performance of
the three algorithms (HS, BR, and the polygonal line algorithm) is comparable, although
the HS algorithm exhibits more bias than the other two. Note that the BR algorithm [5] has
been tailored to fit closed curves and to reduce the model bias. In Figure 6, only half of the
circle is used as a generating curve and the other parameters remain the same. Here, too,
both the HS and our algorithm behave similarly.
When we depart from these usual settings the polygonal line algorithm exhibits better
behavior than the HS algorithm. In Figure 7(a) the data set of Figure 6 was linearly transformed
using the matrix ( 0:6 0:6
\Gamma1:0 1:2 ). In Figure 7(b) the transformation
1:0 \Gamma0:2
was used.
The original data set was generated by an S-shaped generating curve, consisting of two half
circles of unit radii, to which the same Gaussian noise was added as in Figure 6. In both
cases the polygonal line algorithm produces curves that fit the generator curve more closely.
This is especially noticeable in Figure 7(a) where the HS principal curve fails to follow the
shape of the distorted half circle.
There are two situations when we expect our algorithm to perform particularly well. If the
distribution is concentrated on a curve, then according to both the HS and our definitions the
principal curve is the generating curve itself. Thus, if the noise variance is small, we expect
both algorithms to very closely approximate the generating curve. The data in Figure 8(a)
was generated using the same additive Gaussian model as in Figure 5, but the noise variance
was substantially reduced, the variances E(e_i²), i = 1, 2, being much smaller than in Figure 5. In this case we found that the polygonal line algorithm outperformed both the HS and the BR algorithms.
The second case is when the sample size is large. Although the generating curve is not
necessarily the principal curve of the distribution, it is natural to expect the algorithm to
well approximate the generating curve as the sample size grows. Such a case is shown in
Figure 8(b), where a large number of data points were generated (but only a small subset of these
was actually plotted). Here the polygonal line algorithm approximates the generating curve
with much better accuracy than the HS algorithm.
5 Conclusion
A new definition of principal curves has been offered. The new definition has significant
theoretical appeal; the existence of principal curves under this definition can be proved
under very general conditions, and a learning method for constructing principal curves for
finite data sets lends itself to theoretical analysis.
Inspired by the new definition and the theoretical learning scheme, we have introduced
a new practical polygonal line algorithm for designing principal curves. Lacking theoretical
results concerning both the HS and our polygonal line algorithm, we compared the two
methods through simulations. We have found that in general our algorithm has performance comparable with that of the original HS algorithm, and it exhibits better, more robust behavior when the data generating model is varied. It should be mentioned that
these findings cannot be called entirely conclusive due mainly to the absence of an objective
performance measure. In practical applications, each method can have different advantages.
In this respect, we believe that the new principal curve algorithm may prove useful where a
compact and accurate description of a pattern or an image is required, e.g., in skeletonization
of handwritten characters or in feature extraction. These are issues for future work.
Appendix A
Curves in R^d
Let f : [a, b] → R^d be a continuous mapping (curve). The length of f over an interval [α, β] ⊆ [a, b], denoted by l(f; α, β), is defined by
l(f; α, β) = sup Σ_{i=1}^N ‖f(t_i) − f(t_{i−1})‖,    (A.1)
where the supremum is taken over all finite partitions of [α, β] with arbitrary subdivision points α = t₀ ≤ t₁ ≤ ⋯ ≤ t_N = β, N ≥ 1. The length of f over its entire domain [a, b] is denoted by l(f). If l(f) < ∞, then f is said to be rectifiable. It is well known that f is rectifiable if and only if each coordinate function f_j : [a, b] → R is of bounded variation.
Two curves f and g are said to be equivalent if there exist two nondecreasing continuous real functions φ₁ and φ₂ such that f(φ₁(t)) = g(φ₂(t)) for all t. In this case we write f ∼ g, and it is easy to see that ∼ is an equivalence relation. If f ∼ g, then l(f) = l(g). A curve g over [a, b] is said to be parametrized by its arc length if l(g; a, t) = t − a for any a ≤ t ≤ b. Let f be a curve over [a, b] with length L. It is not hard to see that there exists a unique arc length parametrized curve g over [0, L] such that f ∼ g.
Let f be any curve with length L′ ≤ L, and consider the equivalent arc length parametrized curve f̃ with parameter interval [0, L′]. By definition (A.1), for all s₁, s₂ ∈ [0, L′] we have l(f̃; s₁, s₂) ≥ ‖f̃(s₁) − f̃(s₂)‖, and f̃ satisfies the following Lipschitz condition: for all s₁, s₂ ∈ [0, L′],
‖f̃(s₁) − f̃(s₂)‖ ≤ |s₁ − s₂|.    (A.2)
On the other hand, note that if g̃ is a curve over [0, L] which satisfies the Lipschitz condition (A.2), then its length is at most L.
Let f be a curve over [a, b] and denote the squared Euclidean distance from any x ∈ R^d to f by
Δ(x, f) = inf_{a≤t≤b} ‖x − f(t)‖².
Note that if l(f) < ∞, then by the continuity of f, its graph
G_f = { f(t) : a ≤ t ≤ b }
is a compact subset of R^d, and the infimum above is achieved for some t. Also, since G_f = G_g if f ∼ g, we also have that Δ(x, f) = Δ(x, g) for all g ∼ f.
Proof of Lemma 1. Define Δ* = inf{ Δ(f) : l(f) ≤ L }.
First we show that the above infimum does not change if we add the restriction that all f lie inside the closed sphere S(r) = { x : ‖x‖ ≤ r } of large enough radius r centered at the origin. Indeed, without excluding nontrivial cases, we can assume that Δ* < E‖X‖². Denote the distribution of X by μ and choose r > 3L large enough such that
∫_{S(r/3)} ‖x‖² μ(dx) ≥ Δ* + ε    (A.3)
for some ε > 0. If f is such that G_f is not entirely contained in S(r), then for all x ∈ S(r/3) we have Δ(x, f) ≥ ‖x‖², since the diameter of G_f is at most L. Then (A.3) implies that
Δ(f) ≥ ∫_{S(r/3)} Δ(x, f) μ(dx) ≥ Δ* + ε,
and thus
Δ* = inf{ Δ(f) : l(f) ≤ L, G_f ⊆ S(r) }.    (A.4)
In view of (A.4) there exists a sequence of curves {f_n} such that l(f_n) ≤ L, G_{f_n} ⊆ S(r) for all n, and Δ(f_n) → Δ*. By the discussion preceding (A.2), we can assume without loss of generality that all f_n are defined over [0, 1] and satisfy ‖f_n(t₁) − f_n(t₂)‖ ≤ L|t₁ − t₂|.
Consider the set of all curves C over [0, 1] such that f ∈ C iff ‖f(t₁) − f(t₂)‖ ≤ L|t₁ − t₂| and G_f ⊆ S(r). It is easy to see that C is a closed set under the uniform metric d(f, g) = sup_{0≤t≤1} ‖f(t) − g(t)‖. Also, C is an equicontinuous family of functions and sup_t ‖f(t)‖ is uniformly bounded over C. Thus C is a compact metric space by the Arzelà–Ascoli theorem (see, e.g., [16]). Since f_n ∈ C for all n, it follows that there exists a subsequence f_{n_k} converging uniformly to an f* ∈ C.
To simplify the notation let us rename {f_{n_k}} as {f_n}. Fix x ∈ R^d, assume Δ(x, f_n) ≥ Δ(x, f*), and let t_x be such that Δ(x, f*) = ‖x − f*(t_x)‖². Then by the triangle inequality,
0 ≤ Δ(x, f_n) − Δ(x, f*) ≤ ‖x − f_n(t_x)‖² − ‖x − f*(t_x)‖² ≤ d(f_n, f*) ( 2‖x‖ + ‖f_n(t_x)‖ + ‖f*(t_x)‖ ).
By symmetry, a similar inequality holds if Δ(x, f_n) < Δ(x, f*). Since E‖X‖² is finite and all the curves lie in S(r), there exists A > 0 such that
|Δ(f_n) − Δ(f*)| ≤ A d(f_n, f*) → 0,
and therefore Δ(f*) = Δ*. Since the Lipschitz condition on f* guarantees that l(f*) ≤ L, the proof is complete.
Lemma 2. Assume that P{X ∈ K} = 1 for a closed and convex set K, and let f be a curve with l(f) ≤ L. Then there exists a curve f̂ such that G_{f̂} ⊆ K, l(f̂) ≤ L, and Δ(f̂) ≤ Δ(f).
Proof. For each t in the domain of f, let f̂(t) be the unique point in K such that ‖f̂(t) − f(t)‖ = min_{y∈K} ‖y − f(t)‖. It is well known that f̂(t) satisfies
⟨y − f̂(t), f(t) − f̂(t)⟩ ≤ 0 for all y ∈ K,    (A.6)
where ⟨·,·⟩ denotes the usual inner product in R^d (see, e.g., [17]). Then for all t₁, t₂,
‖f̂(t₁) − f̂(t₂)‖ ≤ ‖f(t₁) − f(t₂)‖,
where the inequality follows from (A.6); in particular f̂ is continuous (it is a curve) and l(f̂) ≤ l(f) ≤ L. A similar inequality shows that for all t and x ∈ K,
‖x − f̂(t)‖ ≤ ‖x − f(t)‖,
so that Δ(f̂) ≤ Δ(f).
Appendix B
Proof of Theorem 1. Let f*_k denote the curve in S_k minimizing the squared loss, i.e.,
f*_k = arg min_{f ∈ S_k} Δ(f).
The existence of a minimizing f*_k can easily be shown using a simpler version of the proof of Lemma 1. Then J(f_{k,n}) can be decomposed as
J(f_{k,n}) = ( Δ(f_{k,n}) − Δ(f*_k) ) + ( Δ(f*_k) − Δ(f*) ),
where, using standard terminology, Δ(f_{k,n}) − Δ(f*_k) is called the estimation error and Δ(f*_k) − Δ(f*) is called the approximation error. We consider these terms separately first, and then choose k as a function of the training data size n to balance the obtained upper bounds in an asymptotically optimal way.
Approximation Error
For any two curves f and g of finite length define their (nonsymmetric) distance by
ρ(f, g) = max_t min_s ‖g(s) − f(t)‖.
Note that ρ(f̃, g̃) = ρ(f, g) if f̃ ∼ f and g̃ ∼ g, i.e., ρ(f, g) is independent of the particular choice of the parametrization within equivalence classes. Next we observe that if the diameter of K is D, and G_f, G_g ⊆ K, then for all x ∈ K,
Δ(x, g) − Δ(x, f) ≤ 2D ρ(f, g),    (B.1)
and therefore
Δ(g) − Δ(f) ≤ 2D ρ(f, g).    (B.2)
To prove (B.1), let x ∈ K and choose t₀ and s₀ such that Δ(x, f) = ‖x − f(t₀)‖² and ‖g(s₀) − f(t₀)‖ ≤ ρ(f, g). Then
Δ(x, g) − Δ(x, f) ≤ ‖x − g(s₀)‖² − ‖x − f(t₀)‖² ≤ ( ‖x − g(s₀)‖ + ‖x − f(t₀)‖ ) ‖g(s₀) − f(t₀)‖ ≤ 2D ρ(f, g).
Now let f ∈ S be an arbitrary arc length parametrized curve over [0, L′], where L′ ≤ L. Define g as a polygonal curve with vertices f(0), f(L′/k), …, f(L′); then for every t there is some s with ‖g(s) − f(t)‖ ≤ L′/(2k), so that
ρ(f, g) ≤ L/(2k).
Note that l(g) ≤ L′ by construction, and thus g ∈ S_k. Thus for every f ∈ S there exists a g ∈ S_k such that ρ(f, g) ≤ L/(2k), and by (B.2) we conclude that the approximation error is upper bounded as
Δ(f*_k) − Δ(f*) ≤ D L / k.    (B.3)
Estimation Error
For each ε > 0 and k ≥ 1 let S_{k,ε} be a finite set of curves in K which form an ε-cover of S_k in the following sense: for any f ∈ S_k there is an f′ ∈ S_{k,ε} which satisfies
sup_{x∈K} |Δ(x, f) − Δ(x, f′)| ≤ ε.
The explicit construction of S_{k,ε} is given in Appendix C. Since f_{k,n} ∈ S_k (see (5)), there exists an f′_{k,n} ∈ S_{k,ε} such that sup_{x∈K} |Δ(x, f_{k,n}) − Δ(x, f′_{k,n})| ≤ ε. We introduce the compact notation X^n = (X₁, …, X_n) for the training data. Thus we can write
Δ(f_{k,n}) − Δ(f*_k) ≤ E[Δ(X, f′_{k,n}) | X^n] − Δ_n(f′_{k,n}) + 2ε,
where the approximating property of f′_{k,n} and the fact that the distribution of X is concentrated on K give (B.5), and (B.6) holds because f_{k,n} minimizes Δ_n(f) over all f ∈ S_k. For each fixed curve, E[Δ(X, f)] is an ordinary expectation of the type E[Δ(X, f)], f ∈ S_{k,ε}. Thus for any t > 2ε the union bound implies
P{ Δ(f_{k,n}) − Δ(f*_k) > t } ≤ |S_{k,ε}| max_{f ∈ S_{k,ε}} P{ E[Δ(X, f)] − Δ_n(f) > t − 2ε },    (B.8)
where |S_{k,ε}| denotes the cardinality of S_{k,ε}.
Recall now Hoeffding's inequality [18], which states that if Y₁, …, Y_n are independent and identically distributed real random variables such that 0 ≤ Y_i ≤ A with probability one, then for all u > 0,
P{ E[Y₁] − (1/n) Σ_{i=1}^n Y_i > u } ≤ e^{−2nu²/A²}.
Since the diameter of K is D, we have ‖x − y‖ ≤ D for all x, y ∈ K. Thus 0 ≤ Δ(X, f) ≤ D² with probability one, and by Hoeffding's inequality, for all t > 2ε we have
P{ Δ(f_{k,n}) − Δ(f*_k) > t } ≤ |S_{k,ε}| e^{−2n(t−2ε)²/D⁴}.
Using the identity E[Y] = ∫₀^∞ P{Y > t} dt for a nonnegative random variable Y, we can write for any u > 0,
E[ Δ(f_{k,n}) − Δ(f*_k) ] ≤ u + 2ε + ∫_{u+2ε}^∞ |S_{k,ε}| e^{−2n(t−2ε)²/D⁴} dt    (B.10)
≤ 2ε + D² √( (log|S_{k,ε}| + 1) / (2n) ),    (B.11)
where (B.10) follows from the union bound (B.8), and (B.11) follows by setting u = D² √( log|S_{k,ε}| / (2n) ), where log denotes natural logarithm. The following lemma, which is proved in Appendix C, demonstrates the existence of a suitable covering set S_{k,ε}.
Lemma 3 For any ε > 0 there exists a finite collection of curves S_{k,ε} in K such that for every f ∈ S_k,
sup_{x∈K} |Δ(x, f) − Δ(x, f′)| ≤ ε
for some f′ ∈ S_{k,ε}, and such that |S_{k,ε}| admits an explicit upper bound, derived in the proof, which is exponential in k and polynomial in LD√d/ε, where V_d is the volume of the d-dimensional unit sphere and D is the diameter of K.
It is not hard to see that setting ε = 1/k gives the upper bound
log |S_{k,1/k}| ≤ k C(L, D, d),
where C(L, D, d) does not depend on k. Combining this with (B.11) and the approximation bound given by (B.3) results in
Δ(f_{k,n}) − Δ(f*) ≤ 2/k + D² √( k C(L, D, d) / (2n) ) + D L / k.
The rate at which Δ(f_{k,n}) approaches Δ(f*) is optimized by setting the number of segments k to be proportional to n^{1/3}. With this choice J(f_{k,n}) = Δ(f_{k,n}) − Δ(f*) has the asymptotic convergence rate
J(f_{k,n}) = O(n^{−1/3}),
and the proof of Theorem 1 is complete.
Appendix C
Proof of Lemma 3. Consider a rectangular grid with side length δ > 0 in R^d. With each point y of this grid associate its Voronoi region (a hypercube of side length δ), defined as the set of points which are closer to y than to any other point of the grid. Let K_δ denote the collection of points of this grid which fall in K, plus the projections onto K of those points of the grid whose Voronoi regions have a nonempty intersection with K. Then we clearly have
max_{x∈K} min_{y∈K_δ} ‖x − y‖ ≤ √d δ/2.    (C.1)
Set δ = ε/(D√d) and define S_{k,ε} to be the family of all polygonal curves f̄ having k segments whose vertices ȳ₀, …, ȳ_k all belong to K_δ and which satisfy the length constraint
Σ_{i=1}^k ‖ȳ_i − ȳ_{i−1}‖ ≤ L + k√d δ.    (C.2)
To see that S_{k,ε} has the desired covering property, let f ∈ S_k be arbitrary with vertices y₀, …, y_k, and let f̄ be the polygonal curve with vertices ȳ₀, …, ȳ_k, where ȳ_i is a nearest point of K_δ to y_i. Since Σ_{i} ‖y_i − y_{i−1}‖ ≤ L by the definition of S_k, the triangle inequality implies that f̄ satisfies (C.2) and thus f̄ ∈ S_{k,ε}. On the other hand, without loss of generality assume that the line segment connecting y_{i−1} and y_i and the line segment connecting ȳ_{i−1} and ȳ_i are both linearly parametrized over [0, 1]. Then for all t ∈ [0, 1],
‖ (y_{i−1} + t(y_i − y_{i−1})) − (ȳ_{i−1} + t(ȳ_i − ȳ_{i−1})) ‖ ≤ √d δ/2.
This shows that ρ(f, f̄) ≤ √d δ/2 and ρ(f̄, f) ≤ √d δ/2. Then it follows from (B.1) that S_{k,ε} is an ε-cover for S_k, since for all x ∈ K,
|Δ(x, f) − Δ(x, f̄)| ≤ 2D √d δ/2 = D √d δ = ε.
Let L̄_i denote the length of the ith segment of f̄ and let n_i = ⌈L̄_i/δ⌉,    (C.3)
where ⌈x⌉ denotes the least integer not less than x. Fix the sequence L̄ = (n₁, …, n_k) and define S_{k,ε}(L̄) as the set of all f̄ ∈ S_{k,ε} whose segment lengths generate this particular sequence. To bound |S_{k,ε}(L̄)|, note that the first vertex ȳ₀ of an f̄ ∈ S_{k,ε}(L̄) can be any of the points in K_δ, which contains as many points as there are Voronoi cells intersecting K. Since the diameter of K is D, there exists a sphere of radius D + √d δ which contains these Voronoi cells. Thus the cardinality of K_δ can be upper bounded by
|K_δ| ≤ V_d ( D/δ + √d )^d,
where V_d is the volume of the unit sphere in R^d. Assume ȳ₀, …, ȳ_{i−1} have been chosen. Since ‖ȳ_i − ȳ_{i−1}‖ ≤ n_i δ, there are no more than V_d ( n_i + √d )^d possibilities for choosing ȳ_i. Therefore,
|S_{k,ε}(L̄)| ≤ V_d ( D/δ + √d )^d Π_{i=1}^k V_d ( n_i + √d )^d.
By (C.2) and the definition (C.3) of the n_i, we have Σ_{i=1}^k n_i ≤ L/δ + 2k. Therefore the arithmetic-geometric mean inequality implies that
Π_{i=1}^k ( n_i + √d )^d ≤ ( (1/k) Σ_{i=1}^k n_i + √d )^{dk} ≤ ( L/(kδ) + 2 + √d )^{dk},
and thus
|S_{k,ε}(L̄)| ≤ V_d^{k+1} ( D/δ + √d )^d ( L/(kδ) + 2 + √d )^{dk}.
On the other hand, by (C.3) we have Σ_{i=1}^k n_i ≤ L/δ + 2k, and therefore the number of distinct sequences L̄ is upper bounded by the number of ways of distributing at most L/δ + 2k units among k slots, which is at most 2^{L/δ + 3k}. Substituting δ = ε/(D√d) we obtain the bound of Lemma 3.
--R
"Principal curves and surfaces."
"Principal curves,"
"An algorithm for vector quantizer design,"
"Robust locally weighted regression and smoothing scatterplots,"
"Ice floe identification in satellite images using mathematical morphology and clustering about principal curves,"
"Principal curves revisited,"
"Self-organization as an iterative kernel smoothing pro- cess,"
"Another look at principal curves and surfaces."
The Nature of Statistical Learning Theory.
Introductory Real Analysis.
Clustering Algorithms.
"Principal points and self-consistent points of elliptical distributions,"
A Probabilistic Theory of Pattern Recognition.
"ffl-entropy and ffl-capacity of sets in function spaces,"
Real analysis and probability.
Optimization by vector space methods.
"Probability inequalities for sums of bounded random variables,"
--TR
--CTR
Balzs Kgl , Adam Krzyak, Piecewise Linear Skeletonization Using Principal Curves, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.1, p.59-74, January 2002
Peter Meinicke , Stefan Klanke , Roland Memisevic , Helge Ritter, Principal Surfaces from Unsupervised Kernel Regression, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.9, p.1379-1391, September 2005
B. Bhushan , J. A. Romagnoli, A strategy for feature extraction of high dimensional noisy data, Proceedings of the 25th IASTED international conference on Modeling, indentification, and control, p.441-445, February 06-08, 2006, Lanzarote, Spain
Zhiguo Cheng , Mang Chen , Yuncai Liu, A robust algorithm for image principal curve detection, Pattern Recognition Letters, v.25 n.11, p.1303-1313, August 2004
J. J. Verbeek , N. Vlassis , B. Krse, A k-segments algorithm for finding principal curves, Pattern Recognition Letters, v.23 n.8, p.1009-1017, June 2002
Jos Koetsier , Ying Han , Colin Fyfe, Twinned principal curves, Neural Networks, v.17 n.3, p.399-409, April 2004
B. S. Y. Lam , H. Yan, A curve tracing algorithm using level set based affine transform, Pattern Recognition Letters, v.28 n.2, p.181-196, January, 2007
Hujun Yin, Data visualisation and manifold mapping using the ViSOM, Neural Networks, v.15 n.8-9, p.1005-1016, October 2002
Jochen Einbeck , Gerhard Tutz , Ludger Evers, Local principal curves, Statistics and Computing, v.15 n.4, p.301-313, October 2005
Alexander J. Smola , Sebastian Mika , Bernhard Schlkopf , Robert C. Williamson, Regularized principal manifolds, The Journal of Machine Learning Research, 1, p.179-209, 9/1/2001
Kui-Yu Chang , J. Ghosh, A Unified Model for Probabilistic Principal Surfaces, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.1, p.22-41, January 2001 | curve fitting;feature extraction;learning systems;unsupervised learning;piecewise linear approximation;vector quantization |
333779 | Derivation of Numerical Methods Using Computer Algebra. | The use of computer algebra systems in a course on scientific computation is demonstrated. Various examples, such as the derivation of Newton's iteration formula, the secant method, Newton--Cotes and Gaussian integration formulas, as well as Runge--Kutta formulas, are presented. For the derivations, the computer algebra system Maple is used. | Introduction
At ETH Zürich we have redesigned our former courses on numerical analysis. We do not only run
numerical programs but also introduce the students to computer algebra and make heavy use of
computer algebra systems both in the lectures and the assignments.
Computer algebra may be used to generate numerical algorithms, to compute discretization errors,
to simplify proofs, etc., but also to run examples and to generate plots.
We claim that it is easier for students to follow a derivation which is carried out with the help of a
computer algebra system than by hand. Computer algebra systems take over the hard hand work
such as e.g. solving systems of equations. Students do not need to be concerned with all the details
(and all the small glitches) of a manual derivation and can understand and keep the overview over
the general steps of the derivation. A computer supported derivation is also more convincing than
a presentation of the bare results without any reasoning.
Moreover, using computer algebra systems, rather complex numerical formulas can be derived, far
more complex than what can be done in class by hand. E.g. all useful Newton-Cotes rules can be
computed without problems, in contrast to hand derivations, which usually end with Simpson's
rule.
We will prove these statements with some examples taken from our introductory courses in scientific
computing. We use Maple V Release 4, but the examples could also be reproduced e.g. with
Mathematica, MuPad or any other computer algebra system.
One of the first formulas that students learn is Newton's iteration to solve a nonlinear equation f(x) = 0. Given an approximation x_k for the root s, a better approximation x_{k+1} = F(x_k) can be obtained using the iteration function
F(x) = x − f(x)/f′(x).    (1)
This iteration converges quadratically to a single root s of f(x). This can be proven by computing the first derivative of F(x) at x = s, which is zero.
F := x -> x - f(x)/D(f)(x):
dF := normal(D(F)(x));
dF := f(x) D^(2)(f)(x) / D(f)(x)^2    (2)
Evaluated at x = s this gives f(s) D^(2)(f)(s)/D(f)(s)^2 = 0, since f(s) = 0 and, for a single root, D(f)(s) ≠ 0.
The arbitrary precision floating point arithmetic, which is provided by most computer algebra systems, can be used to demonstrate what quadratic convergence means "in real life". As an example, we compute the square root of 9 using Newton's iteration to solve the equation x² − 9 = 0, starting with x₀ = 5. As expected, the number of correct digits doubles with each iteration.
f := x -> x^2 - 9:
Digits := 80:
xk := 5.0:
to 8 do xk := F(xk); lprint(xk); od:
3.4000000000000000000000000000000000000000000000000000000000000000000000000000000
3.0000915541313801785305561913481345845731288624399176012817578393224994277866789
3.0000000000000000003252606517456513302198682555233168260482212782050131565335491
3.0000000000000000000000000000000000000000000000000000000000000000000000000000518
The convergence result is only valid for single roots, as the first derivative of f appears in the denominator of (2); i.e., the result is only valid if f′(s) ≠ 0. The behavior of Newton's iteration for an equation with multiple roots is the next topic we want to discuss using Maple. Let us assume that f(x) has a zero of multiplicity n at x = s. We therefore define f(x) to be
f := x -> (x - s)^n * g(x):
where g(s) ≠ 0. Again we inspect the first derivative of F(x). If F′(s) ≠ 0 then the iteration converges only linearly.
dF := normal(D(F)(x));
Taking the limit of the above expression for x → s,
limit(dF, x = s);
(n − 1)/n
We have just proven that Newton's iteration converges linearly with the factor (n − 1)/n if f(x) has a zero of multiplicity n. Thus, e.g., convergence is linear with the factor 1/2 for double roots.
Newton's iteration also has a nice geometrical interpretation. Starting with the approximation x_k, the next value x_{k+1} of the iteration is the intersection of the tangent to f(x) at [x_k, f(x_k)] with the x-axis. This property can also be proven with Maple. We set up an equation p(x) = a·x + b for the tangent line. p(x) must interpolate [x_k, f(x_k)] and must have the same derivative as f(x) at x = x_k; these two conditions determine the parameters a and b of the tangent p(x).
We have claimed that the intersection of the tangent p(x) with the x-axis is the next Newton approximation. If the equation p(x) = 0 is solved for x, again the iteration function (1) is obtained. This proves that the geometrical interpretation is correct.
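For readers following along in another system, the same two-condition computation can be reproduced with sympy; in this sketch f(x_k) and f′(x_k) are abbreviated by the symbols f_k and df_k.

import sympy as sp

x, xk, fk, dfk, a, b = sp.symbols('x x_k f_k df_k a b')
p = a * x + b                                  # tangent line ansatz
sol = sp.solve([p.subs(x, xk) - fk,            # p(x_k) = f(x_k)
                sp.diff(p, x) - dfk],          # p'(x_k) = f'(x_k)
               [a, b])
print(sp.simplify(sp.solve(p.subs(sol), x)[0]))   # -> x_k - f_k/df_k, i.e. (1)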
3 Secant Method
The secant method (3) approximates the derivative which appears in Newton's formula by a finite difference:
x_{k+1} = x_k − f(x_k) (x_k − x_{k−1}) / (f(x_k) − f(x_{k−1})).    (3)
As with Newton's method, we want to analyse the convergence with Maple. Using equation (3) we obtain for the errors e_k = x_k − s the recurrence
e₂ = F(s + e₀, s + e₁) − s,
where F denotes the secant iteration function. The right hand side can be expanded into a multivariate Taylor series at e₀ = e₁ = 0. We assume that s is a single root, so f′(s) ≠ 0 must hold. We set f(s) = 0 and compute the first term of the Taylor series expansion:
e₂ = ( D^(2)(f)(s) / (2 D(f)(s)) ) e₀ e₁ + · · ·
If we divide this leading coefficient by e₀ and e₁, we see that the limit of the quotient e₂/(e₀e₁) is a constant different from zero. We assume that the convergence coefficient is p and substitute e₁ = K e₀^p and e₂ = K e₁^p:
K^{p+1} e₀^{p²} = const · e₀^{p+1}.
This equation is valid for all errors e₀. Since the right hand side is constant, the left hand side must also be independent of e₀. This is only the case if the exponent p² − p − 1 of the power of e₀ is zero. This condition is an equation for p which, solved, gives the well known convergence factor
p = (1 + √5)/2 ≈ 1.618
for the secant method.
Having considered the Newton and the secant method, we now want to derive and analyze a new
iteration method (a combination of the Newton and the secant method) to compute a root of a
nonlinear equation. The new method shall use the function values and the first derivatives at two
points. These four data define a degree three (Hermite) interpolation polynomial. A zero of this
polynomial can be taken as next approximation of the root. Unfortunately, the explicit expression
for this zero is rather complex, therefore we propose to use inverse interpolation for the given data.
We then only need to evaluate the resulting polynomial at 0 to obtain the new approximation for the root. In Maple, this is done with the following commands.
p := x -> a*x^3 + b*x^2 + c*x + d:
sol := solve({p(f(x0)) = x0, D(p)(f(x0)) = 1/D(f)(x0),
              p(f(x1)) = x1, D(p)(f(x1)) = 1/D(f)(x1)}, {a, b, c, d}):
F := unapply(subs(sol, p(0)), x0, x1):
The resulting expression is still not very simple. However, if the evaluation of f and f′ is very expensive, it may still pay off, since the convergence rate is 2.73 as we will see. For the convergence analysis we expand
e₂ = F(s + e₀, s + e₁) − s
into a multivariate Taylor series at e₀ = e₁ = 0, as we have done for the secant method. The leading term of the expansion involves D(f)(s), D^(2)(f)(s), D^(3)(f)(s) and D^(4)(f)(s), and it is proportional to e₀² e₁².
As before with the secant and Newton method we consider only single roots, i.e., we assume that D(f)(s) ≠ 0. If this condition holds, then the above equation tells us that in the limit
e₂ / (e₀² e₁²) = const.
Let us again introduce the convergence coefficient p and make the substitutions e₁ = K e₀^p and e₂ = K e₁^p:
K^{p+1} e₀^{p²} = const · e₀^{2+2p}.
This equation must hold for all errors e₀. Since K, p and const are all constant, the exponent of e₀ must be zero:
fsolve(p^2 - 2*p - 2 = 0, p);
[2.732050808, −.732050808]
Thus, the convergence factor is p = 1 + √3 ≈ 2.73 and we have super-quadratic convergence.
With the help of Maple we want to demonstrate the above convergence rate for an example. We use our algorithm to compute the zero s of a test function, starting with two initial values x₀ and x₁ near the root. For every iteration we print the number of correct digits (first column) and its ratio to the number of correct digits in the previous step (second column). This ratio should converge to the convergence rate 2.73. We see that this is the case.
Digits := 500:
for i to 6 do
  x2 := evalf(F(x0,x1));
  d2 := evalf(log[10](abs(x2 - s)));
  if i = 1 then lprint(evalf(d2,20))
  else lprint(evalf(d2,20), evalf(d2/d1,20))
  fi;
  x0 := x1: x1 := x2: d1 := d2:
od:
-7.8214175487946676893 2.8597847216407742880
-23.118600850923949542 2.9558070140989569391
-63.885801375143936026 2.7633939349141321183
-481.80650538330786806 2.7373103717659224742
5 Newton-Cotes Rules
Simpson's rule is a well known quadrature rule for approximating a definite integral
∫_a^b f(x) dx
using three equidistant function values at the endpoints and in the middle of the integration interval. The formula is obtained by interpolating the three function values and by computing the integral of the interpolating polynomial of degree two. The following Maple statements may be used to derive Simpson's rule. We first define a polynomial p(x) = a x² + b x + c of degree two. We then state the interpolation conditions and solve the resulting linear system for the coefficients of the polynomial. Finally we integrate the polynomial and simplify the result.
p := x -> a*x^2 + b*x + c:
sol := solve({p(0) = y0, p(h) = y1, p(2*h) = y2}, {a, b, c}):
Q := simplify(subs(sol, int(p(x), x = 0..2*h)));
Q := 1/3 h (y0 + 4 y1 + y2)
What is the error of this integration rule? Discretization errors can be computed very simply by appropriate series expansions. Let
En := series(int(f(x), x = a..a+2*h) - h/3*(f(a) + 4*f(a+h) + f(a+2*h)), h = 0, 6);
then we obtain
En := −1/90 D^(4)(f)(a) h^5 + O(h^6)
This shows that the error is proportional to h^5. For the composite Simpson rule (n intervals of length 2h, b − a = 2nh) the error therefore is −((b − a)/180) h^4 f^(4)(ξ) for some ξ in (a, b).
Instead of computing the interpolation polynomial, we can use the Maple function interp to
interpolate a polynomial through the points (0; y 0 ); (h; y 1 ) and (2h; y 2 ). By integration we obtain
the same result as above.
Notice that we have normalized the rule by dividing through the interval length.
More generally, we obtain Newton-Cotes Rules by interpolating values with given
equidistant nodes and by integrating the degree n interpolation polynomial. The following procedure
generates such a n + 1-point normalized Newton-Cotes Rule.
int(interp([seq(i*h, i=0.n)], [seq(y[i], i=0.n)], z), z=0.n*h))/(n*h):
With this procedure we can e.g. construct the Trapezoidal rule (n = 1), the Milne Rule
the Weddle Rule
For we obtain the following equidistant 9-point rule which is used by the Matlab function
quad8.
In [7] we can find one sided formulas that can also be generated with Maple in the same way. E.g.
given four equidistant function values (three intervals), find an approximation for the integral over
the 3rd interval:
6 Approximating Derivatives
When replacing derivatives by finite differences one uses relations like e.g.
They are obtained by computing derivatives of the corresponding interpolation polynomial. Relation
(4) is obtained by the Maple statement
Again, we can determine the discretization error with the help of a series expansion.
(D (2) )(y)(x)
Similarly, with the statements
an approximation for y 0 (x) is obtained. The discretization error of this approximation is also of
3 (D (3) )(y)(x)
4 (D (4) )(y)(x) h 3
7 Gauss-Quadrature
The idea of Gauss-Quadrature is to find nodes x i and weights w i so that the quadrature rule
is exact for polynomials of degree as high as possible. For we have to determine the six
unknowns We demand exact values for the integrals of the monomes x j
eqns
eqns := fw1 x1
We can solve this system with Maple:
5g
However, this brute force approach will not work for all values of n. For larger n the system of
nonlinear equations become too complicated for Maple. One has to add some more sophisticated
theory to compute the rules.
It is our goal to find nodes and weights to get an exact rule for polynomials of degree up to 2n \Gamma 1.
We can argue as follows: consider the decomposition of P 2n\Gamma1 by dividing by some polynomial
Qn (x) of degree n:
Applying rule (7) on both sides and subtracting yields the following expression for the error:
error :=
Now it is easy to see that we can make the error to zero by the following choices. First take Qn (x)
as the orthogonal polynomial on the interval [\Gamma1; 1] to the scalar product
By this choice and by the definition of orthogonal polynomial the first term in the error vanishes:
is a Legendre Polynomialavailable in Maple as orthopoly[P](n,x).
Second, choose as the nodes the (real) zeros of Qn . Then the second term in the error will also
Finally compute the weights according to Newton-Cotes by integrating the interpolation polynomial
for which is of course again Rn\Gamma1 by the uniqueness of the interpolation polynomial.
Thus Z 1
and the two last error term cancel.
So, we can compute a Gauss quadrature rule e.g. for with the following Maple statements:
\Gamma:1252334085; :1252334085; :3678314990; :5873179543; :7699026742; :9041172564;
We note that numerical errors occur (the weights should be symmetric) because we are computing
the rules here in a well-known unstable way. However, Maple offers us more precision by increasing
the value of Digits. With two runs of the above statements with different precision we are able
to obtain the rules correct to the amount of decimal digits we want.
The Gauss-Lobatto quadrature rule on [\Gamma1; 1] using the end points and two intermediate points
can be computed with a computer algebra system as follows ([1]). Considering the symmetry of
the formula, we make the ansatz
A
and require it to be exact for
If we like to compute a Kronrod extension by adding three more points (by symmetry one of them
will be 0), we could make the ansatz
A
and require exactness for
Thus our rule becomes
For a more elaborated treatment on generating Gauss Quadrature formulas symbolically we refer
to [8].
8 Generation of Explicit Runge-Kutta Formulas
In this section we show how a computer algebra system can be used to derive explicit Runge-Kutta
formulas. Such formulas are used to solve systems of differential equations of the first order. The
solution of the initial value problem
can be approximated by a Taylor series around x k , which is obtained from (8) by repeated differentiation
and replacing y 0 (x) by f
appears.
y (i)
@
@x
f
f
f y
The idea of the Runge-Kutta methods is to approximate the Taylor series (9) up to order m by
using only values of f
\Delta and no derivatives of it.
The general form of a s-stage explicit Runge-Kutta method is
a
s
where s is the number of "stages" and a i;j , b i and c i are real coefficients
To derive the coefficients of such a method, the series expansion of (9) and (10) are equated. This
leads to a set of nonlinear equations for the parameters a i;j , b i and c i which has to be solved.
For this derivation we have to compute the Taylor series expansions of (9) and (10). Maple knows
how to expand a function with two parameters that both depend on x, but we have to inform
Maple, that y 0 (x) has to be replaced by f
whenever it appears. We do this by overwriting
the derivative of the operator y.
In this result, D 1 (f)(x; y(x)) stands for the derivative of f with respect to the first argument, i.e.
\Delta . In order to make the result more readable, we define some alias substitutions.
We are now ready to derive the parameters of a Runge-Kutta formula for which is of order
3:
The variable TaylorPhi corresponds to \Phi in equation (9).
For the Runge-Kutta scheme we get the following Taylor series. Note that we keep the parameters
a i;j , b i and c i in symbolic form. RungeKuttaPhi corresponds to \Phi in equation (10).
The difference d of the two polynomials TaylorPhi and RungeKuttaPhi should be zero. We
consider d as a polynomial in the unknowns h, F, Fx, Fy, Fxx, etc. and set the coefficients of that
polynomial to zero. This gives us a nonlinear system of equations which has to be solved.
eqns := -coeffs(d, [h,F,Fx,Fy,Fxx,Fxy,Fyy])-;
eqns
\Gammab3
\Gammab3
For we found two (parameterized) solutions which can be represented by the following
coefficient schemes. Note that in the first solution, the unknowns c 3 and b 3 are free parameters,
i.e. they can take on any value. This is indicated by the entries in the
solution set. For the second solution, a 3;2 is a free parameter (see also Figure 1). From the second
solution we get the method of Heun of third order if we set a
Figure
1: Three Stage Runge-Kutta Methods of Order 3.
For further information on this topic we refer to [4].
9 Methods of Ritz and Galerkin
This section shows an example where the computer algebra systems takes over the hard hand work
and the students can understand the general steps of the method. Consider the boundary value
problem
For
2 x) the solution is
Equation (11) is obtained if we want to minimize the functional
In the method of Ritz the solution u is approximated by an ansatz (a linear combination of known
functions OE
Introducing y(x) in (12) we obtain a quadratic form in the unknown coefficients c j . Minimizing
this quadratic form gives us values for c j and an approximation y(x).
The principle is easy to describe but to compute a concrete example is rather tedious, even if we
choose only
and OE 2
Both ansatz-functions do satisfy the
boundary conditions.
With Maple we can compute and plot the approximation with the following statements:
x
Not every differential equation minimizes a functional. In the method of Galerkin one tries to
solve (11) also by the ansatz (13). The goal is to choose the coefficients c j such that the residual
small in some sense. For this we have to choose another set of functions
(x)g. The coefficients c j are now computed in such a way that the residual is orthogonal
to the space spanned by the functions /
We obtain this way again a system of linear equations for the coefficients c j .
x
Conclusions
In this paper we have given some examples on how we use computer algebra systems in our scientific
computing courses. Many numerical methods and classical proofs can be developed with only a
few statements in a computer algebra system.
However, it might be difficult sometimes to find the right ones, this method therefore is in general
still restricted to the reproduction of classical results. The use of a computer algebra system also
requires much experience, as it is not always easy to find an elegant way to the result one is
expecting.
We also made the experience that it may be particularly complicated to convince a computer
algebra system to perform a specific task. As an example take the convergence analysis of section 3.
We came across the expression (a were interested in the exponent if this expression is
written as a b . How is b obtained? Right, taking the logarithm to the base a. The result however
does not simplify, even not after using the simplify command.
a p a
a
What is the problem? Maple does not know as much as we do. Maple cannot simplify this
expression as it assumes a and p to be complex numbers. Obviously, a and p are real and positive
in our context, but Maple has to be informed about this fact using the assume facility. This
can be done directly in the simplify command for all the indeterminants which appear in the
expression to be simplified or for each unknown with the assume command. The symbol ~ signals
that assumptions have been made on a variable.
a is assumed to be real and positive
is assumed to be real
Look also at the discussion of the discretization error in chapter 6. We have computed a series
expansion of an expression comparable to the following one:
@x
Which leading term do you expect if this expression is expanded into a series? Maple gives you
the following answer:
@x
This result may be surprising for a student. The leading coefficient is indeed zero, but Maple does
not recognize this zero automatically. In general, it is a particularly difficult problem to recognize
zeros, but in this example the above result can be simplified using a special option to the command
simplify.
Computer algebra systems still have to progress. They are not yet a replacement for paper and
pencil.
We also do not want to conceal that computer algebra systems still have bugs and produce erroneous
results or results which are only valid under some assumptions. It is also very important to
demonstrate this fact to students. Students should learn, that results cannot blindly be trusted.
Whenever a numerical method is derived, the result has to be compared with ones expectations.
Nevertheless, a computer algebra system is a very powerful tool to be used in teaching numerical
methods. Many examples in this article have proven it. Further examples on the use of computer
algebra systems can be found e.g. in [3, 2, 5].
--R
Adaptive Quadrature - Revisited
The Billiard Problem
Symbolic Computation of Explicit Runge-Kutta Formulas
The Maple Technical Newsletter
Scientific Computing
chapter
--TR | finite elements;numerical methods;computer algebra;maple;rayleigh-ritz;galerkin method;runge-kutta method;nonlinear equations;quadrature formulas |
333849 | Edge-Bandwidth of Graphs. | The edge-bandwidth of a graph is the minimum, over all labelings of the edges with distinct integers, of the maximum difference between labels of two incident edges. We prove that edge-bandwidth is at least as large as bandwidth for every graph, with equality for certain caterpillars. We obtain sharp or nearly sharp bounds on the change in edge-bandwidth under addition, subdivision, or contraction of edges. We compute edge-bandwidth for Kn, Kn,n, caterpillars, and some theta graphs. | INTRODUCTION
A classical optimization problem is to label the vertices of a graph with distinct integers
so that the maximum difference between labels on adjacent vertices is minimized. For
a graph G, the optimal bound on the differences is the bandwidth B(G). The name arises
from computations with sparse symmetric matrices, where operations run faster when the
matrix is permuted so that all entries lie near the diagonal. The bandwidth of a matrix
M is the bandwidth of the corresponding graph whose adjacency matrix has a 1 in those
positions where M is nonzero. Early results on bandwidth are surveyed in [2] and [3].
In this paper, we introduce an analogous parameter for edge-labelings. An edge-
numbering (or edge-labeling) of a graph G is a function f that assigns distinct integers
to the edges of G. We let B 0 (f) denote the maximum of the difference between labels
assigned to adjacent (incident) edges. The edge-bandwidth B 0 (G) is the minimum of B 0 (f)
over all edge-labelings. The term "edge-numbering" is used because we may assume that
f is a bijection from E(G) to the first jE(G)j natural numbers.
We use the notation B 0 (G) for the edge-bandwidth of G because it is immediate that
the edge-bandwidth of a graph equals the bandwidth of its line graph. Thus well-known
y Research supported in part by NSA/MSP Grant MDA904-93-H-3040.
Running head: EDGE-BANDWIDTH
AMS codes: 05C78, 05C35
Keywords: bandwidth, edge-bandwidth, clique, biclique, caterpillar
Written June, 1997.
elementary bounds on bandwidth can be applied to line graphs to obtain bounds on edge-
bandwidth. We mention several such bounds. We compute edge-bandwidth on a special
class where all these bounds are arbitrarily bad.
The relationship between edge-bandwidth and bandwidth is particularly interesting.
Always (G), with equality for caterpillars of diameter more than k in which
every vertex has degree 1 or k + 1. Among forests, B 0 (G) - 2B(G), which is almost sharp
for stars. More generally, if G is a union of t forests, then
Chv'atalov'a and Opatrny [5] studied the effect on bandwidth of edge addition, con-
traction, and subdivision (see [22] for further results on edge addition). We study these
for edge-bandwidth. Adding or contracting an edge at most doubles the edge-bandwidth.
Subdividing an edge decreases the edge-bandwidth by at most a factor of 1=3. All these
bounds are sharp within additive constants. Surprisingly, subdivision can also increase
edge-bandwidth, but at most by 1, and contraction can decrease it by 1.
Because the edge-bandwidth problem is a restriction of the bandwidth problem, it
may be easier computationally. Computation of bandwidth is NP-complete [17], remaining
so for trees with maximum degree 4 [8] and for several classes of caterpillar-like graphs.
Such graphs generally are not line graphs (they contain claws). It remains open whether
computing edge-bandwidth (computing bandwidth of line graphs) is NP-hard.
Due to the computational difficulty, bandwidth has been studied on various special
classes. Bandwidth has been determined for caterpillars and for various generalizations
of caterpillars ([1,11,14,21]), for complete k-ary trees [19], for rectangular and triangular
grids [4,10] (higher dimensions [9,15]), for unions of pairwise internally-disjoint paths with
common endpoints (called "theta graphs" [6,13,18]), etc. Polynomial-time algorithms exist
for computing bandwidth for graphs in these classes and for interval graphs [12,20]. We
begin analogous investigations for edge-bandwidth by computing the edge-bandwidth for
cliques, for equipartite complete bipartite graphs, and for some theta graphs.
2. RELATION TO OTHER PARAMETERS
We begin by listing elementary lower bounds on edge-bandwidth that follow from
standard arguments about bandwidth when applied to line graphs.
PROPOSITION 1. Edge-bandwidth satisfies the following.
a subgraph of G.
are the components of G.
c)
Proof: (a) A labeling of G contains a labeling of H. (b) Concatenating labelings of the
components achieves the lower bound established by (a). (c) The edges incident to a single
vertex induce a clique in the line graph. The lowest and highest among these labels are at
least
PROPOSITION 2.
l
diam (L(H))
Proof: This is the statement of Chung's ``density bound'' [3] for line graphs. Every labeling
of a graph contains a labeling of every subgraph. In a subgraph H, the lowest and
highest labels are at least e(H) \Gamma 1 apart, and the edges receiving these labels are connected
by a path of length at most diam (L(H)), so by the pigeonhole principle some consecutive
pair of edges along the path have labels differing by at least (e(H) \Gamma 1)=diam (L(H)).
Subgraphs of diameter 2 include stars, and a star in a line graph is generated from
an edge of G with its incident edges at both endpoints. The size of such a subgraph is at
most yielding the bound B 0 (G) E(G). This
is at most \Delta(G) \Gamma 1, the lower bound from Proposition 1. Nevertheless, because of the
way in which stars in line graphs arise, they can yield a better lower bound for regular or
nearly-regular graphs. We develop this next.
PROPOSITION 3. For F ' E(G), let @(F ) denote the set of edges not in F
that are incident to at least one edge in F . The edge-bandwidth satisfies
Proof: This is the statement of Harper's ``boundary bound'' [9] for line graphs. Some set
F of k edges must be the set given the k smallest labels. If m edges outside this set have
incidences with this set, then the largest label on the edges of @F is at least k +m, and
the difference between the labels on this and its incident edge in F is at least m.
2.
Proof: We apply Proposition 3 with Each edge uv is incident to d(u)
other edges. Some edge must have the least label, and this establishes the lower bound.
Although these bounds are often useful, they can be arbitrarily bad. The theta graph
is the graph that is the union of m pairwise internally-disjoint paths with
common endpoints and lengths l . The name "theta graph" comes from the case
3. The bandwidth is known for all theta graphs, but settling this was a difficult process
finished in [18]. When the path lengths are equal, the edge-bandwidth and bandwidth both
equal m, using the density lower bound and a simple construction. The edge-bandwidth
can be much higher when the lengths are unequal. Our example showing this will later
demonstrate sharpness of some bounds.
Our original proof of the lower bound was lengthy. The simple argument presented
here originated with Dennis Eichhorn and Kevin O'Bryant. It will be generalized in [7] to
compute edge-bandwidth for a large class of theta graphs.
Example A. Consider
a denote the edges of the ith path of length 3, and let e be the edge incident to all
a i 's at one end and to all c i 's at the other end. Since
2, the first k
edges in the list a are together incident to exactly m other edges,
and larger sets are incident to at most edges. Thus the best lower bound from
Proposition 3 is at most m.
Nevertheless, 3)=2e. For the upper bound, we assign the
labels in order to a's, b's, and c's, inserting e before b dm=2e . The difference between labels
of incidence edges is always at most m except for incidences involving e, which are at most
since e has the middle label.
To prove the lower bound, consider a numbering f
)gg. Comparing the edges
with labels ff; f(e); ff 0 yields I be the interval [ff \Gamma k; ff
By construction, I contains the labels of all a's, all c's, and e. If f(a
then also I. By the choice of ff; ff 0 , avoiding this requires ff
each label is assigned only once and the label f(e) cannot play this
role, only ff \Gamma ff 0 of the b's can have labels outside I. Counting the labels we have forced
into I yields jIj - On the other hand,
Thus k - (3m \Gamma 3)=2, as desired.
3. EDGE-BANDWIDTH VS. BANDWIDTH
In this section we prove various best-possible inequalities involving bandwidth and
edge-bandwidth. The proof that steps. All steps are con-
structive. When f or g is a labeling of the edges or vertices of G, we say that f(e) of g(v)
is the f-label or g-label of the edge e or vertex v. An f-label on an edge incident to u is an
incident f-label of u.
LEMMA 5. If a finite graph G has minimum degree at least two, then
Proof: From an optimal edge-numbering f (such that B 0 we define a
labeling g of the vertices. The labels used by g need not be consecutive, but we show that
when u and v are adjacent.
We produce g in phases. At the beginning of each phase, we choose an arbitrary
unlabeled vertex u and call it the active vertex. At each step in a phase, we select the
unused edge e of smallest f-label among those incident to the active vertex. We let f(e) be
the g-label of the active vertex, mark e used, and designate the other endpoint of e as the
active vertex. If the new active vertex already has a label, we end the phase. Otherwise,
we continue the phase.
When we examine a new active vertex, it has an edge with least incident label, because
every vertex has degree at least 2 and we have not previously reached this vertex. Each
phase eventually ends, because the vertex set is finite and we cannot continue reaching
new vertices. The procedure assigns a label g(u) for each u 2 V (G), since we continue to
a new phase as long as an unlabeled vertex remains.
It remains to verify that E(G). Suppose that
each vertex is assigned the f-label of an incident
edge, we have incident to u; v, respectively. If the edge uv is one of e; e 0 , then e and
e 0 are incident, which implies that jg(u) \Gamma
Otherwise, we have some other value c. We may assume that a ! b by
symmetry. If a ! c and
Thus we may assume that b ? c. In particular, g(v) is not the least f-label incident to v.
The algorithm assigns v a label when v first becomes active, using the least f-label
among unused incident edges. When v first becomes active, only the edge of arrival is a
used incident edge. Thus g(v) is the least incident f-label except when v is first reached via
the least-labeled incident edge. In this case, g(v) is the second smallest incident f-label.
Thus c is the least f-label incident to v and v becomes active by arrival from u. This
requires a and eliminates the bad case.
LEMMA 6. If G is a tree, then
Proof: Again we use an optimal edge-numbering f to define a vertex-labeling g whose
adjacent vertices differ by at most B 0 (f ). We may assume that the least f-label is 1,
occurring on the edge View the edge e
as the root of G. For each vertex
be the f-label of the edge incident
to x along the path from x to the root.
If xy 2 E(G) and xy 6= uv, then we may assume that y is on the path from x to the
root. We have assigned and g(y) is the f-label of an edge incident to y, so
Our labeling g fails to be the desired labeling only because we used 1 on both u and
v. Observe that the largest f-label incident to uv occurs on an edge incident to u or on
an edge incident to v but not both; we may assume the latter. Now we change g(u) to 0.
Because the differences between f(uv) and f-labels on edges incident to u were less than
produces the desired labeling g.
THEOREM 7. For every graph G,
Proof: By Proposition 1b, it suffices to consider connected graphs. Let f be an optimal
edge-numbering of G; we produce a vertex labeling g. Lemma 6 applies when G is a
tree. Otherwise, G contains a cycle, and iteratively deleting vertices of degree 1 produces
a subgraph G 0 in which every vertex has degree at least 2. The algorithm of Lemma
5, applied to the restriction of f to G 0 , produces a vertex labeling g of G 0 in which (1)
adjacent vertices have labels differing by at most B 0 (f ), and (2) the label on each vertex
is the f-label of some edge incident to it in G 0 .
To obtain a vertex labeling of G, reverse the deletion procedure. This iteratively adds
a vertex x adjacent to a vertex y that already has a g-label. Assign to x the f-label of the
edge xy in the full edge-numbering f of G. Now g(x) and g(y) are the f-labels of two edges
incident to y in G, and thus The claims (1) and (2) are preserved,
and we continue this process until we replace all vertices that were deleted from G.
A caterpillar is a tree in which the subtree obtained by deleting all leaves is a path.
One of the characterizations of caterpillars is the existence of a linear ordering of the edges
such that each prefix and each suffix forms a subtree. We show that such an ordering is
optimal for edge-bandwidth and use this to show that Theorem 7 is nearly sharp.
PROPOSITION 8. If G is a caterpillar, then B G be the
caterpillar of diameter d in which every vertex has degree k or 1. If d ? k, then
Proof: Let G be a caterpillar. Let v be the non-leaf vertices of the dominating
path. The diameter of G is d. Number the edges by assigning labels in the following order:
first the pendant edges incident to then the pendant edges incident to v 2 ,
then are incident only at ordering places all pairs
of incident edges within positions of each other. Since B
G, equality holds.
For a caterpillar G with order n and diameter d, Chung's density bound yields B(G) -
G be the caterpillar of diameter d in which every vertex has degree k
1. We have vertices of degree k
d ? k, we have B(G) - k.
On the other hand, we have observed that B 0 (G) - caterpillars. By
Theorem 7, equality holds throughout for these special caterpillars.
Theorem 7 places a lower bound on B 0 (G) in terms of B(G). We next establish an
upper bound. The arboricity is the minimum number of forests needed to partition the
edges of G.
THEOREM 9. If G has arboricity t, then
inequality is almost sharp for stars.
Proof: Given an optimal number g of V (G), we construct a labeling f of E(G). Let
be a decomposition of G into the minimum number of forests. In each component
of each G i , select a root. Each edge of G i is the first edge on the path from one of its
endpoints to the root of its component in G i ; for e 2 E(G i ), let v(e) denote this endpoint.
Each vertex of each forest heads toward the root of its component in that forest along
exactly one edge, so the f-labels of the edges are distinct. Each f-label arises from the g-
label of one of its endpoints. Thus the f-labels of two incident edges arise from the g-labels
of vertices separated by distance at most 2 in G. Also the indices of the forests containing
these edges differ by at most t \Gamma 1. Thus when e; e 0 are incident we have
The star with m edges has bandwidth dm=2e and edge-bandwidth so the
equality is within one of optimality when G is a star.
4. EFFECT OF EDGE OPERATIONS
In this section, we obtain bounds on the effect of local edge operations on the edge-
bandwidth. The variations can be linear in the value of the edge-bandwidth, and our
bounds are optimal except for additive constants. We study addition, subdivision, and
contraction of edges.
THEOREM 10. If H is obtained from G by adding an edge, then
(G). Furthermore, for each k there are examples
Proof: The first inequality holds because G is a subgraph of H. For the second, let g
be an optimal edge-numbering of G; we produce an edge-numbering f of H such that
If e is not incident to an edge of G, form f from g by giving e a new label higher
than the others. If only one endpoint of e is incident to an edge e 0 of G, form f by leaving
the g-labels less than g(e 0 ) unchanged, augmenting the remaining labels by 1, and letting
Thus we may assume that the new edge e joins two vertices of G. Our construction
for this case modifies an argument in [22]. Let e i be the edge such that g(e i
q be the smallest and largest indices of edges of G incident to e,
respectively, and let
The idea in defining f from g is to "fold" the ordering at r, renumbering out from
there so that e p and e q receive consecutive labels, and inserting e just before this. The
renumbering of the old edges is as follows
Finally, let After the edges with g-labels higher than
q or lower than p are exhausted, the new numbering leaves gaps. For edges
we have jf(e the possible added 1 stems from the insertion
of e. When r is between i and j, the actual stretch is smaller.
It remains to consider incidences involving e. Suppose that e is incident to e.
Note that 1 - f(e 0 may assume that 1 - f(e
and e q are incident to the same endpoint of e, then 1 -
If e p and e q are incident to opposite endpoints of e, then e 0 is incident to e p or e q . In these
two cases, we have differs from p or q,
respectively, by at most B(g), we obtain 1
The bound is nearly sharp. Let G be the caterpillar of diameter
of degree used in Proposition 8. Recall that G has vertices and that
k. We form H by adding the edge . The graph H is a cycle of
length k plus pendant edges; each vertex of the cycle has degree k except for two adjacent
vertices of degree k + 1. The diameter of L(H) is bk=2c edges. By
Proposition 2, we obtain B 0 (H)
Subdividing an edge uv means replacing uv by a path u; w; v passing through a new
vertex w. If H is obtained from G by subdividing one edge of G, then H is an elementary
subdivision of G. Edge subdivision can reduce the edge-bandwidth considerably, but it
increases the edge-bandwidth by at most one.
THEOREM 11. If H is an elementary subdivision of G, then (2B 0 (G)+
these bounds are
sharp.
Proof: Suppose that H is obtained from G by subdividing edge e. From an optimal edge-
numbering g of G, we obtain an edge-numbering of H by augmenting the labels greater
than g(e) and letting the labels of the two new edges be g(e) and g(e) + 1. This stretches
the difference between incident labels by at most 1.
To show that this bound is sharp, consider
In Example A, we proved that deleting fb i g from the optimal
numbering there, we obtain a numbering that yields B 1. The graph G 0 is
obtained from G by a sequence of m edge subdivisions, half of which must increase the
edge-bandwidth.
To prove the lower bound on B 0 (H), we consider an optimal edge-numbering f of
H and obtain an edge-numbering of G. For the edges introduced to form H after
deleting e, let may assume that
by leaving the f-labels below p and in [r decreasing those in
[p +1; r] and above q by one, and setting r. The differences between labels on edges
belonging to both G and H change by at most one and increase only when the difference
is less than B 0 (f ). For incidences involving e, the incident edge ffl was incident in H to
e 0 or e 00 . The difference In the
first case, the difference increases by r In the second, it increases by
. Whether B 0 (H)
is even or odd, this establishes the bound claimed.
To show that this bound is sharp, let G be the graph formed by adding
edges at each vertex of K 3 . This graph has 3k edges, and the diameter of its line graph
is 2, so 2. Let x; z be the vertices of the triangle,
with sets X;Y;Z of incident pendant edges, respectively. Let H be the graph obtained
by subdividing xz to obtain x 0 incident to x and z 0 incident to z. Since L(H) has 3k
edges and diameter 3, we have B 0 (H) - k. Assigning 3k labels to E(H) in the order
Replacing x 0 and z 0 with a label for xz in the
middle of Y yields Whether k is even or odd, this achieves the
bound proved above.
Contracting an edge uv means deleting the edge and replacing its endpoints by a single
combined vertex w inheriting all other edge incidences involving u and v. Contraction
tends to make a graph denser and thus increase edge-bandwidth. In some applications,
one restricts attention to simple graphs and thus discards loops or multiple edges that arise
under contraction. Such a convention can discard many edges and thus lead to a decrease
in edge-bandwidth. In particular, contracting an edge of a clique would yield a smaller
clique under this model and thus smaller edge-bandwidth. For the following theorem, we
say that H is an elementary contraction of G if H is obtained from G by contracting
one edge and keeping all other edges, regardless of whether loops or multiple edges arise.
Edge-bandwidth is a valid parameter for multigraphs.
THEOREM 12. If H is an elementary contraction of G, then
these bounds are sharp for each value of B 0 (G).
Proof: Let e be the edge contracted to produce H. For the upper bound, let g be an
optimal edge-numbering of G, and let f be the edge-numbering of H produced by deleting
e from the numbering. In particular, leave the g-labels below g(e) unchanged and decrement
those above g(e) by 1. Edges incident in H have distance at most two in L(G), and
their distance in L(G) is two only if e lies between them. Thus the difference between
their g-labels is at most 2B 0 (g), with equality only if the difference between their f-labels
is
Equality holds when G is the double-star (the caterpillar with two vertices of degree
vertices of degree 1) and e is the central edge of G, so H is the star K 1;2k .
We have observed that B 0
For the lower bound, let f be an optimal edge-numbering of H, and let g be the edge-
numbering of G produced by inserting e into the numbering just above the edge e 0 with
lowest f-label among those incident to the contracted vertex w in H. In particular, leave f-
labels up to f(e 0 ) unchanged, augment those above f(e 0 ) by 1, and let )+1. The
construction and the argument depend on the preservation of loops and multiple edges.
Edges other than e that are incident in G are also incident in H, and the difference between
their labels under g is at most one more than the difference under f . Edges incident to
e in G are incident to e 0 in H and thus have f-label at most f(e
g-label differs from that of e 0 by at most B 0 (f ).
The lower bound must be sharp for each value of B 0 (G), because successive contractions
eventually eliminate all edges and thus reduce the bandwidth.
5. EDGE-BANDWIDTH OF CLIQUES AND BICLIQUES
We have computed edge-bandwidth for caterpillars and other sparse graphs. In this
section we compute edge-bandwidth for classical dense families, the cliques and equipartite
complete bipartite graphs. Give the difficulty of bandwidth computations, the existence
of exact formulas is of as much interest as the formulas themselves.
2.
Proof: Lower bound. Consider an optimal numbering. Among the lowest
values there must be edges involving at least dn=2e vertices of Kn . Among the highest
there must be edges involving at least bn=2c vertices of Kn . Since
incident edges with labels among the lowest
among the highest
1. Therefore,
l nm
nk
l nm
Upper bound. To achieve the bound above, let X;Y be the vertex partition with
ng. We assign the lowest
values to
the edges within X. We use reverse lexicographic order, listing first the edges with higher
vertex 2, then higher vertex 3, etc. We assign the highest
values to the edges within
Y by the symmetric procedure. Thus
Note that the lowest label on an edge incident to vertex dn=2e is 1
The labels between these ranges are assigned to the "cross-edges" between X and Y .
The cross-edges involving the vertex dn=2e 2 X receive the highest of the central labels,
and the cross-edges involving dn=2e not dn=2e) receive the lowest of these
labels. Since the highest cross-edge label is
and the lowest label of an edge
incident to dn=2e is 1+
, the maximum difference between labels on edges incident
to dn=2e is precisely the lower bound on B 0 (Kn ) computed above. This observation holds
symmetrically for the edges incident to dn=2e
We now procede iteratively. On the high end of the remaining gap, we assign the
values to the remaining edges incident to dn=2e \Gamma 1. Then on the low end, we assign values
to the remaining edges incident to dn=2e + 2. We continue alternating between the top
and the bottom, completing the edges incident to the more extreme labels as we approach
the center of the numbering. We have illustrated the resulting order for K 8 . Each time we
insert the remaining edges incident to a vertex of X, the rightmost extreme moves toward
the center at least as much from the previous extreme as the leftmost extreme moves
toward the left. Thus the bound on the difference is maintained for the edges incident to
each vertex. The observation is symmetric for edges incident to vertices of Y .
For equipartite complete bipartite graphs, we have a similar construction involving
low vertices, high vertices, and cross-edges.
\Gamma 1.
Proof: Lower bound. We use the boundary bound of Proposition 3 with
1.
Every set of k edges is together incident to at least n vertices, since a bipartite graph
with n vertices has at most k \Gamma 1 edges. Since K n;n has 2n vertices, at most
edges remain when these vertices are deleted. Thus when jF
We construct an ordering achieving this bound. Let
fy be the partite sets. Order the vertices as We alternately
finish a vertex from the beginning of L and a vertex from the end. When finishing
a vertex from the beginning, we place its incident edges to vertices earlier in L at the
end of the initial portion of the numbering f that has already been determined. When
finishing a vertex from the end of L, we place its incident edges to vertices later in L at
the beginning of the terminal portion of f that has been determined. We do not place an
edge twice. When we have finished each vertex in each direction, we have placed all edges
in the numbering. For example, this produces the following edge ordering for
It suffices to show that for the jth vertex v j 2 L, there are at least n
edges that come before the first edge incident to v or after the last edge incident to v. For
are exactly
edges before the first appearance of v j and exactly
edges after its last appearance, which matches the argument in the lower
bound. As j decreases, the leftmost appearance of v j moves leftward no more quickly than
the rightmost appearance; we omit the numerical details. The symmetric argument applies
--R
The bandwidth problem for graphs and matrices - a survey
Labelings of graphs
Optimal labelling of a product of two paths
The bandwidth problem and operations on graphs
The bandwidth of theta graphs
The edge-bandwidth of theta graphs (in preparation)
Complexity results for bandwidth minimization.
Optimal assignments of numbers to vertices
On the bandwidth of triangulated triangles
Computing the bandwidth of interval graphs
The bandwidth of the graph formed by n meridian lines on a sphere (Chi- nese
The bandwidth of caterpillar graphs
Compression operators and a solution to the bandwidth problem of the product of n paths
The bandwidth minimization problem for caterpillars with hair length 3 is NP-complete
Bandwidth of theta graphs with short paths.
Bandwidth of the complete k-ary tree
An O(n log n) algorithm for bandwidth of interval graphs
Maximum bandwidth under edge addition
--TR
--CTR
Balogh , Dhruv Mubayi , Andrs Pluhr, On the edge-bandwidth of graph products, Theoretical Computer Science, v.359 n.1, p.43-57, 14 August 2006
Tiziana Calamoneri , Annalisa Massini , Imrich Vro, New results on edge-bandwidth, Theoretical Computer Science, v.307 n.3, p.503-513, 14 October
Oleg Pikhurko , Jerzy Wojciechowski, Edge-bandwidth of grids and tori, Theoretical Computer Science, v.369 n.1, p.35-43, 15 December 2006 | bandwidth;edge-bandwidth;biclique;caterpillar;clique |
333895 | Backward Error Analysis for Numerical Integrators. | Backward error analysis has become an important tool for understanding the long time behavior of numerical integration methods. This is true in particular for the integration of Hamiltonian systems where backward error analysis can be used to show that a symplectic method will conserve energy over exponentially long periods of time. Such results are typically based on two aspects of backward error analysis: (i) It can be shown that the modified vector fields have some qualitative properties which they share with the given problem and (ii) an estimate is given for the difference between the best interpolating vector field and the numerical method. These aspects have been investigated recently, for example, by Benettin and Giorgilli in [ J. Statist. Phys., 74 (1994), pp. 1117--1143], by Hairer in [Ann. Numer. Math., 1 (1994), pp. 107--132], and by Hairer and Lubich in [ Numer. Math., 76 (1997), pp. 441--462]. In this paper we aim at providing a unifying framework and a simplification of the existing results and corresponding proofs. Our approach to backward error analysis is based on a simple recursive definition of the modified vector fields that does not require explicit Taylor series expansion of the numerical method and the corresponding flow maps as in the above-cited works. As an application we discuss the long time integration of chaotic Hamiltonian systems and the approximation of time averages along numerically computed trajectories. | Introduction
. In this paper, we consider the relationship between solutions
to a given system of ordinary differential equations (vector fields)
d
dt
numerical approximations
to them, and solutions to associated modified equations
d
dt
The vector fields ~
are formulated in terms of an asymptotic expansion in the
step-size ffit, i.e., are chosen such that the numerical solution can formally be inter-
preted, with increasing index i, as the more and more accurate solution of the modified
equation. Previous papers on backward error analysis for differential equations include
those by Warming & Hyett [38], Griffiths & Sanz-Serna [12], Beyn [6],
Feng [10], Fiedler & Scheurle [11], and Sanz-Serna [31]. Another early reference
to related ideas is Moser [24] who discusses the approximation of a symplectic
map near an equilibrium by the flow map of a Hamiltonian vector field.
More recently, general formulas for the computation of the modified vector fields
~
derived by Hairer [16], Calvo, Murua & Sanz-Serna [7], Be-
nettin & Giorgilli [5], and Reich [26]. In papers by Neishtadt [25], Benettin
& Giorgilli [5], and Hairer & Lubich [18], the question of closeness of the numerical
approximations and the solutions of the modified equations has been addressed.
Konrad-Zuse-Zentrum, Takustr. 7, D-14195 Berlin, current address: Department of Mathematics
and Statistics, University of Surrey, Guildford, Surrey GU2 5XH, UK (S.Reich@surrey.ac.uk).
In particular, it has been shown in these papers that the difference can be made
exponentially small in the step-size ffit, i.e.
provided the vector field Z and the numerical one step method \Psi ffit are real analytic
are appropriate constants, \Phi ffit; ~
denotes the time-ffit-flow map
of the vector field ~
, and the index i (ffit) has been chosen such that the difference
is minimized.
Backward error analysis is of utmost importance for an understanding of the
qualitative behavior of symplectic methods [32] for Hamiltonian problems. It has
been shown by Hairer [16], Calvo, Murua & Sanz-Serna [7], Reich [26], and
Benettin & Giorgilli [5] that for symplectic discretizations, the modified vector
fields ~
are Hamiltonian. For special cases see also the papers by Auerbach &
Friedman [4] and Yoshida [40]. The Hamiltonian structure of the modified equations
implies that a symplectic integrator almost preserves the total energy over an
exponentially long period of time [25, 23, 5, 18]. Similarly, the adiabatic invariant of
a Hamiltonian system with rapidly rotating phase is also preserved over an exponentially
long periods of time provided a symplectic method is used [29].
The fact that symplectic methods lead to modified equations that are Hamiltonian
is a special instance of the so called geometric properties of backward error analysis.
By this we mean the following: If the vector field Z belongs to a certain class of
vector fields, like integral preserving or divergence-free vector fields, and the numerical
approximation \Phi ffit also preserves the corresponding quantities, then the modified
vector fields ~
will be in the same class as Z. Besides symplectic methods, special
instances of these geometric aspects have been discussed before. See, for example, the
papers by Reich [26], Hairer & Stoffer [20] and Gonzalez, Higham & Stuart
[13].
In this paper, we revisit backward error analysis by using a simple recursive
scheme for the definition of the modified vector fields ~
first proposed by the
author in the unpublished report [26]. The main advantage of this formulations is
that it does not require Taylor series expansions of the numerical one step method
\Psi ffit and the flow maps \Phi ffit; ~
in terms of the step size ffit as it used in the papers
by Benettin & Giorgilli [5], Hairer [16], and Hairer & Lubich [18]. This in
turn allows for a simple characterization 1 of the geometric properties of the modified
vector fields ~
and a rather simple proof for the exponentially small truncation
error (1.1). Our approach is close to the one discussed by Benettin & Giorgilli [5]
in the sense that we consider general one step methods 2 and that we use a "direct"
approach 3 . However, different techniques are used and we will discuss this in more
detail in x4.
In x5, we consider the numerical integration of a "chaotic" Hamiltonian system
by a symplectic method and discuss the approximation of time-averages along numerically
computed trajectories. We assume that a Poincar'e section [14] can be defined
and that the corresponding Poincar'e section is uniformly hyperbolic. Backward error
1 The general idea can already been found in the report [26].
Hairer & Lubich [18] consider methods that can be represented by P-series [19]. Note that
Runge-Kutta and partitioned Runge-Kutta methods fall under this category.
3 This is in contrast to the "indirect" approach used by Neishtadt [25] where the one step method
is first interpolated by the flow of a time-dependent vector field and averaging in time is then used
to obtain an optimal approximating time-independent vector field.
analysis and the shadowing lemma [33] will be used to show that a numerically computed
trajectory stays close to an exact solution over exponentially long periods of
time. This and a large deviation theorem [36] allow us to discuss the convergence of
long time averages along numerically computed trajectories. The anisotropic Kepler
problem [15] will serve us as a numerical illustration. This problem requires the application
of a symplectic variable step-size method as first discussed by the author in
the technical report [27] and independently by Hairer in [17].
2. The Modified Vector Field Recursion. Let us consider a smooth vector
field
d
dt
its discretization by a one step method [19]
We assume that \Psi ffit : U ae R n ! R n is a smooth map and a method of order p 1,
i.e.
for all x 2 U where \Phi ffit;Z is the time-ffit-flow map of the differential equation (2.1).
As in described in the Introduction, we look for a family of vector fields ~
such that
\Phi ffit; ~
or, equivalently,
for all ffit sufficiently small. Here \Phi 1;X denotes the time-one-flow map of the vector
field X(ffit). The family of modified vector fields X(ffit), ffit 0, is formally defined in
terms of an asymptotic expansion in the step-size ffit, i.e.
The formally infinite sequence of vector fields f\DeltaX i g i=1;::: ;1 can be obtained by
Taylor series expansion of the one step method \Psi ffit , i.e.
x the identity map, and comparison of this series with the expansion of the
time-one-flow map \Phi 1;X(ffit) in terms of ffit. The vector fields \DeltaX i are chosen such
that these two series coincide term by term. This is the general approach followed
by Benetttin & Giorgilli [5] and Hairer [16]. The two papers differ in the
way the Taylor series expansions are written down. But they lead to exactly the
same sequence of vector fields f\DeltaX i g i=1;::: ;1 . We obviously have \DeltaX
for a method of order p.
We now give a recursive definition of the modified vector field X(ffit) that does
not require an explicit Taylor series expansion. This recursion was introduced by the
author in the unpublished report [26]. First we formally introduce the "truncated"
expansions by means of
We obviously have
Let us assume that X i (ffit) has been chosen such that the difference between the time-
one-flow map of X i (ffit) and the numerical one step method \Psi ffit is O(ffit i+1 ). This
suggests to consider the following recursion:
\DeltaX
Indeed, this definition of \DeltaX i+1 implies that X i+1 (ffit), defined by (2.3), generates a
time-one-flow map that is O(ffit i+2 ) away from the numerical method \Psi ffit . This can
be seen from
Thus eqs. (2.3) and (2.4) recursively define the modified vector fields X i
1. The recursion is started with X 1 ffitZ. The generated sequence
is, of course, equivalent to the sequences obtained by using Taylor
series expansions as described in [16, 5].
Throughout this paper, we will exclusively work with the recusion (2.3)-(2.4).
In x3, it will be shown that this leads to a simple characterization of the geometric
properties of the modified vector fields and, in x4, explicit estimates for the difference
between the time-one-flow map of the modified vector field X i (ffit) and the numerical
method will be given. We like point out that these results can also be (and have
been [16, 5, 18, 20, 13]) derived using explicit Taylor series expansion of the flow map
and the numerical method. However, we feel that the application of the recursion
leads to a simplification in the presentation of these results.
3. Geometric Properties of Backward Error Analysis. In this section, we
consider differential equations (2.1) whose corresponding vector field Z belongs to a
certain linear subspace g of the infinite dimensional Lie algebra 4 of smooth vector
fields on R n [21],[1].
Assumption. Given a linear subspace g of the infinite dimensional Lie algebra
of smooth vector fields on R n , let us assume that there is a corresponding subset G of
the infinite dimensional Frechet manifold [21] of diffeomorphisms on R n such that
4 The algebraic operation is the Lie bracket [X;Y ] of two vector fields X and Y [3].
Here T id G is defined as the set of all vector fields X := @ [\Psi ] =0 for which the
one-parametric family of diffeomorphisms \Psi 2 G is smooth in and \Psi
For the linear space (Lie algebra) of Hamiltonian vector fields on R n this is, for
example, the subset of canonical transformations [1]. An important aspect of those
differential equations is that the corresponding flow map \Phi t;Z forms a one-parametric
subgroup in G [21],[1]. Especially in the context of long term integration, it is desirable
to discretize differential equations of this type in such a way that the corresponding
iteration map \Psi ffit belongs to the same subset G as \Phi t;Z . We will call those integrators
geometric integrators.
The following result concerning the backward error analysis of geometric integrators
has been first stated in the unpublished report [26]:
Theorem 3.1. Let us assume that the vector field Z in
d
dt
belongs to a linear subspace g of the Lie algebra of all smooth vector fields on R n . Let
us assume furthermore that
is a geometric integrator for this subspace g, i.e., \Psi ffit 2 G for all ffit 0 sufficiently
small. Then the perturbed vector fields X i defined through the
recursion (2.3)-(2.4) belong to g, i.e.
Proof. The statement is certainly true for us assume that it
also holds for for all ffit 0 sufficiently small. Since
and
for all ffit 0 sufficiently small as well as
we have
and \DeltaX g. This implies X i+1 (ffit) 2 g as required.
Remark. Often the linear subspace g is, in fact, a subalgebra under the Lie
bracket [3]
@
g. But this property is not needed in Theorem 3.1.
Let us discuss five examples:
Example 1. Consider the subspace g of all vector fields that preserve a particular
first integral F : R n ! R. In fact, this space is a subalgebra under the Lie bracket
(3.1). In other words
and
imply that
To show this we differentiate (3.2) w.r.t. x which gives
The same procedure is applied to (3.3). Using these identities and the definition
(3.1) in (3.4) yield the desired result. The corresponding subset G is given by the
F -preserving diffeomorphisms \Psi, i.e.
In fact, let \Psi be a smooth family of F -preserving diffeomorphisms with \Psi
then X := @ [\Psi since
Thus, T id and we can apply Theorem 3.1. In particular, if a numerical method
then the modified vector fields X i possess F as a first integral.
The same result was recently derived by Gonzalez, Higham & Stuart [13] using
a contradiction argument. \Pi
Example 2. Consider the Lie subalgebra of all divergence-free vector fields Z, i.e.
The corresponding subset G are the volume preserving diffeomorphisms,
i.e.
det
@
Again we have T id g. Namely:
@
@x \Psi (x)
trace [@ x@ \Psi (x)] =0
Thus, if the numerical method \Psi ffit is volume conserving, then
the modified vector fields X i are divergence-free. Again, the
same result has been formulated by Gonzalez, Higham & Stuart [13] using a
contradiction argument. \Pi
Example 3. Let an involution 5 S 2 R n\Thetan be given and consider the subspace g
of vector fields Z on R n that satisfy the time-reversal symmetry
This subspace is not a subalgebra under the Lie bracket (3.1). The corresponding
subset G is given by the time-reversible diffeomorphisms \Psi, i.e. \Psi
Let \Psi 2 G be smooth in with \Psi
\Theta S\Psi
which implies that X := @ [\Psi g. It follows that T id and we can apply
Theorem 3.1. Thus, if a numerical method \Psi ffit satisfies the time-reversal symmetry,
then the modified vector fields X i are time-reversible. This result
has been first proven by Hairer & Stoffer in [20] using ideas from [26]. \Pi
Example 4. Let f:; :g denote the Poisson bracket of a (linear) Poisson manifold
. Then the Lie algebra of Hamiltonian vector fields on P is given by
d
dt
R is a smooth function. The corresponding subset G is given by the
set of smooth diffeomorphisms on P that preserve the Poisson bracket f:; :g [1]. Let
\Psi be a family of maps in G with \Psi
for all smooth functions F; . This is the condition for
a vector field X to be locally Hamiltonian. Since P is simply connected, the vector
field is also globally Hamiltonian [3].
If the discrete evolution (2.2) satisfies \Psi ffit 2 G for all ffit ? 0, then \Psi ffit is called
a symplectic method and it follows from Theorem 3.1 that the modified vector fields
are Hamiltonian vector fields on P . This result can also be
found in [5, 16, 26].
If a symplectic method can be expanded as a P-series, then the vector fields X i (ffit)
are globally Hamiltonian even if the phase space P ae R n is not simply connected
[16]. This result applies to all symplectic Runge-Kutta and partitioned Runge-Kutta
methods. Furthermore, symplectic methods defined by a generating function of the
third kind [32] are also always globally Hamiltonian [5]. The same statement is true
for symplectic methods based on the composition of exact flow maps [40]. \Pi
Example 5. Let us now consider differential equations on a matrix Lie group
G ae R n\Thetan [35]. In general, time independent differential equations on G can be
written in the form
d
dt
5 An involution is a non-singular matrix that satisfies S
ae R n\Thetan the Lie algebra of G. Many recent papers (see [8]
and references therein) have been devoted to methods that preserve the Lie group
structure, i.e.
and Y n 2 G implies Y n+1 2 G. Thus, \Psi ffit is a diffeomorphism defined on the
submanifold G ae R n\Thetan . In fact, this submanifold can be characterized, at least
locally, by a set of nonlinear equations which we denote by F (Y
is an F integral preserving map, i.e. F ffi \Psi G. Following Example 1, we
know then that the modified vector fields (as well as the given vector field) satisfy
Hence the modified vector fields X i (ffit) are vector fields on G and
give rise to modified differential equations of type
d
dt
with
and ~
g. See [28] for further results on backward error analysis for
numerical methods on manifolds. \Pi
4. Truncation Error of Backward Error Analysis. We like to derive an
explicit estimate for the norm of the vector fields \DeltaX i+1 and the difference between
the time-one-flow map \Phi 1;X i (ffit) and the numerical approximation \Psi ffit ,
To do so we assume from now on that the vector field Z in (2.1) is real analytic. We
also introduce the following notations: Let B r denote the complex ball of
Let us consider a compact subset K ae R n of phase space and a constant r ? 0 such
that a given real analytic vector field Y is bounded on B r
we define
with
We also define B
To find an estimate for \DeltaX i+1 , as defined in (2.4), we need estimates for the mappings
appearing on the right hand side of (2.4). We start with an estimate for the map \Psi ffit .
Lemma 4.1. Let us assume that the vector field Z in (2.1) is real analytic and that there is a compact subset K of phase space and constants K, R > 0 such that ||Z||_R ≤ K. We also assume that the numerical method Ψ_δt is real analytic. Then there exists a constant M ≥ K such that
    ||Ψ_τ(x) − x|| ≤ |τ| M
for x ∈ K and all sufficiently small step-sizes τ ∈ C.
Proof. Under the given assumptions, the flow map
    Φ_{τ,Z}(x) = x + ∫₀^τ Z(Φ_{t,Z}(x)) dt
is defined for complex valued τ ∈ C with |τ| ≤ R/K, where the integral on the right-hand side is independent of the path from zero to τ. The complexified flow map satisfies
    ||Φ_{τ,Z}(x) − x|| ≤ ∫₀^τ ||Z(Φ_{t,Z}(x))|| |dt| ≤ |τ| K
for |τ| ≤ αR/K, α ∈ [0, 1). Consistency of the numerical method implies that there exists a constant ΔK > 0 such that ||Ψ_τ(x) − Φ_{τ,Z}(x)|| ≤ |τ| ΔK for the (complexified) map Ψ_τ. Take M := K + ΔK. □
Remark. Let us consider an s-stage Runge-Kutta method with coefficients {a_{ij}}_{i,j=1,...,s} and {b_i}_{i=1,...,s} [19] satisfying
    Σ_{j=1}^s |a_{ij}| ≤ d₁ (i = 1, ..., s)   and   Σ_{i=1}^s |b_i| ≤ d₁,
d₁ ≥ 1, and assume that the Runge-Kutta method uniquely⁶ defines a real analytic map Ψ_τ for all step-sizes τ ∈ C with |τ| d₁ ≤ R/K. Then we have M = d₁ K in Lemma 4.1. This follows from the fact that, under the stated assumptions, all stage variables will be in B_R(K), where the vector field Z is bounded by the constant K. A similar statement holds for partitioned Runge-Kutta methods. □
Theorem 4.2. Let the assumptions of Lemma 4.1 be satisfied. Then there exists a family of real analytic vector fields X̃(δt), δt > 0 sufficiently small, such that
    ||Φ_{1,X̃(δt)}(x) − Ψ_δt(x)|| ≤ c δt e^{−γ/δt},   x ∈ K,
with constants c, γ > 0, where p ≥ 1 denotes the order of the method. The family of modified vector fields X̃(δt) satisfies the estimate
    ||X̃(δt) − Z|| ≤ d_p M (δt M / R)^p,
with d_p ≥ 1 a constant depending on the order p of the method.
⁶ For an implicit method, the solution can be obtained by fixed-point iteration if |τ| is sufficiently small.
Proof. We know that X_1(τ) = τZ and that Ψ_δt is a method of order p ≥ 1. Thus ||X_1(τ)||_R ≤ |τ| M and ΔX_1 = O(δt^{p+1}). Next we find an estimate for the difference between the time-one-flow map Φ_{1,X_1(τ)} and the map Ψ_τ. Using (4.1) and (4.2), we obtain a bound on this difference; since the mappings are real analytic and their difference is O(δt^{p+1}), we obtain the estimate (4.3) [5] for all α ∈ [0, 1). Using this in (2.4) with i = 1, we obtain a bound on ΔX_2.
Next we show that the corresponding bound (4.4) on ΔX_{j+1} holds for all j ≥ 1 and all α ∈ [0, 1). The estimate is true for j = 1 (compare (4.3)). We proceed by induction. First note that the vector fields X_1(τ), ..., X_j(τ) are bounded on B_{αR}(K). We replace the parameter α ∈ [0, 1) in this formula by α + (1 − α)/(j + 1), which yields the refined bound (4.5) for all α ∈ [0, 1) and all τ ∈ C with |τ| sufficiently small. Here we have used the elementary numerical inequality (4.6), valid for all j ≥ 2, whose constants 0.067 and 0.891 enter the estimates below. In particular, substitute the shifted quantities for |τ| and α in (4.5); then use the induction hypothesis and the inequality (4.6) to derive the bound for all α ∈ [0, 1). Next we introduce a vector-valued, real analytic function built from the flow of X_j(τ) and observe that it satisfies the estimate (4.7) for all α ∈ [0, 1); here we have used that x ∈ B_{αR}(K) implies Φ_{t,X_j(τ)}(x) ∈ B_{α′R}(K) for a slightly larger α′ < 1. Now we can find an estimate for the difference between the time-one-flow map Φ_{1,X_j(τ)} and the map Ψ_τ. Using (4.1) and (4.7), we obtain a bound on this difference. Since the mappings are real analytic and their difference is O(δt^{j+1}), we obtain the estimate (4.8) [5]. Using this in (2.4) with i = j, we finally obtain a bound on ΔX_{j+1}, which verifies (4.4) for j + 1.
Next we need an estimate for the difference between the time-one-flow map Φ_{1,X_i(δt)} and the map Ψ_δt on the compact set K. Using (4.8) with α = 0, we immediately have a bound that decreases geometrically with the iteration index i as long as i δt M/R stays below a fixed threshold. The family of vector fields X̃(δt) is now defined by taking an optimal number i*(δt) of iterations. We take i*(δt) as the integer part of a fixed constant multiple of 1/δt; thus the difference between Φ_{1,X̃(δt)} and Ψ_δt becomes exponentially small in 1/δt. This completes the first part of the proof.
According to (4.5), the difference between the modified vector fields X̃(δt) and Z is given by the sum of the increments ΔX_j, j = 2, ..., i*(δt). Next we use the bound (4.4) on these increments to obtain the estimate of the theorem. Here d_p ≥ 1 is chosen such that the resulting geometric sum is bounded for all j ≥ 2. □
Remark. The proof of Theorem 4.2 is similar in spirit to the one given by Benettin & Giorgilli [5] on the exponentially small difference between an optimal interpolating vector field and a near-to-the-identity map. However, there are a couple of important differences: (i) We explicitly take the order of a method into account. (ii) We directly derive estimates on the difference between the flow maps Φ_{1,X_i(δt)} and Ψ_δt instead of using Taylor series expansions of Φ_{1,X_i(δt)} and Ψ_δt and corresponding estimates for the elements in the series. We believe that this simplifies the proof of Theorem 4.2. (iii) By introducing the parameter α ∈ [0, 1), we do not have to shrink the domain of definition of the vector fields X_i(δt) as the iteration index i increases. Again we feel that this simplifies the proof. (iv) As in [18], we work directly with an estimate for the given vector field Z instead of making assumptions on the map Ψ_δt. The rather pessimistic constants entering the estimates seem to be the main disadvantage of our approach. □
A more elaborate version of the proof of Theorem 4.2 can be found in [30].
5. An Application: Ergodic Hamiltonian Systems. Let us consider a (real analytic) Hamiltonian system
    (d/dt) q = ∇_p H(q, p),   (5.1)
    (d/dt) p = −∇_q H(q, p),   (5.2)
together with a smooth function A : R^{2n} → R. We are interested in evaluating the time-average of A along a trajectory (q(t), p(t)) of the Hamiltonian system (5.1)-(5.2), i.e.
    ⟨A⟩_T := (1/T) ∫₀^T A(q(t), p(t)) dt.
We assume that the limit
    ⟨A⟩ := lim_{T→∞} ⟨A⟩_T
exists and is equal to the micro-canonical ensemble average corresponding to the Hamiltonian H, i.e., we assume that the system (5.1)-(5.2) is ergodic⁷ (or even mixing) [37]. Thus
    ⟨A⟩ = ∫ A(q, p) δ(E − H(q, p)) dq dp / ∫ δ(E − H(q, p)) dq dp,
where the numerator is the inner product of A and δ(E − H).
Let us write the equations (5.1)-(5.2) in more compact form as
    (d/dt) x = J ∇_x H(x),   x = (q, p),
with J the canonical structure matrix. The Hamiltonian H is preserved under the flow map Φ_{t,H}. Let us assume that the hypersurface M_0 of constant energy,
    M_0 := {x ∈ R^{2n} : H(x) = 0},
is a compact subset of R^{2n}. We also assume that there is a constant γ₁ > 0 such that ||∇_x H(x)|| > γ₁ for all x ∈ M_0. This implies that M_0 is a smooth (2n − 1)-dimensional compact submanifold. Furthermore, the family of hypersurfaces M_E := {x : H(x) = E}, |E| sufficiently small, are smooth and compact as well (in fact diffeomorphic to M_0). We define the open subset U of phase space by
    U := ∪_{E ∈ (−ΔE, +ΔE)} M_E.
⁷ To be more precise: ergodicity of a system implies that the time average is equivalent to the ensemble average except for, at most, a set of initial conditions of measure zero.
So far we have made fairly generic assumptions. In the sequel, we become more specific to ensure that the Hamiltonian system (5.1)-(5.2) is ergodic/mixing.
In a first step we construct a Poincaré return map [14]. Let ψ : R^{2n} → R be a smooth function and γ₂ a positive constant such that |{ψ, H}(x)| > γ₂ on the level sets
    S_s := {x ∈ U : ψ(x) = s},   |s| ≤ Δs,
Δs > 0 sufficiently small. Let us assume that S_s defines a Poincaré section for each |s| ≤ Δs in the following way: For all x ∈ S_s, there is a positive number t_p(x) such that the solution x(t), t ≥ 0, with initial condition x(0) = x satisfies x(t_p(x)) ∈ S_s and there is no 0 < t′ < t_p(x) such that x(t′) ∈ S_s. The positive number t_p(x) is called the Poincaré return time of the point x ∈ S_s. Knowing the Poincaré return time for each x ∈ S_s, we define the "global" Poincaré map
    Π(x) := Φ_{t_p(x), H}(x),   x ∈ V,   where   V := ∪_{|s| ≤ Δs} S_s.
We assume that the Poincaré return times t_p(x), x ∈ V, are bounded by some constant K > 0.
We are interested in the solutions on a particular level set of constant energy. For simplicity, we take the level set M_0. Then it is sufficient to consider the "restricted" Poincaré map Π_0, which is defined as the restriction of Π to D := V ∩ M_0. Thus we have reduced the study of the dynamical properties of the Hamiltonian system (5.1)-(5.2) on the energy shell M_0 to the study of the properties of the Poincaré map Π_0: if Π_0 is an ergodic (mixing) map, then the Hamiltonian system is ergodic (mixing) on M_0. Note that Π_0 is volume preserving, i.e. |det ∂_x Π_0(x)| = 1.
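Numerically, a global Poincaré map of this kind can be realized with event detection during time integration. The following sketch (Python with SciPy; the Hénon-Heiles system and the section ψ(x) = q₁ are assumed stand-ins, not the example treated later in this section) computes successive section crossings and return times:

```python
import numpy as np
from scipy.integrate import solve_ivp

def henon_heiles(t, x):
    q1, q2, p1, p2 = x
    return [p1, p2, -q1 - 2.0 * q1 * q2, -q2 - q1**2 + q2**2]

def section(t, x):            # psi(x) = q1; count crossings with dq1/dt > 0
    return x[0]
section.terminal = True
section.direction = 1.0

def poincare_map(x, t_max=200.0, t_skip=0.1):
    # step off the section first so the initial point does not retrigger
    pre = solve_ivp(henon_heiles, (0.0, t_skip), x, rtol=1e-10, atol=1e-12)
    sol = solve_ivp(henon_heiles, (t_skip, t_max), pre.y[:, -1],
                    events=section, rtol=1e-10, atol=1e-12)
    t_p = sol.t_events[0][0]              # Poincare return time t_p(x)
    return sol.y_events[0][0], t_p

x = np.array([0.0, 0.1, 0.35, 0.1])       # a point on the section q1 = 0
for _ in range(5):
    x, t_p = poincare_map(x)
    print(f"return time {t_p:.4f}, crossing point {x}")
```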
From now on we assume that Π_0 is a uniformly hyperbolic map, i.e., for each x ∈ D, the linearization ∂_x Π_0(x) at x possesses strictly expanding and contracting directions only [14], [36]. The "stochastic" behavior of such a (deterministic) map has been investigated, for example, in [36]. Here we only point out the four main results:
• There is a unique density ρ₀ on D that is invariant under Π_0. Furthermore, ρ₀ is given by the Lebesgue measure on D.
• The autocorrelation function h_A of a Hölder continuous function A decays exponentially fast, i.e. |h_A(i)| ≤ c e^{−κ i} with κ > 0 an appropriate constant.
• The time-averages
    ⟨A⟩_N := (1/N) Σ_{i=1}^N A(x_i)
of A along trajectories {x_i}_{i=1,...,N} of Π_0 satisfy a central limit theorem.
• The time-averages ⟨A⟩_N of A along trajectories of Π_0 with initial value x₁ = x satisfy a large deviation theorem. To be more specific [39]: Given any c > 0 there is an h(c) > 0 such that
    Prob( |⟨A⟩_N − ⟨A⟩| ≥ c ) ≤ e^{−h(c) N}
for all large N ≥ 1.
These results can be proven (see, for example, [36]) by carefully studying the properties of the corresponding Frobenius-Perron operator associated with Π_0.
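These properties are easy to observe numerically for a standard uniformly hyperbolic example. The following sketch (Python; Arnold's cat map and the observable are our choices, and floating-point roundoff acts as a small perturbation of the exact dynamics) computes time averages along several trajectories:

```python
import numpy as np

def cat_map(x):
    # Arnold's cat map on the 2-torus: uniformly hyperbolic and mixing
    return np.array([(2.0 * x[0] + x[1]) % 1.0, (x[0] + x[1]) % 1.0])

def A(x):
    return np.cos(2.0 * np.pi * x[0])   # observable with space average 0

rng = np.random.default_rng(1)
N = 100_000
for trial in range(4):
    x, avg = rng.random(2), 0.0
    for i in range(1, N + 1):
        x = cat_map(x)
        avg += (A(x) - avg) / i          # running time average <A>_N
    print(f"trajectory {trial}: <A>_N = {avg:+.5f}")
```

All trajectories give time averages close to the space average 0, with fluctuations of the size predicted by the central limit theorem.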
Definition 5.1. We call a Hamiltonian system (5.1)-(5.2) with the above introduced properties Poincaré hyperbolic. In particular, we assume (i) that the level sets M_E, |E| ≤ ΔE, of constant energy are compact submanifolds, (ii) that there is a constant γ₁ > 0 such that ||∇_x H(x)|| > γ₁ on U, (iii) that a global Poincaré map Π can be defined on V which is uniformly hyperbolic as a map Π_0 restricted to D = V ∩ M_0, (iv) that the Poincaré return times t_p(x), x ∈ V, are bounded by some constant K > 0, and (v) that there is a constant γ₂ > 0 such that |{ψ, H}(x)| > γ₂ on V.
Let H̃ be a perturbation of H such that
    |H̃(x) − H(x)| ≤ ε
for all x ∈ U and some ε > 0. Then we call H̃ an ε-perturbation of H.
Lemma 5.2. The property of being Poincaré hyperbolic is stable under ε-perturbations of the Hamiltonian H provided ε is sufficiently small.
Proof. The assumption ||∇_x H(x)|| > γ₁ on the level sets M_E implies that these sets are persistent under small perturbations. Furthermore, there exists a constant γ̃₂ > 0 such that |{ψ, H̃}(x)| > γ̃₂ for the perturbed Hamiltonian H̃ and x ∈ V. Thus a Poincaré map is also defined for the perturbed Hamiltonian H̃. Uniform hyperbolicity is also stable under small perturbations of the Poincaré map [2]. □
Let us discretize (5.1)-(5.2) by a symplectic (real analytic) integrator Ψ_δt of order p ≥ 1.
Assumption. We assume that backward error analysis can be applied on a compact subset K with U ⊂ K. The corresponding perturbed Hamiltonian is denoted by H̃(δt), i.e., for all x ∈ K,
    |H̃(x; δt) − H(x)| ≤ c δt^p   and   ||Ψ_δt(x) − Φ_{δt,H̃(δt)}(x)|| ≤ c δt e^{−γ/δt},
with c, γ > 0 constants. Let the step-size δt be sufficiently small such that the perturbed Hamiltonian system is also Poincaré hyperbolic. For simplicity, we shift the modified Hamiltonian H̃(δt) such that H̃(x₀; δt) = H(x₀) for a reference point x₀ ∈ M_0.
Let us introduce a couple of notations for the perturbed system. As for the unperturbed system, we define the compact level sets M̃_E and the open set Ũ (replacing H by H̃ in the definition). Without loss of generality, we can assume that Ũ ⊂ K. Furthermore,
    S̃_s := {x ∈ Ũ : ψ(x) = s},   s ∈ (−Δs, +Δs).
The corresponding sets Ṽ and D̃ are now defined in the obvious way. Finally, the global Poincaré map Π̃ and the reduced Poincaré map Π̃_0 are introduced as for the unperturbed system. Again, without loss of generality, we can assume that D ⊂ Ṽ.
We extend the discrete time map Ψ_δt to a map Ψ_t, t ≥ 0, by using the exact flow map Φ_{t,H̃} of the modified problem as an interpolation for t ∈ [0, δt). The map is then extended to t ≥ δt in the obvious way⁸ as the composition of k steps with Ψ_δt and one step with Φ_{dt,H̃}, where t = k δt + dt, dt ∈ [0, δt). Thus, in correspondence with the definition of the global Poincaré map
    Π̃(x) = Φ_{t̃_p(x), H̃}(x),
we define
    Π̂(x) := Ψ_{t̃_p(x)}(x)
for all x ∈ Ṽ. Here the Poincaré return times t̃_p(x), x ∈ Ṽ, are the same as in the definition of Π̃. Lemma 5.2 implies that there is a constant K̃ > 0 with
    sup_{x ∈ Ṽ} t̃_p(x) ≤ K̃.
It follows from backward and forward error analysis [18] that there is a constant c > 0 such that
    ||Π̂(x) − Π̃(x)|| ≤ c e^{−γ/δt}
for all x ∈ Ṽ
and for all δt sufficiently small. More importantly, let {x_i}_{i=1,...,N} be a "numerically" computed sequence of points with x₁ ∈ D̃ and x_{i+1} = Π̂(x_i), and let {x̃_i} be the corresponding sequence under the map Π̃ with x̃₁ = x₁ ∈ D̃. Each sequence {x_i}, {x̃_i} respectively, generates two sequences of real numbers {E_i} and {s_i}, {Ẽ_i} and {s̃_i} respectively, which are defined by E_i := H̃(x_i; δt) and s_i := ψ(x_i), and analogously for Ẽ_i and s̃_i. For {Ẽ_i} and {s̃_i} we obviously have Ẽ_i = 0 and |s̃_i| ≤ Δs, while this need not hold exactly for {E_i} and {s_i}. The "drift" in the values of E_i and s_i per step away from zero is exponentially small and sums up linearly with the number of steps. This energy conserving property of a symplectic method has been discussed by Benettin & Giorgilli [5] and Hairer & Lubich [18]. The same exponentially slow drift follows for the sequence {s_i} from the Lipschitz constant of ψ on Ũ.
⁸ The resulting map is discontinuous at multiples of the step-size δt. A smooth interpolation could be defined. But this is not needed in the context of this paper.
In other words, if we start initially on D̃, then the points computed "numerically" with the Poincaré map Π̂ will stay in an exponentially small neighborhood of D̃ over exponentially many iterates of Π̂. Now, since our numerical method is of order p ≥ 1, the compact manifolds M̃_E and M_E are O(δt^p) away from each other, i.e., the modified Hamiltonian H̃ is an ε-perturbation of the Hamiltonian H with ε = O(δt^p). Thus the sequence {x_i} will also stay in a O(δt^p) neighborhood of D as long as the number of iterates N satisfies
    N ≤ e^{γ̃/δt}   (5.4)
with γ̃ > 0 an appropriate constant.
Now the Shadowing Lemma [33] is applied to the sequence {x_i}_{i=1,...,N}.
Proposition 5.3. There exists an exact trajectory {x̂_i}_{i=1,...,N} of the Poincaré map Π_0 on D such that the "numerically" computed sequence {x_i}_{i=1,...,N} stays in a O(δt^p) neighborhood of the (shadowing) exact trajectory provided the number of iterates N satisfies (5.4).
Proof. We first project the sequence {x_i}_{i=1,...,N} down onto D using a "search direction" orthogonal to the manifold D. Denote the result by {x̄_i}. The projected sequence {x̄_i} and the sequence {x_i} are O(δt^p) close to each other provided N satisfies (5.4). The "local" error per step between the "exact" Poincaré map Π and the "numerical" Poincaré map Π̂ is also of order p in the step-size δt. This follows from standard forward error analysis. Thus the Shadowing Lemma [33] for uniformly hyperbolic maps can be applied to the Poincaré map Π_0 and the projected sequence {x̄_i} on D. The shadowing distance is O(δt^p). This shadowing result also applies to the sequence {x_i}. □
Let us now assume that we want to compute the ensemble average of a smooth function A : R^{2n} → R up to a certain accuracy c > 0. The large deviation theorem for hyperbolic maps tells us that the probability to obtain the ensemble average in the desired accuracy as the time average along a single trajectory goes to one exponentially fast as the length N of the trajectory is increased. If we numerically compute an approximate trajectory for the system (5.1)-(5.2), then we know from Proposition 5.3 that this trajectory is O(δt^p) close to some exact trajectory over exponentially many integration steps N. Let us denote the time average of A along this exact trajectory by ⟨A⟩ᵉ_N and the numerically computed time average by ⟨A⟩_N; then
    |⟨A⟩_N − ⟨A⟩ᵉ_N| ≤ c̄ δt^p   (5.5)
for all N satisfying a bound of type (5.4). Thus we obtain the following:
Proposition 5.4. Let (5.1)-(5.2) be a Poincaré hyperbolic (real-analytic) system which we discretize by a symplectic method of order p ≥ 1 in the step-size δt. Then the time-average ⟨A⟩_N of an observable A along a "numerically" computed trajectory {x_n}_{n=1,...,N} satisfies (5.5), where ⟨A⟩ᵉ_N is the time-average along some exact trajectory and the number of steps N satisfies a bound of type (5.4). Furthermore, assume we want to compute the ensemble average of A within a given accuracy c > 0. We assume, for simplicity, that the constant c is larger than the difference between the time averages in (5.5), which is always true for sufficiently small step-sizes δt. Then the probability to obtain the average in the desired accuracy as the time average along a numerically computed trajectory goes to one exponentially fast as the number of integration steps N is increased. Taking the maximum number (5.4) of steps, the probability can be made double exponentially close to one, in the sense of (5.3), as δt → 0.
Example. As a numerical example, we look at the following planar anisotropic Kepler problem [15]:
    (d/dt) q = ∇_p H(q, p),   (d/dt) p = −∇_q H(q, p),
with Hamiltonian
    H(q, p) = p_x²/(2μ) + p_y²/(2ν) − (q_x² + q_y²)^{−1/2}
and anisotropy parameters μ ≠ ν. The initial conditions are chosen such that the energy takes a fixed negative value. Note that the angular momentum L is not conserved. We define the Poincaré section S₀ and record the sequence of section points. Conservation of energy implies that the thus defined sequence is restricted to a bounded subset of the (q_x, p_x) plane. This subset has an awkward shape, but it can be transformed into a rectangle by means of an area preserving transformation to coordinates (X₁, X₂). The corresponding Poincaré map is hyperbolic, i.e., stable and unstable manifolds intersect transversally, and the dynamics can be encoded in a binary Bernoulli shift [15].
The main computational difficulty consists in the existence of a weak singularity at q = 0. To remove this singularity, we have to scale the equations of motion by introducing the time transformation
    dt/dτ = ρ(q),
which implies that, in the new time τ, the norm of the vector field remains bounded near the singularity. The time transformation has to be introduced such that the transformed equations of motion are still Hamiltonian. A constant step-size symplectic method can then be used to integrate the transformed system. Let us describe the general approach. Assume that a Hamiltonian function H(q, p) and a scaling function ρ(q) are given. Following Zare & Szebehely [41], we introduce the modified Hamiltonian function
    K(q, p, t, e) := ρ(q) ( H(q, p) + e ),
with corresponding Hamiltonian equations of motion
    dq/dτ = ρ(q) ∇_p H,
    dp/dτ = −ρ(q) ∇_q H − ( H(q, p) + e ) ∇_q ρ(q),   (5.9)-(5.11)
    dt/dτ = ρ(q),   de/dτ = 0,
in extended phase space R^{2n} × R². In particular, let us consider the case e = −H(q(0), p(0)). Then (5.9)-(5.11) can be simplified to
    dq/dτ = ρ(q) ∇_p H,   dp/dτ = −ρ(q) ∇_q H,   dt/dτ = ρ(q)
on the hypersurface of constant energy K = 0. This is a scaled vector field as desired, but it is not Hamiltonian anymore. Therefore, as suggested by the author in [27] and independently by Hairer [17], the Hamiltonian equations (5.9)-(5.11) are discretized by a symplectic method with e = −H(q(0), p(0)). For example, the equations can be discretized by the symplectic Euler method.
The method is explicit in the variable q. Unfortunately, this implies that the method is only first order in δτ. However, the method is symplectic and, therefore, the Hamiltonian is conserved to O(δτ) over exponentially long periods of time. A second-order symplectic discretization can be obtained by using the second-order Lobatto IIIa-b partitioned Runge-Kutta formula [34]; the resulting scheme is implicit in ρ(q).

[Fig. 5.1. (a) The time evolution of the average ⟨r⟩_n (mean distance) is shown for four different initial conditions with equal initial energy. (b) The evolution of the error in energy, angular momentum, and the actual step-size; the bottom-right panel shows the intersections of the trajectory with the Poincaré section in the (X₁, X₂) coordinates. One thousand intersections are plotted.]
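A minimal sketch of this construction (Python; we use the isotropic Kepler problem with ρ(q) = ||q|| and solve the implicit stage by fixed-point iteration; the paper's actual example is the anisotropic problem, and the concrete update formulas (5.12)-(5.13) are not reproduced here):

```python
import numpy as np

E = -0.5                                   # fixed energy; H(q0, p0) = E below

def H(q, p):
    return 0.5 * np.dot(p, p) - 1.0 / np.linalg.norm(q)

def grad_q_K(q, p):
    # K(q, p) = r * (H(q, p) - E), with r = |q| and rho(q) = r
    r = np.linalg.norm(q)
    return (q / r) * (H(q, p) - E) + q / r**2

def grad_p_K(q, p):
    return np.linalg.norm(q) * p

def step(q, p, dtau, iters=20):
    p_new = p.copy()
    for _ in range(iters):                 # fixed-point solve, implicit in p
        p_new = p - dtau * grad_q_K(q, p_new)
    return q + dtau * grad_p_K(q, p_new), p_new

q, p = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # circular orbit, H = E
t, dtau = 0.0, 0.05
for _ in range(2000):
    t += dtau * np.linalg.norm(q)          # dt/dtau = rho(q)
    q, p = step(q, p, dtau)
print("energy error |H - E| =", abs(H(q, p) - E), ", physical time t =", t)
```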
This approach is applied to the anisotropic Kepler problem with a scaling function ρ(q) that vanishes at the collision singularity. We chose initial values on a fixed energy level. The equations of motion are integrated using the second-order symplectic method (5.12)-(5.13) with step-size δτ = 0.05. The time average of an observable A(q) along a trajectory is computed according to the recursive formula
    ⟨A⟩_n = ⟨A⟩_{n−1} + ((t_n − t_{n−1}) / t_n) ( A(q_n) − ⟨A⟩_{n−1} ).
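In code, this running average with variable actual step-sizes reads as follows (a sketch; the recursion above is our reconstruction of the formula, and the sample values are hypothetical):

```python
def update_average(avg, t_prev, t_new, a_new):
    """<A>_n = <A>_{n-1} + (t_n - t_{n-1}) / t_n * (A(q_n) - <A>_{n-1})."""
    return avg + (t_new - t_prev) / t_new * (a_new - avg)

# usage with hypothetical samples (t_n, r_n) of the mean-distance observable
samples = [(0.05, 1.02), (0.12, 0.97), (0.20, 1.05), (0.31, 1.01)]
avg, t = 0.0, 0.0
for t_new, r in samples:
    avg = update_average(avg, t, t_new, r)
    t = t_new
print("running average <r>_n =", avg)
```

This recursion reproduces the piecewise-constant time average (1/t_n) Σ (t_i − t_{i−1}) A(q_i) without storing the whole trajectory.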
The time average of the distance r = ||q|| was computed for four different initial conditions, and the evolution of the corresponding time averages ⟨r⟩_n can be found in Fig. 5.1(a). The different lengths of the time intervals are due to the fact that the same number of steps with step-size δτ were taken, which leads to different actual step-sizes δt_n. Within a tolerance of c = 0.04, these averages converge to the same value ⟨r⟩ ≈ 1.33. In Fig. 5.1(b), the total energy H, the angular momentum L, and the variation in the actual step-size can be found for a particular trajectory. We also plotted the intersections of the trajectory with the Poincaré section in the (X₁, X₂) coordinates. Theoretically, these points should fill the rectangle in a uniform way (the invariant measure is the Lebesgue measure). With one thousand points plotted, the uniform distribution is satisfied quite well. □
Acknowledgements. I would like to thank Ernst Hairer, Chus Sanz-Serna, Andrew Stuart, and Claudia Wulff for comments on an earlier version of this paper and the referees for making valuable suggestions.
--R
The Lie group structure of diffeomorphism groups and invertible Fourier integral operators with applications
Geometrical Methods in the Theory of Ordinary Differential Equations
Mathematical Methods of Classical Mechanics
On the Hamiltonian interpolation of near to the identity symplectic mappings with application to symplectic integration algorithms
Numerical methods for dynamical systems
Modified Equations for ODEs
Aspects of backward error analysis of numerical ODEs
Formal power series and numerical algorithms for dynamical systems
Discretization of homoclinic orbits and invisible chaos
On the scope of the modified equations
Qualitative properties of modified equations
Dynamical Systems and Bifurcations of Vector Fields
Chaos in Classical and Quantum Mechanics
Backward analysis of numerical integrators and symplectic methods
Variable time step integration with symplectic methods
The life-span of backward error analysis for numerical integrators
Solving Ordinary Differential Equations
Reversible long-term integration with variable step sizes
The inverse function theorem of Nash and Moser
Lectures on Hamiltonian systems
The separation of motions in systems with rapidly rotating phase
Numerical integration of generalized Euler equations
Backward error analysis for numerical integrators
On higher order semi-explicit symplectic partitioned Runge-Kutta methods for constrained Hamiltonian systems
Preservation of adiabatic invariants under symplectic discretization
Dynamical Systems
Symplectic integrators for Hamiltonian problems: an overview
Rigorous verification of trajectories for the computer simulation of dynamical systems
Symplectic partitioned Runge-Kutta methods
Stochastic Dynamics of Deterministic Systems
Introduction to Ergodicity Theory
The modified equation approach to the stability and accuracy of finite-difference methods
Large deviations in dynamical systems
Construction of higher order symplectic integrators
Time transformations for the extended phase space
--TR
--CTR
P. F. Tupper, Computing statistics for Hamiltonian systems: A case study, Journal of Computational and Applied Mathematics, v.205 n.2, p.826-834, August, 2007
Ernst Hairer, Important Aspects of Geometric Numerical Integration, Journal of Scientific Computing, v.25 n.1, p.67-81, October 2005
Y.-K. Zou , W.-J. Beyn, On manifolds of connecting orbits in discretizations of dynamical systems, Nonlinear Analysis: Theory, Methods & Applications, v.52 n.5, p.1499-1520, February
B. Cano , A. Durán, Analysis of variable-stepsize linear multistep methods with special emphasis on symmetric ones, Mathematics of Computation, v.72 n.244, p.1769-1801, October
M. P. Calvo , A. Portillo, Are high order variable step equistage initializers better than standard starting algorithms?, Journal of Computational and Applied Mathematics, v.169 n.2, p.333-344, 15 August 2004
Brian E. Moore , Sebastian Reich, Multi-symplectic integration methods for Hamiltonian PDEs, Future Generation Computer Systems, v.19 n.3, p.395-402, April
Fong , Eric Darve , Adrian Lew, Stability of Asynchronous Variational Integrators, Proceedings of the 21st International Workshop on Principles of Advanced and Distributed Simulation, p.38-44, June 12-15, 2007
Jesús A. Izaguirre , Scott S. Hampton, Shadow hybrid Monte Carlo: an efficient propagator in phase space of macromolecules, Journal of Computational Physics, v.200 n.2, p.581-604, November 2004
B. Cano , A. Durn, A technique to construct symmetric variable-stepsize linear multistep methods for second-order systems, Mathematics of Computation, v.72 n.244, p.1803-1816, October | hamiltonian systems;error analysis;long time dynamics;differential equations;numerical integrators |
333931 | Reduced Systems for Three-Dimensional Elliptic Equations with Variable Coefficients. | We consider large sparse nonsymmetric linear systems arising from finite difference discretization of three-dimensional (3D) convection-diffusion equations with variable coefficients. We show that performing one step of cyclic reduction yields a system of equations which is well conditioned and for which fast convergence can be obtained. A certain block ordering strategy is applied, and analytical results concerning symmetrizability conditions and bounds on convergence rates are given. The analysis is accompanied by numerical examples. | Introduction
. Consider the following three-dimensional (3D) convection-diffusion equation
    −(p u_x)_x − (q u_y)_y − (r u_z)_z + s u_x + t u_y + v u_z = w   (1.1)
on a domain Ω ⊂ R³, subject to Dirichlet, Neumann, or mixed boundary conditions, where all the functions in (1.1) are trivariate, and p, q, r > 0 on Ω̄. Several discretization schemes are possible. See Morton [11] for a comprehensive survey on numerical solution of the convection-diffusion problem. In this work we use a seven-point discretization technique as a starting point and extend the analysis of Elman and Golub [4], done for the two-dimensional (2D) variable coefficient case, and the analysis of Greif and Varah [8], [9] for the 3D problem with constant coefficients to the 3D problem with variable coefficients.
Let h denote the width of a uniform mesh. In the description that follows we use the notation G_{i,j,k} ≡ G(ih, jh, kh), where G is a trivariate function. The seven-point discretization is done as follows (see, e.g., [4] for the analogous 2D case). For the first term in (1.1) we have
    ((p u_x)_x)_{i,j,k} ≈ [ p_{i+1/2,j,k} (u_{i+1,j,k} − u_{i,j,k}) − p_{i−1/2,j,k} (u_{i,j,k} − u_{i−1,j,k}) ] / h²,
and an analogous discretization is performed for (q u_y)_y and (r u_z)_z. For the convective terms s u_x, t u_y, and v u_z we use either upwind or centered difference schemes.
Let F denote the corresponding difference operator, scaled by h², and denote the values of the associated computational molecule by a_{i,j,k}, b_{i,j,k}, c_{i,j,k}, d_{i,j,k}, e_{i,j,k}, f_{i,j,k}, and g_{i,j,k}, in the following manner: if (i, j, k) is a gridpoint not next to the boundary, then
    (F u)_{i,j,k} = a_{i,j,k} u_{i,j,k} − c_{i,j,k} u_{i−1,j,k} − d_{i,j,k} u_{i+1,j,k} − b_{i,j,k} u_{i,j−1,k} − e_{i,j,k} u_{i,j+1,k} − f_{i,j,k} u_{i,j,k−1} − g_{i,j,k} u_{i,j,k+1}.   (1.2)
* Received by the editors November 10, 1997; accepted for publication (in revised form) by Z. August 20, 1998; published electronically August 3, 1999.
† Department of Computer Science, The University of British Columbia, Vancouver, BC, V6T 1Z4, Canada (greif@cs.ubc.ca).

[Fig. 1.1. Computational molecules of the unreduced and the reduced operators: (a) seven-point operator; (b) reduced operator.]
The computational molecule is graphically illustrated in Figure 1.1(a) (in the figure the subscripts are dropped).
If centered differences are used to discretize the convective terms, the values of the computational molecule are given by (1.3). If one uses upwind schemes, then the type of scheme depends on the sign of the convective terms. Assuming that s, t, and v do not change sign in the domain, if they are positive one can use the backward scheme, and if they are negative one can use the forward scheme. Discretizing using backward differences yields (1.4), and for forward differences (1.4) needs to be modified in an obvious manner.
The sparsity structure of the matrix representing the system of equations depends on the ordering of the unknowns. A common strategy is the red/black ordering, which is depicted in Figure 1.2: the gridpoints are colored using two colors in a checkerboard fashion, and the points that correspond to one of the colors (say, red) are numbered first. In this case the corresponding matrix can be written as
    ( B  F ) ( u^{(r)} )   =   ( w^{(r)} )
    ( C  E ) ( u^{(b)} )       ( w^{(b)} ),   (1.5)
[Fig. 1.2. Red/black ordering of the 3D grid.]
where both B and E are diagonal. In (1.5) superscripts (r) and (b) are attached to denote the associated colors. A simple process of block Gaussian elimination leads to a smaller system, for the black points only, which is called a reduced system [2]:
    ( E − C B^{−1} F ) u^{(b)} = w^{(b)} − C B^{−1} w^{(r)}.   (1.6)
Since B is diagonal, the matrix of (1.6) is sparse. In the 3D case the corresponding
difference operator has a computational molecule which consists of 19 points, as illustrated
in Figure 1.1(b). Once the solution for the black points is computed, the
solution for the red points corresponds to solving a diagonal system and thus is readily
obtained. Moving from system (1.5) to system (1.6) amounts to performing one
step of cyclic reduction [2]. This procedure can be repeated until a small system of
equations is obtained, which can be solved directly. An overview of the idea of cyclic
reduction and several references are given in [5, pp. 177-180].
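In matrix terms, one step of cyclic reduction is exactly the Schur complement elimination in (1.5)-(1.6). The following sketch (Python with SciPy; the 1D Laplacian at the end is a toy stand-in for the 3D operator) forms the reduced matrix and right-hand side and recovers the red unknowns afterwards:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def cyclic_reduction_step(A, w, red, black):
    """One step of cyclic reduction for the block system (1.5), B diagonal."""
    A = sp.csr_matrix(A)
    B_diag = A[red][:, red].diagonal()      # B is diagonal for the 7-point stencil
    F = A[red][:, black]
    C = A[black][:, red]
    E = A[black][:, black]
    Binv = sp.diags(1.0 / B_diag)
    S = (E - C @ Binv @ F).tocsr()          # reduced matrix, see (1.6)
    g = w[black] - C @ (Binv @ w[red])      # reduced right-hand side
    recover_red = lambda u_black: (w[red] - F @ u_black) / B_diag
    return S, g, recover_red

# toy usage on a 1D Laplacian with red/black = even/odd points
n = 8
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
w = np.ones(n)
red, black = np.arange(0, n, 2), np.arange(1, n, 2)
S, g, recover_red = cyclic_reduction_step(A, w, red, black)
u = np.empty(n)
u[black] = spsolve(S.tocsc(), g)
u[red] = recover_red(u[black])
print("residual:", np.linalg.norm(A @ u - w))
```

For the seven-point operator on a red/black-ordered 3D grid the same function applies verbatim, since the red-red block is diagonal there as well.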
The elimination of half of the unknowns is accompanied by permutation of the matrix (equivalently, reordering of the unknowns). Once the permuted reduced system is formed, an iterative method can be used to find the solution. The procedure of performing one step of cyclic reduction for a non-self-adjoint problem and solving the resulting system using an iterative solver was extensively investigated by Elman and Golub for 2D problems [2], [3], [4]. They showed that one step of cyclic reduction leads to systems with valuable properties, such as symmetrizability by a real diagonal nonsingular matrix for a large set of the underlying PDE coefficients (which is effectively used to derive bounds on the convergence rates of iterative solvers), and fast convergence. In [2] the univariate case, which is naturally more transparent, is used to illustrate the advantages of this technique. Many of the highly effective techniques presented and used by Elman and Golub can be generalized to the 3D case and will
An outline of the rest of this paper follows. In section 2 we introduce the cyclically
reduced operator. In section 3 we discuss block orderings and present a family of
orderings for the reduced grid. In section 4 we present symmetrization results. In
section 5 we use the results of section 4 to derive bounds on convergence rates for
block stationary methods. In section 6 we present numerical examples, which include
solving the systems using Krylov subspace solvers. Finally, in section 7 we draw some
conclusions.
2. The reduced operator. One step of cyclic reduction for a 3D model problem with constant coefficients has been described in [9], where full details on the construction of the matrix are given. Convergence analysis and techniques for solving the resulting system of equations using block stationary methods are described in [8], [9]. In the case of constant coefficients, the values of the computational molecule associated with a given gridpoint do not depend on the point's coordinates, as opposed to the variable coefficient case. Nevertheless, the sparsity structure of the reduced matrix is identical in both cases.
The explicit difference equation associated with the reduced operator for the 3D variable coefficient case can be found in [7] and should be used for constructing the reduced matrix. The alternative of performing the matrix products in (1.6) might be significantly more costly, especially in the 3D case, and in particular, in programming environments where vectorization is crucial (e.g., Matlab).
Consider the following constant coefficient model problem: p(x, y, z) ≡ q(x, y, z) ≡ r(x, y, z) ≡ 1, with constant convection coefficients s, t, and v. After scaling by a h², the difference equation associated with the reduced operator has the form (2.2); its molecule values are products of the constant values a, b, c, d, e, f, g (terms such as −2cf appear). Denote by L the continuous operator corresponding to this model problem.
The reduced operator can be derived directly as a discretization scheme of the original PDE, with O(h²) correction terms in the case of centered difference discretization and O(h) correction terms if upwind discretization is used. This can be done by means analogous to the techniques used for the 2D case (see Elman and Golub [3] and Golub and Tuminaro [6]). Consider the centered difference discretization. Expanding (2.2) in a multivariate Taylor expansion about the gridpoint (ih, jh, kh) yields, after scaling by 2ah²,
    (F_R u)_{i,j,k} = (L u)(ih, jh, kh) + O(h²).
The above computation was carried out using Maple V. The O(h²) terms contain, among other terms, an expression of the form −h² times second-order derivative terms, which can be thought of as addition of artificial viscosity to the original equation. The reduced right-hand side is equal to w_{i,j,k} with an O(h²) error. Gaussian elimination yields a right-hand side whose entries are combinations of the values of w at the gridpoint and its neighbors, with weights given by molecule values divided by the corresponding center values a, and whose Taylor expansion about the gridpoint (ih, jh, kh), after scaling by 2ah², is w evaluated at the point (ih, jh, kh), up to O(h²). This is another similarity with the 2D case [3].

[Fig. 3.1. Two possible orderings of the block grid: (a) natural; (b) red/black. Each point in these grids corresponds to a one-dimensional (1D) block of gridpoints in the underlying 3D grid.]
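The same kind of consistency check can be reproduced symbolically (a small sketch with SymPy rather than Maple V, for the 1D analogue of the centered scheme; the stencil below is the standard one and only illustrates the Taylor-expansion argument):

```python
import sympy as sp

x, h, s = sp.symbols("x h s")
u = sp.Function("u")

# centered discretization of -u'' + s u' (1D analogue of (1.1))
stencil = (-(u(x + h) - 2 * u(x) + u(x - h)) / h**2
           + s * (u(x + h) - u(x - h)) / (2 * h))

expansion = sp.series(stencil, h, 0, 3).removeO()
continuous = -sp.diff(u(x), x, 2) + s * sp.diff(u(x), x)
print(sp.simplify(expansion - continuous))   # only O(h^2) terms remain
```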
3. Block ordering strategies for the reduced grid. The question of ordering
is of major importance, as a good ordering strategy can lead to fast convergence. An
excellent overview of the literature that deals with ordering strategies is found in a
recent report by Benzi, Szyld, and van Duin [1]. For 3D problems it seems useful
to consider the ordering of blocks of unknowns, rather than "pointwise" ordering.
Such a strategy could be particularly useful for the cyclically reduced problem, as
the reduced grid is somewhat irregular. Instead of ordering the unknowns directly in
the 3D reduced grid, the problem of ordering is divided into two parts. First, define
blocks of gridpoints and order them in a tensor-product 2D "block grid." Once the
ordering of the blocks is determined, the task of the "inner" ordering in each of the
blocks is relatively simple.
We can define an x-oriented "1D block of gridpoints" by referring to a set of gridpoints whose collection of x-coordinate values includes all the possible values {ih} on the grid. A simple example is a single horizontal line of gridpoints in a tensor-product
3D grid. Similarly, y-oriented and z-oriented 1D blocks can be defined. Once
the 1D blocks of gridpoints are defined, a block computational molecule can be defined
as follows.
Definition 3.1. For a certain given 3D grid and a 1D block of gridpoints in it,
the associated block computational molecule is defined as the computational molecule
in the corresponding block grid. That is, its components are the 1D blocks in the
block grid, each of which contains at least one gridpoint which belongs to the (point)
computational molecule associated with the 3D problem.
Using the above, we can now easily define different families and types of orderings.
For example, a certain ordering strategy is a natural block ordering strategy relative
to the 1D blocks of gridpoints if these blocks are ordered in the block grid using
natural lexicographic ordering. Similarly, one can define a red/black block ordering
strategy, and so on (see Figure 3.1).
Below we focus on a particular family of orderings for the reduced grid, which
we call the two-plane ordering. This ordering corresponds to defining each of the 1D blocks of gridpoints as a collection of 2n gridpoints from two horizontal lines in two adjacent planes (here n is the number of gridpoints in a single line in the original unreduced grid). A single member of this family was introduced in [9]. In Figure 3.2 two members of the family are depicted: natural two-plane ordering and red/black two-plane ordering. For notational convenience, we label them "2PN" and "2PRB," respectively. Two additional letters are added in order to distinguish between different orientations of the 2n-item 1D blocks of gridpoints. Let us illustrate this for the specific case depicted in Figure 3.2(a). Here indices 1-8, 9-16, 17-24, and 25-32 are each an x-oriented 1D block. The block grid is of size 2 × 2 and its components are ordered in natural lexicographic fashion. Each of the sets of indices 1-16 and 17-32 forms an x-y-oriented pair of planes. Hence the name 2PNxy. In Figure 3.3 other orientations of the blocks in the natural two-plane ordering are depicted.

[Fig. 3.2. Two types of two-plane ordering: (a) 2PNxy; (b) 2PRBxy.]

[Fig. 3.3. Possible orientations of the 1D blocks of gridpoints in the set of natural two-plane orderings.]
Figures 3.4(a) and (b) illustrate which blocks are associated with a single gridpoint. In these figures, each "X" corresponds to a 1D block which contains at least one gridpoint in the "point" computational molecule associated with the selected gridpoint (the one that is at the center of the computational molecule).

[Fig. 3.4. Block computational molecule associated with the two-plane ordering: (a) even index; (b) odd index; (c) the resulting block computational molecule.]

[Fig. 3.5. Sparsity structures of two matrices which belong to the family of two-plane orderings: (a) natural; (b) red/black based on 2D blocks.]

As is evident, the
structure depends on the parity of this gridpoint's index. The block computational
molecule (Figure 3.4(c)) is obtained by taking the union of all the 1D blocks associated
with each of the gridpoints in the block, and thus it is identical to the computational
molecule of the classical nine-point operator. This allows one to conclude, e.g., that
the reduced matrix does not have block property A [14] relative to partitioning into
blocks. On the other hand, applying a 4-color scheme to the blocks of gridpoints can be effective for parallelization.
The same ideas as above can be applied to 2D blocks of gridpoints. The block
grid in this case is univariate. The reduced matrix is consistently ordered relative
to partitioning associated with 2D blocks. In Figure 3.5 the sparsity structures of
matrices corresponding to natural two-plane ordering and red/black block
ordering relative to 2D blocks are depicted.
In order to illustrate the effectiveness of the two-plane ordering, we present in Figure 3.6 a single 2D block of a natural two-plane matrix vs. one that corresponds to ("point") natural lexicographic ordering. The matrices in the figure are associated with a 12 × 12 × 12 grid. As is evident, the main diagonal block of the two-plane matrix is more dense. Compared to the natural lexicographic ordering, there are significantly more nonzero entries in the block diagonal submatrix, whose bandwidth does not depend on n. As a result, direct preconditioner solves will be more efficient due to less fill-in. If a stationary scheme such as block Jacobi is used, then by [13, Thm. 3.15] it is guaranteed that if the reduced matrix is an M-matrix, the convergence of a scheme associated with the two-plane matrix is faster than that of a scheme with the lexicographic ordering. (The circumstances in which the reduced matrix is an M-matrix in the constant coefficient case are discussed in [9].)

[Fig. 3.6. A zoom on 2D blocks of the matrices corresponding to two ordering strategies of the reduced grid: (a) lexicographic; (b) natural two-plane.]
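A sketch of how a permutation vector realizing a natural two-plane ordering can be generated (our indexing convention; shown for the full n × n × n grid rather than the reduced grid, and assuming n even):

```python
import numpy as np

def two_plane_natural(n):
    """2PN-type ordering of an n x n x n grid (n even): each 1D block collects
    the 2n points of one x-line in two adjacent planes."""
    idx = np.arange(n**3).reshape(n, n, n)   # idx[k, j, i]; i runs along x
    perm = []
    for k in range(0, n, 2):                 # pair of adjacent planes
        for j in range(n):                   # lines inside the pair
            perm.extend(idx[k, j, :])
            perm.extend(idx[k + 1, j, :])
    return np.array(perm)

P = two_plane_natural(4)
print(P[:8], P[8:16])   # the first two 2n-point blocks of the ordering
```

Applying the permutation symmetrically to the matrix (rows and columns indexed by P) produces the block structure shown in Figure 3.6(b).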
4. Symmetrization of the reduced matrix. In order to obtain bounds on convergence rates for block stationary methods, we consider the following technique, suggested and used in [2], [3], [4] and also effectively applied in [8], [9]: if there exists a real diagonal nonsingular matrix Q such that S̄ := Q^{−1} S Q is symmetric, then for a splitting S = D − C and an analogous splitting of S̄, namely S̄ = D̄ − C̄, we have
    ρ(D^{−1} C) = ρ(D̄^{−1} C̄).   (4.1)
A bound for the spectral radius of the iteration matrix can thus be obtained by evaluating the expression on the right-hand side of (4.1). Denote the entries of S by {s_{i,j}}_{i,j=1}^{n³/2}. Since S is sparse and has a block structure, a small amount of work is needed in order to find Q, by requiring for the entries of S̄, which we denote by {s̄_{i,j}}, that s̄_{i,j} = s̄_{j,i}. Since matrices that correspond to different orderings are merely symmetric permutations of one another, we can pick a matrix that corresponds to a specific ordering strategy and do all the work for it. This will result in obtaining general symmetrization conditions for the reduced matrices (regardless of the ordering used). Thus we pick the specific ordering strategy 2PNxz.
Let q_ℓ denote the ℓth diagonal entry in Q. Then
    s̄_{i,j} = q_i^{−1} s_{i,j} q_j,
and the symmetry conditions can be expressed as
    q_i^{−1} s_{i,j} q_j = q_j^{−1} s_{j,i} q_i.
It is sufficient to look at 2n × 2n blocks. We start with the main diagonal blocks. We have to examine all the entries of the main block that appear in the ℓth row of the matrix, namely s_{ℓ,ℓ−4}, s_{ℓ,ℓ−3}, ..., s_{ℓ,ℓ}, ..., s_{ℓ,ℓ+4}. For s_{ℓ,ℓ−4}, if ℓ mod 2n ≥ 5 or is equal to 0, then ℓ − 4 corresponds to the (i − 2, j, k) point. Thus
the symmetry condition relates q_ℓ and q_{ℓ−4} through the entries s_{ℓ,ℓ−4} and s_{ℓ−4,ℓ}, and from this it follows that
    ( q_ℓ / q_{ℓ−4} )² = ( c_{i−1,j,k} c_{i,j,k} ) / ( d_{i−2,j,k} d_{i−1,j,k} ).
In this case the values associated with the center of the computational molecule (namely, a_{i,j,k}) are canceled, but this happens only for rows that involve the gridpoints at distance two in the x-direction. Applying the same procedure to the rest of the entries of the main diagonal block, we obtain the equations (4.5)-(4.11), which relate ratios of the q_ℓ to products of molecule entries such as c_{i,j+1,k} e_{i,j,k}, b_{i,j+1,k} d_{i−1,j+1,k}, and d_{i−1,j,k} e_{i−1,j−1,k}, scaled by the corresponding center values a_{i−1,j,k}, a_{i,j±1,k}, and a_{i,j,k±1}.
As is evident, (4.5)-(4.11) overdetermine the nonzero values of the matrix Q. Indeed, (4.9)-(4.11) are sufficient to determine all the diagonal entries, except the first entry in each 2n × 2n block, which at this stage can be arbitrarily chosen. We have to make sure, therefore, that (4.5)-(4.8) are consistent with these three equations, and this requirement imposes some additional conditions. In the constant coefficient case there is unconditional consistency. The problematic nature of the variable coefficient case can be demonstrated simply by looking at one of the consistency conditions. Consider a gridpoint (i, j, k) whose associated index ℓ satisfies ℓ mod 2n = 1. Applying (4.9) to ℓ − 1 means looking at the row corresponding to the neighboring gridpoint, and multiplying (4.9), applied to ℓ − 1, by (4.11) results in an equation for the ratio of the corresponding entries of Q, which should be consistent with (4.8). There are three additional consistency conditions for the main block and then eight additional conditions for the rest of the blocks of the reduced matrix. In the consistency conditions, if we equate variables that belong to the same location in the computational molecule, we find that sufficient conditions for the above-mentioned consistency conditions to hold are that b and e depend only on j, that c and d depend only on i, and that f and g depend only on k; with b_{i,j,k} = b_j, c_{i,j,k} = c_i, and similarly for the remaining entries, each consistency condition reduces to an identity involving products such as d_{i−1} e_{j−1} on both sides, which is obviously satisfied. The actual meaning of these conditions is that the continuous problem is separable.
The analysis for off-diagonal blocks is identical, and two additional conditions of the same form are obtained. The two equations determine the rest of the entries of the matrix, and only the first entry in the symmetrizer can be determined arbitrarily.
Last, in order for the symmetrizer to be real, we must require that the products c_i d_{i−1}, b_j e_{j−1}, and f_k g_{k−1} have the same sign.
All that has been said can be summarized in the following theorem, which demonstrates
another point of similarity between the 2D [4] and the 3D problems with
variable coefficients.
Theorem 4.1. Suppose the operator of (1.1) is separable. If c_i d_{i−1}, b_j e_{j−1}, and f_k g_{k−1} are all nonzero and have the same sign for all i, j, and k, then there exists a real nonsingular diagonal matrix Q such that Q^{−1} S Q is symmetric.
The symmetrized computational molecule can be derived without actually performing the similarity transformation. For example, the symmetrized value corresponding to −c_{i,j,k} c_{i−1,j,k} / a_{i−1,j,k} is the geometric mean of this entry and its transposed counterpart, namely −( c_{i,j,k} c_{i−1,j,k} d_{i−2,j,k} d_{i−1,j,k} )^{1/2} / a_{i−1,j,k}, and so on. The symmetrization operation should not actually be performed in order to solve the linear system, as the symmetrizing matrix has entries that are unbounded as h goes to zero. The symmetrization is done for the mere purpose of convergence analysis.
5. Bounds on convergence rates for block stationary methods. The reduced matrix which corresponds to the family of natural two-plane orderings is of size (n³/2) × (n³/2) and can be thought of as a block tridiagonal matrix whose blocks S_{i,j} are themselves block tridiagonal matrices with respect to 2n × 2n blocks.
In [8] two partitionings are considered: a partitioning into 1D blocks (2n × 2n blocks) and a partitioning into 2D blocks (of size n² × n²). In order to find a bound, if symmetrization is possible, then the strategy in [4] can be applied. In Theorem 5.1, we refer to the ordering strategy 2PNxz. Since other orderings are symmetric permutations of 2PNxz, finding the bounds for other ordering strategies discussed in this paper is straightforward.
Theorem 5.1. Suppose the continuous problem is separable and c_{i+1} d_i, b_{j+1} e_j, and f_{k+1} g_k are all positive and bounded by δ_x, δ_y, and δ_z, respectively. Suppose also that a_{i,j,k} ≥ α > 0 for all i, j, and k. Then the spectral radii of the iteration matrices associated with the block Jacobi scheme which correspond to the 1D splitting and the 2D splitting are bounded by two explicit quantities, defined in (5.2a)-(5.2c), which depend only on δ_x, δ_y, δ_z, and α.
Proof. The proof follows by using the technique of [4, pp. 346-347]. The conditions stated in the theorem guarantee that the matrix is symmetrizable. Denote the reduced matrix by S and the symmetrized matrix by S̄. Suppose S′ is obtained by modifying S̄ in the following manner: replace each occurrence of c_i and d_i by δ_x, replace each occurrence of b_j and e_j by δ_y, replace each occurrence of f_k and g_k by δ_z, and replace each occurrence of a_{i,j,k} by α. Denote by S′ = D′ − C′ the splitting which is analogous, as far as sparsity structure is concerned, to the splitting S̄ = D̄ − C̄. For the 1D splitting, the matrix D′ is block diagonal with semibandwidth 4, its sparsity structure is identical to that of D̄, and moreover, its entries are componentwise smaller than or equal to the entries of D̄. By [9, Lem. 3.3], D′ is an irreducible diagonally dominant M-matrix.
The matrix C′ is nonnegative and satisfies C′ ≥ C̄. Thus the Perron-Frobenius theorem [13, p. 30] can be used to obtain an upper bound on the convergence rate for this splitting. Since the matrix S′ can now be referred to as a symmetrized version of a matrix that is associated with a constant coefficient case, the bound on the convergence rate is readily obtained from [9, Thm. 3.15]. For the 2D splitting the procedure is completely analogous and the bound is obtained from [8, Thm. 3.6]. □
We remark that estimates for the convergence rates of block Gauss-Seidel and
block SOR schemes can be obtained by using the "near property A" analysis presented
in [7], [8].
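For small problems the quality of such bounds can be checked directly. The following sketch (Python; the 1D Laplacian is a toy example) computes the spectral radius of the block Jacobi iteration matrix for a given uniform block partitioning:

```python
import numpy as np

def block_jacobi_rho(A, block_size):
    """Spectral radius of the block Jacobi iteration matrix D^{-1} C,
    where D keeps the diagonal blocks of A and C := D - A."""
    n = A.shape[0]
    D = np.zeros_like(A)
    for start in range(0, n, block_size):
        sl = slice(start, start + block_size)
        D[sl, sl] = A[sl, sl]
    T = np.linalg.solve(D, D - A)
    return max(abs(np.linalg.eigvals(T)))

n = 16
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # toy 1D Laplacian
print("rho(block Jacobi) =", block_jacobi_rho(A, 4))
```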
6. Numerical experiments. In the examples that follow, we begin with some
results which validate the convergence analysis of section 5 for stationary methods.
We then compare the performance of Krylov subspace solvers for the reduced and the
unreduced systems. The experiments were performed on an SGI Origin 2000 machine.
The program is written in Matlab 5.
6.1. Test problem 1. Consider the separable problem (6.1), where w(x, y, z) is constructed so that the solution is the function u(x, y, z) given in (6.2). For notational convenience, denote the bounds of Theorem 5.1 by δ_x, δ_y, and δ_z. Suppose h is sufficiently small and centered difference discretization is performed. Then the products c_{i+1} d_i, b_{j+1} e_j, and f_{k+1} g_k can be bounded from above, and hence the required bounds are obtained; if upwind differences are used, the bounds are obtained in an identical manner. The center of the computational molecule is a = 6. In terms of the PDE coefficients, the condition on these bounds means that the convergence analysis of section 5 is applicable if the PDE coefficients are O(n).

[Table 6.1. Comparison between the computed spectral radii of the block Jacobi iteration matrices and the bounds, using centered differences for the two splittings (1D and 2D).]
If the above conditions hold, the matrix is symmetrizable. Using the notation of the previous sections, let S̄ be the symmetrized matrix, and let S′ be a modified version of S̄, such that each occurrence of c_{i+1} d_i, b_{j+1} e_j, and f_{k+1} g_k in S̄ is replaced by the upper bounds, namely δ_x, δ_y, and δ_z, respectively. Since S′ is a symmetrized version of a matrix corresponding to the constant coefficient case, by [9, Lem. 3.3] it is a diagonally dominant M-matrix. Using Theorem 5.1, the bounds on the convergence rate of the block Jacobi method are given in Table 6.1. As is evident, the bounds are tight even for small n. It should be noted, however, that as the PDE coefficients grow larger, the bounds are not expected to be as tight, as the inequalities in (6.4) become less effective.
Next, the spectral radii of the block Jacobi and block Gauss-Seidel iteration matrices and an approximation to the optimal SOR parameter have been computed for both the upwind scheme and the centered scheme. The relaxation parameter was computed according to the formula
    ω = 2 / ( 1 + sqrt( 1 − ρ_J² ) ).
It is only an approximation to the optimal SOR parameter since the matrix is only "close" to being consistently ordered [8]. The experiments were done on an 8 × 8 × 8 grid (512 gridpoints). In Tables 6.2 and 6.3 the superscripts c and u stand for centered or upwind, respectively, R and U stand for reduced system and unreduced system, respectively, and the subscripts J and GS stand for Jacobi and Gauss-Seidel, respectively. We present the results for two different cases, one of which has convection of moderate size, and the other with large convection, for which the upwind scheme is more effective than the centered scheme.

[Table 6.2. Spectral radius of the block Jacobi, block Gauss-Seidel, and approximate optimal relaxation parameter for the reduced system, using both upwind and centered differences and 1D splitting.]

[Table 6.3. Spectral radii of the block Jacobi, block Gauss-Seidel, and optimal relaxation parameter for the unreduced system, using both upwind and centered differences with 1D splitting.]
Note that for the unreduced system, in most cases the matrix satisfies all the
conditions required for Young's SOR analysis [14]; thus, the spectral radius of the
Gauss-Seidel matrix and the optimal relaxation parameter can be computed from
the spectral radius of the block Jacobi matrix. By comparing Tables 6.2 and 6.3
it is evident that stationary solvers for the reduced system converge faster than for
the unreduced system. In one case there is convergence for the reduced system and
divergence for the unreduced system.
Moving to consider Krylov subspace solvers, in Table 6.4 we make a comparison
between the performance of solvers for the two systems. The stopping criterion was
relative residual smaller than 10^{−10}. The method that is used is nonpreconditioned
Bi-CGSTAB. The table presents information on the complete process, namely, construction
of the systems and the iterative solves. The increase in iteration counts
as the grid is refined agrees with theory, at least if one assumes that for this well
conditioned and mildly nonsymmetric system, the condition number is of magnitude
O(h^{−2}) and the convergence rate is similar to that of the conjugate gradient method
for symmetric positive definite systems. When one step of cyclic reduction is applied,
the savings become more dramatic as the systems grow larger. An explanation for
this is that the construction of the reduced system, which requires significantly more
floating point operations compared with the construction of the unreduced system,
becomes a less significant factor in the overall computation as the grid becomes finer.
In general, since the iterative solve is the costly component of the computation, it is
significant that the number of iterations until convergence of the unreduced solver is
larger by a factor of approximately 2 compared with the reduced solver. Figure 6.1
illustrates the saving and the convergence behavior for this problem.
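A sketch of this kind of comparison (Python with SciPy's bicgstab; the 3D Laplacian stands in for the discretized operator, and the keyword name for the tolerance differs between SciPy versions):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

def laplace3d(n):
    I = sp.identity(n)
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    return (sp.kron(sp.kron(T, I), I) + sp.kron(sp.kron(I, T), I)
            + sp.kron(sp.kron(I, I), T)).tocsr()

def reduce_system(A, w, red, black):
    Binv = sp.diags(1.0 / A[red][:, red].diagonal())
    C, F, E = A[black][:, red], A[red][:, black], A[black][:, black]
    return (E - C @ Binv @ F).tocsr(), w[black] - C @ (Binv @ w[red])

def count_iters(A, b):
    it = [0]
    def cb(xk):
        it[0] += 1
    _, info = bicgstab(A, b, rtol=1e-10, atol=0.0,  # older SciPy: tol= instead
                       maxiter=10000, callback=cb)
    return it[0]

n = 8
A, b = laplace3d(n), np.ones(n**3)
parity = np.indices((n, n, n)).sum(axis=0).ravel() % 2
red, black = np.where(parity == 0)[0], np.where(parity == 1)[0]
S, g = reduce_system(A, b, red, black)
print("unreduced:", count_iters(A, b), "iterations;",
      "reduced:", count_iters(S, g), "iterations")
```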
In Table 6.5 we provide some numerical evidence which suggests that the good performance of reduced solvers is due to effective preconditioning of the original matrix. In the table, estimates of the condition numbers for the unreduced and the reduced matrices with upwind difference discretization are presented. The estimates were obtained using Matlab's command "condest." The factor of approximately 2 has been obtained for several additional cases that have been tested. More observations on the condition number of the reduced matrix can be found in [7].

[Table 6.4. Comparison between the performance of the unreduced and the reduced solvers for increasing mesh size, using nonpreconditioned Bi-CGSTAB; columns: Iterations, Mflops, and Time (sec.), each for the unreduced and the reduced system.]

[Fig. 6.1. Relative residuals versus iterations for nonpreconditioned Bi-CGSTAB applied to linear systems arising from discretization of the test problem. The residual associated with the reduced solver is the lower curve.]
6.2. Test problem 2. Consider a nonseparable problem of the form (1.1) with variable coefficients, where w(x, y, z) is constructed so that the solution is (6.2). For this problem the convergence analysis of section 5 does not apply. Results are given in Table 6.6. The experiments were done on a 24 × 24 × 24 tensor-product grid (13,824 gridpoints). GMRES(5) [12] was used, preconditioned by ILU with drop tolerance of 10^{−3}. The stopping criterion was based on the relative residual ||r_i||/||r_0||. In all cases that have been tested, setting up and solving the reduced system is faster compared with setting up and solving the unreduced system. It should be noted that the CPU times are affected by a long preconditioner setup time. When convection dominates, the centered scheme performs poorly. (In general, this scheme suffers numerical instability when the Reynolds numbers are large [10].) Additional numerical experiments indicate that both solvers have the property that for centered difference discretization the solver of mildly nonsymmetric systems converges faster than the solver of a close-to-symmetric system. This phenomenon was proved analytically for stationary methods for the constant coefficient case in [2] (2D) and in [9] (3D).
[Table 6.5. Comparison between estimates of condition numbers of the unreduced matrix (denoted by U) vs. the reduced matrix (R).]

Table 6.6
Comparison of CPU times (seconds) for setting up and solving the unreduced system and the reduced system using ILU+GMRES. R and U stand for reduced and unreduced, respectively, and the subscripts c and u stand for centered and upwind, respectively.

          U_c      R_c      U_u      R_u
   50     91.2     43.5     48.7     32.7
  100    148.2     37.3     62.1     23.8
6.3. Test problem 3. Consider a nonseparable problem of the form (1.1), with Neumann boundary conditions on part of the boundary and Dirichlet conditions on the rest, on the unit cube. w was constructed so that the exact solution is a smooth function containing the factor cos(πz). Here we compare the performance of a few Krylov subspace solvers; thus, the focus is on the actual iterative solve time, once the systems and the preconditioners were set up.
The results in Table 6.7 are for a 20 × 20 × 20 grid. ILU(0) was used as a preconditioner. The stopping criterion was based on the relative residual ||r_i||/||r_0||. In all cases the reduced
solver converges faster than the unreduced solver. The factor of approximately 2 can
be a good indication for the gain in performing one step of cyclic reduction in much
finer grids. Bi-CGSTAB is slightly faster than CGS. These two schemes are faster than
BiCG. The differences in performance between the solvers are qualitatively similar for
the reduced and the unreduced systems.
[Table 6.7. Iteration counts and solving times for three Krylov solvers (BiCG, CGS, Bi-CGSTAB); columns: Method, Iterations (Unreduced, Reduced), and Time in seconds (Unreduced, Reduced).]
7. Concluding remarks. A cyclically reduced operator for a 3D convection-diffusion equation with variable coefficients has been derived. Block orderings have been discussed, some solving techniques for the reduced system have been examined, and numerical experiments illustrate the fact that the reduced system is easier to solve than the unreduced system.
The results presented in this work show that one step of cyclic reduction can be effectively used as a preconditioning technique for solving the convection-diffusion equation with variable coefficients. The questions of parallelism and applications are topics for further investigation.
Acknowledgments. I am very grateful to Jim Varah for carefully reading this manuscript and for his valuable advice. I would also like to thank Gene Golub, who has held helpful discussions with me and has pointed out some useful references. Many thanks to the two referees for their helpful comments and suggestions.
--R
Orderings for incomplete factorization preconditioning of nonsymmetric problems.
Iterative methods for cyclically reduced non-self-adjoint linear systems
Iterative methods for cyclically reduced non-self-adjoint linear systems II
Line iterative methods for cyclically reduced discrete convection-diffusion problems
Matrix Computations
Cyclic Reduction/Multigrid
Analysis of Cyclic Reduction for the Numerical Solution of Three-Dimensional Convection-Diffusion Equations
Block stationary methods for nonsymmetric cyclically reduced systems arising from three-dimensional elliptic equations SIAM J
Iterative solution of cyclically reduced systems arising from discretization of the three-dimensional convection-diffusion equation
Don't suppress the wiggles
Numerical Solution of Convection-Diffusion Problems
Iterative Methods for Sparse Linear Systems
Matrix Iterative Analysis
Iterative Solution of Large Linear Systems
--TR
--CTR
M. Cheung , Michael K. Ng, Block-circulant preconditioners for systems arising from discretization of the three-dimensional convection-diffusion equation, Journal of Computational and Applied Mathematics, v.140 n.1-2, p.143-158, 1 March 2002 | cyclic reduction;variable coefficients;3D elliptic problems |
334051 | On the Sequence of Consecutive Powers of a Matrix in a Boolean Algebra. | In this paper we consider the sequence of consecutive powers of a matrix in a Boolean algebra. We characterize the ultimate behavior of this sequence, we study the transient part of the sequence, and we derive upper bounds for the length of this transient part. We also indicate how these results can be used in the analysis of Markov chains and in max-plus algebraic system theory for discrete event systems. | Introduction
. In this paper we consider the sequence of consecutive powers
of a matrix in a Boolean algebra. This sequence reaches a "cyclic" behavior after a finite number of terms. Even for more complex algebraic structures, such as the max-plus algebra (which has maximization and addition as its basic operations), this ultimate behavior has already been studied extensively by several authors (see, e.g., [1, 9, 13, 26] and the references therein). In this paper we completely characterize the ultimate behavior of the sequence of the consecutive powers of a matrix in a Boolean algebra. Furthermore, we also study the transient part of this sequence. More specifically, we give upper bounds for the length of the transient part of the sequence as a function of structural parameters of the matrix.
Our main motivation for studying this problem lies in the max-plus-algebraic
system theory for discrete event systems. Furthermore, our results can also be used
in the analysis of the transient behavior of Markov chains.
This paper is organized as follows. In §2 we introduce some of the notations and concepts from number theory, Boolean algebra, matrix algebra and graph theory that will be used in the paper. In §3 we characterize the ultimate behavior of the sequence of consecutive powers of a given matrix in a Boolean algebra, and we derive upper bounds for the length of the transient part of this sequence. In §4 we briefly sketch how our results can be used in the analysis of Markov chains and in the max-plus-algebraic system theory for discrete event systems. In this section we also explain why we have restricted ourselves to Boolean algebras in this paper and we indicate some of the phenomena that should be taken into account when extending our results to more general algebraic structures. Finally, we present some conclusions in §5.
2. Preliminaries.
2.1. Notation, definitions and some lemmas from number theory. In this paper we use "vector" as a synonym for "column matrix". If a is a vector, then a_i is the ith component of a. If A is a matrix, then a_ij or (A)_ij is the entry on the ith row and the jth column, and A_{αβ} is the submatrix of A obtained by removing all rows that are not indexed by the set α and all columns that are not indexed by the set β.
* Control Laboratory, Faculty of Information Technology and Systems, Delft University of Technology, P.O. Box 5031, 2600 GA Delft, The Netherlands (b.deschutter@its.tudelft.nl).
† ESAT/SISTA, K.U.Leuven, Kardinaal Mercierlaan 94, B-3001 Heverlee (Leuven), Belgium (bart.demoor@esat.kuleuven.ac.be).
Table 2.1
The operations ⊕ and ⊗ for the Boolean algebra ({0, 1}, ⊕, ⊗): 0 ⊕ 0 = 0, 0 ⊕ 1 = 1 ⊕ 0 = 1 ⊕ 1 = 1, and 0 ⊗ 0 = 0 ⊗ 1 = 1 ⊗ 0 = 0, 1 ⊗ 1 = 1.
The set of the real numbers is denoted by R, the set of the nonnegative integers by N, and the set of the positive integers by N_0.
If S is a set, then the number of elements of S is denoted by #S. If Λ is a set of positive integers, then the least common multiple of the elements of Λ is denoted by lcm Λ and the greatest common divisor of the elements of Λ is denoted by gcd Λ.
If x ∈ R then ⌈x⌉ is the smallest integer that is larger than or equal to x, and ⌊x⌋ is the largest integer that is less than or equal to x.
Lemma 2.1. Let p, q ∈ N_0 be coprime. The smallest integer n such that for any integer m ≥ n there exist two nonnegative integers α and β such that m = αp + βq is given by n = (p − 1)(q − 1).
Proof. See, e.g., the proof of Lemma 3.5.5 of [5].
Let a_1, a_2, ..., a_n ∈ N_0 with gcd(a_1, a_2, ..., a_n) = 1. Define g(a_1, a_2, ..., a_n) to be the largest positive integer N for which the equation a_1 x_1 + a_2 x_2 + ⋯ + a_n x_n = N subject to x_1, x_2, ..., x_n ∈ N has no solution. From Lemma 2.1 it follows that g(a, b) = ab − a − b. Although a formula exists for the case where n = 2, no closed formulas are known for n ≥ 3. However, some upper bounds have been proved [4, 11]:
Lemma 2.2. If a_1, a_2, ..., a_n ∈ N_0 with a_1 < a_2 < ⋯ < a_n and gcd(a_1, a_2, ..., a_n) = 1, then g(a_1, a_2, ..., a_n) ≤ (a_1 − 1)(a_n − 1) − 1.
Lemma 2.3. If a_1, a_2, ..., a_n ∈ N_0 with a_1 < a_2 < ⋯ < a_n and gcd(a_1, a_2, ..., a_n) = 1, then we have g(a_1, a_2, ..., a_n) ≤ 2a_{n−1}⌊a_n/n⌋ − a_n.
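To make the quantity g concrete, here is a small Python sketch (my own, not from the paper) that computes g(a_1, ..., a_n) by the classical coin-problem dynamic program and checks the two-argument formula of Lemma 2.1; the helper name frobenius_number is hypothetical.

from math import gcd
from functools import reduce

def frobenius_number(coins):
    """Largest N that is not a nonnegative integer combination of the
    given coprime positive integers (-1 if every N >= 1 is)."""
    assert reduce(gcd, coins) == 1
    # By Lemma 2.2, g <= (min-1)(max-1) - 1, so this scan range suffices.
    limit = (min(coins) - 1) * (max(coins) - 1)
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for c in coins:
        for v in range(c, limit + 1):
            reachable[v] = reachable[v] or reachable[v - c]
    non_rep = [v for v in range(limit + 1) if not reachable[v]]
    return max(non_rep) if non_rep else -1

# The two-argument case matches g(a, b) = ab - a - b:
assert frobenius_number([3, 5]) == 3 * 5 - 3 - 5  # = 7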
2.2. Boolean algebra. A Boolean algebra is an algebraic structure of the form ({0, 1}, ⊕, ⊗) such that the operations ⊕ and ⊗ applied on 0 and 1 yield the results of Table 2.1, where ⊕ and ⊗ are associative, and where ⊗ is distributive with respect to ⊕. The element 0 is called the Boolean zero element, 1 is called the Boolean identity element, ⊕ is called the Boolean addition and ⊗ is called the Boolean multiplication.
Some examples of Boolean algebras are: ({false, true}, or, and), ({0, 1}, min, +), and so on (see [1, 15]). In this paper we shall use the following examples of Boolean algebras in order to transform known results from the max-plus algebra and from nonnegative matrix algebra to Boolean algebra:
1. The Boolean algebra ({−∞, 0}, max, +) is a subalgebra of the max-plus algebra (R ∪ {−∞}, max, +).
2. The Boolean algebra ({0, p}, +, ×), where p stands for an arbitrary positive number¹, can be considered as a Boolean restriction of the nonnegative algebra.
A matrix with entries in B is called a Boolean matrix. The operations ⊕ and ⊗ are extended to matrices as follows. If A, B ∈ B^{m×n} then we have (A ⊕ B)_ij = a_ij ⊕ b_ij for all i, j. If A ∈ B^{m×p} and B ∈ B^{p×n} then (A ⊗ B)_ij = ⊕_{k=1}^{p} a_ik ⊗ b_kj for all i, j. Note that these definitions resemble the definitions of the sum and the product of matrices in linear algebra, but with ⊕ instead of + and ⊗ instead of ×.
The n by n Boolean identity matrix is denoted by I_n, the m by n Boolean zero matrix is denoted by O_{m×n}, and the m by n matrix all the entries of which are equal to 1 is denoted by E_{m×n}. If the dimensions of these matrices are not indicated, they should be clear from the context.
The Boolean matrix power of the matrix A ∈ B^{n×n} is defined as follows: A^⊗0 = I_n, and A^⊗k = A ⊗ A^⊗(k−1) for k ∈ N_0.
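As a minimal Python sketch of these definitions (0/1 integers stand for the Boolean elements; all helper names are my own):

def b_add(A, B):       # (A ⊕ B)_ij = a_ij ⊕ b_ij
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def b_mul(A, B):       # (A ⊗ B)_ij = ⊕_k a_ik ⊗ b_kj
    p = len(B)
    return [[max(A[i][k] & B[k][j] for k in range(p))
             for j in range(len(B[0]))] for i in range(len(A))]

def b_pow(A, k):       # A^⊗0 = I_n, A^⊗k = A ⊗ A^⊗(k-1)
    n = len(A)
    P = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(k):
        P = b_mul(A, P)
    return P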
If we permute the rows or the columns of the Boolean identity matrix, we obtain a Boolean permutation matrix. If P ∈ B^{n×n} is a Boolean permutation matrix, then we have P ⊗ P^T = P^T ⊗ P = I_n. A matrix R ∈ B^{m×n} is a Boolean upper triangular matrix if r_ij = 0 for all i > j.
2.3. Boolean algebra and graph theory. We assume that the reader is familiar with basic concepts of graph theory such as directed graph, path, (elementary) circuit, and so on (see, e.g., [1, 18, 27]). In this paper we shall use the definitions of [1] since they are well suited for our proofs. Sometimes these definitions differ slightly from the definitions adopted by other schools in the literature. The most important differences are:
In this paper we also consider empty paths, i.e., paths that consist of only one vertex and have length 0. However, unless it is explicitly specified, we always assume that paths have a nonzero length.
The precedence graph of the matrix A ∈ B^{n×n}, denoted by G(A), is a directed graph with vertices 1, 2, ..., n and an arc (j, i) for each a_ij ≠ 0. Note that vertex i is the end point of this arc.
A directed graph is called strongly connected if for any two different vertices v_i, v_j there exists a path from v_i to v_j. Note that this implies that a graph consisting of one vertex (with or without a loop) is always strongly connected.
A matrix is irreducible if its precedence graph is strongly connected. Since according to the definition we use a graph with only one vertex is always strongly connected, the 1 by 1 Boolean zero matrix [0] is irreducible. However, the 1 by 1 Boolean zero matrix [0] is the only Boolean zero matrix that is irreducible.
Let us now give a graph-theoretic interpretation of the Boolean matrix power. Let A ∈ B^{n×n} and let k ∈ N_0. Recall that there is an arc (j, i) in G(A) if and only if a_ij ≠ 0. We have (A^⊗k)_ij = ⊕ a_{i i_1} ⊗ a_{i_1 i_2} ⊗ ⋯ ⊗ a_{i_{k−1} j}, where the Boolean sum is taken over all i_1, i_2, ..., i_{k−1} ∈ {1, ..., n}. Since 0 is absorbing for ⊗, this implies that (A^⊗k)_ij is equal to 1 if and only if there exists a path of length k from vertex j to vertex i in G(A).
A maximal strongly connected subgraph (m.s.c.s.) G_sub of a directed graph G is a strongly connected subgraph that is maximal, i.e., if we add an extra vertex (and some extra arcs) of G to G_sub then G_sub is no longer strongly connected.
A well-known result from matrix algebra states that any square matrix can be transformed into a block upper triangular matrix with irreducible blocks by simultaneously reordering the rows and columns of the matrix (see, e.g., [1, 2, 5, 12, 17, 22] for the proof of this theorem and for its interpretation in terms of graph theory and Markov chains):
Theorem 2.4. If A ∈ B^{n×n} then there exists a permutation matrix P ∈ B^{n×n} such that the matrix Â = P ⊗ A ⊗ P^T is a block upper triangular matrix of the form

    Â = [ Â_11  Â_12  ...  Â_1l ]
        [ O     Â_22  ...  Â_2l ]
        [ ...          ...  ... ]
        [ O     O     ...  Â_ll ]    (1)

with l ≥ 1 and where the matrices Â_11, Â_22, ..., Â_ll are square and irreducible. The matrices Â_11, Â_22, ..., Â_ll are uniquely determined to within simultaneous permutation of their rows and columns, but their ordering in (1) is not necessarily unique.
The form in (1) is called the Frobenius normal form of the matrix A. If A is irreducible then there is only one block in (1) and then A is a Frobenius normal form of itself. Each diagonal block of Â corresponds to an m.s.c.s. of the precedence graph of Â.
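Under the stated arc convention (an arc (j, i) for each a_ij ≠ 0), the permutation of Theorem 2.4 can be obtained from the strongly connected components of G(A). A sketch using Kosaraju's algorithm (my own code, not from the paper):

def frobenius_permutation(A):
    """Return a vertex order that puts A in the block upper triangular
    form (1); the paper's convention gives an arc j -> i iff a_ij != 0."""
    n = len(A)
    adj = [[i for i in range(n) if A[i][j]] for j in range(n)]   # arcs j -> i
    radj = [[j for j in range(n) if A[i][j]] for i in range(n)]  # reversed arcs
    seen, order = [False] * n, []
    def dfs(g, v, out):
        seen[v] = True
        stack = [(v, iter(g[v]))]
        while stack:
            u, it = stack[-1]
            nxt = next((w for w in it if not seen[w]), None)
            if nxt is None:
                stack.pop(); out.append(u)   # record finish order
            else:
                seen[nxt] = True
                stack.append((nxt, iter(g[nxt])))
    for v in range(n):
        if not seen[v]:
            dfs(adj, v, order)
    seen, sccs = [False] * n, []
    for v in reversed(order):                # peel SCCs on the reverse graph
        if not seen[v]:
            comp = []
            dfs(radj, v, comp)
            sccs.append(comp)
    # Kosaraju emits the SCCs in topological order of G(A); reversing makes
    # every arc point from a later block to an earlier or equal block, which
    # is exactly block *upper* triangularity of P ⊗ A ⊗ P^T here.
    sccs.reverse()
    return [v for comp in sccs for v in comp]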
Theorem 2.5. If A ∈ B^{n×n} is irreducible, then there exist integers k_0 ∈ N and c ∈ N_0 such that
A^⊗(k+c) = λ^⊗c ⊗ A^⊗k for all k ≥ k_0, (2)
where λ is equal to 1 if there exists a circuit in G(A), and equal to 0 otherwise.
Proof. See, e.g., [1, 7, 13].
The smallest c for which (2) holds is called the cyclicity [1], index of cyclicity [2] or index of imprimitivity² [5, 12] of the matrix A. The cyclicity c(A) of a matrix A is equal to the cyclicity of the precedence graph G(A) of A and can be computed as follows. The cyclicity of a strongly connected graph or of an m.s.c.s. is the greatest common divisor of the lengths of all the circuits of the given graph or m.s.c.s. If an m.s.c.s. or a graph contains no circuits then its cyclicity is equal to 0 by definition. The cyclicity of a general graph is the least common multiple of the nonzero cyclicities of the m.s.c.s.'s of the given graph.
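This recipe translates directly into code. A sketch (helper names mine): the cyclicity of one m.s.c.s. equals the gcd of the lengths of its closed walks, which can be read off from BFS levels; the graph's cyclicity is the lcm of the nonzero component values. (The arc direction is irrelevant here, since circuit lengths are unchanged by reversal.)

from math import gcd
from functools import reduce

def scc_cyclicity(adj, comp):
    """gcd of all circuit lengths inside one strongly connected
    component comp; 0 if the component has no circuit."""
    comp = list(comp)
    if len(comp) == 1 and comp[0] not in adj[comp[0]]:
        return 0                      # single vertex without a loop
    inside = set(comp)
    level, queue = {comp[0]: 0}, [comp[0]]
    for u in queue:                   # BFS levels within the component
        for v in adj[u]:
            if v in inside and v not in level:
                level[v] = level[u] + 1
                queue.append(v)
    g = 0
    for u in comp:
        for v in adj[u]:
            if v in inside:           # each arc closes a walk whose length
                g = gcd(g, abs(level[u] + 1 - level[v]))  # is 0 mod c
    return g

def cyclicity(adj, sccs):
    cs = [scc_cyclicity(adj, comp) for comp in sccs]
    nz = [c for c in cs if c > 0]
    return reduce(lambda a, b: a * b // gcd(a, b), nz) if nz else 0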
Lemma 2.6. If A ∈ B^{n×n} is irreducible then c(A) ≤ n.
Proof. Let c = c(A). Since A is irreducible, G(A) contains only one m.s.c.s. If A = O then n = 1 and c = 0 ≤ n. From now on we assume that A ≠ O. Since c is the greatest common divisor of the lengths of the (elementary) circuits in G(A), c is maximal if there is only one circuit and if this circuit has length n. In that case we have c = n. In the other cases, c will be less than n.
Lemma 2.7. Let A ∈ B^{n×n} be irreducible and let c be the cyclicity of A. Consider i, j ∈ {1, ..., n}. If c > 0 and if there exists a (non-empty) path of length l_1 from j to i and a (non-empty) path of length l_2 from j to i, then there exists a (possibly negative) integer z such that l_2 = l_1 + zc.
² We prefer to use the word "cyclicity" or "index of cyclicity" in this paper in order to avoid confusion with the concept "index of primitivity" [2, 25] of a nonnegative matrix A, which is defined to be the least positive integer γ(A) such that all the entries of A^γ(A) are positive.
Proof. This lemma is a reformulation of Lemma 3.4.1 of [5], which states that if G is a strongly connected directed graph with cyclicity c then for each pair of vertices j and i of G, the lengths of the paths from j to i are congruent modulo c.
Remark 2.8. Consider A ∈ B^{n×n} and i, j ∈ {1, ..., n}. Let l_ij be the length of the shortest path from vertex j to vertex i in G(A). Note that Lemma 2.7 does not imply that there exists a path of length l_ij + kc from j to i for every k ∈ N. ◊
In the next section we discuss upper bounds for the integer k_0 that appears in Theorem 2.5. We also extend this theorem to Boolean matrices that are not irreducible.
3. Consecutive powers of a Boolean matrix. In this section we consider the sequence {A^⊗k}_{k=1}^∞ where A is a Boolean matrix. First we consider matrices with a cyclicity that is equal to 0. Next we consider matrices with a cyclicity that is larger than or equal to 1. Here we shall make a distinction between four different cases depending on whether the given matrix is irreducible or not, and on whether its cyclicity is equal to 1, or larger than or equal to 1. Of course the last case that will be considered is the most general one, but for the other cases we can provide tighter upper bounds on the length of the transient part of the sequence {A^⊗k}_{k=1}^∞, and that is why we consider four different cases.
If possible we also give examples of matrices for which the sequence of the consecutive matrix powers exhibits the longest possible transient behavior.
3.1. Boolean matrices with a cyclicity that is equal to 0. Lemma 3.1. Let A ∈ B^{n×n}. If c(A) = 0 then we have A^⊗k = O for all k ≥ n.
Proof. If the cyclicity of A is equal to 0, then there are no circuits in G(A), which means that there do not exist paths in G(A) with a length that is larger than or equal to n, since in such paths at least one vertex would appear twice, which implies that such paths contain a circuit. Therefore, we have A^⊗k = O for all k ≥ n.
Example 3.2. If there exists a permutation matrix P such that A ∈ B^{n×n} can be written as Â = P ⊗ A ⊗ P^T with â_{i,i+1} = 1 for i = 1, ..., n − 1 and all other entries equal to 0, then the upper bound of Lemma 3.1 is tight, i.e., we have A^⊗(n−1) ≠ O and A^⊗k = O for all k ≥ n. The graph of the matrix Â is represented in Figure 3.1. Note that G(Â) contains no circuits and that c(A) = c(Â) = 0, since the transformation from A to Â corresponds to a simultaneous reordering of the rows and the columns of A (or of the vertices of G(A)). ◊
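Assuming the b_pow sketch from §2.2 is in scope, the tightness claim of this example can be checked numerically for a small n:

n = 5
A = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]
assert any(x for row in b_pow(A, n - 1) for x in row)   # A^⊗(n-1) ≠ O
assert not any(x for row in b_pow(A, n) for x in row)   # A^⊗k = O for k ≥ n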
From now on we only consider matrices with a cyclicity that is larger than or equal
to 1.
3.2. Boolean matrices with cyclicity 1. Theorem 3.3. Let A ∈ B^{n×n}. If the cyclicity of A is equal to 1 and if A is irreducible, then we have
A^⊗k = A^⊗(k+1) = E_{n×n} for all k ≥ (n − 1)² + 1.
Proof. This theorem can be considered as the Boolean equivalent of Theorem 4.14 of [2] or of Theorem 3.5.6 of [5]. Note that A cannot be equal to [0] since c([0]) = 0.
Fig. 3.1. The precedence graph of the matrix Â of Example 3.2.
If more information about the structure of A is known (such as the number of diagonal entries that are equal to 1, the length of the shortest elementary circuit of G(A), or whether A is symmetric), other upper bounds for the length of the transient part of the sequence {A^⊗k}_{k=1}^∞, where A is a Boolean matrix with cyclicity 1, can be found in §2.4 of [2].
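For experimentation, the exact transient of an irreducible matrix with cyclicity 1 (the smallest k with A^⊗k = E_{n×n}) can be found by brute force; this sketch reuses the hypothetical b_mul helper from §2.2.

def transient_to_full(A, limit=None):
    """Smallest k with A^⊗k = E; assumes A irreducible with c(A) = 1."""
    n = len(A)
    limit = limit if limit is not None else (n - 1) ** 2 + 2
    P = A
    for k in range(1, limit + 1):
        if all(all(row) for row in P):
            return k
        P = b_mul(A, P)
    return None

On the Wielandt-type matrix of Example 3.4 below, this returns (n − 1)² + 1, matching the tightness claim.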
Example 3.4. If there exists a permutation matrix P such that A ∈ B^{n×n} can be written as Â = P ⊗ A ⊗ P^T, where G(Â) consists of the circuit 1 → 2 → ⋯ → n → 1 together with the extra arc n − 1 → 1 (so that â_{1,n} = â_{1,n−1} = 1, â_{i+1,i} = 1 for i = 1, ..., n − 1, and all other entries are 0), then the bound in Theorem 3.3 is tight: we have A^⊗k = E_{n×n} for all k ≥ (n − 1)² + 1 and A^⊗((n−1)²) ≠ E_{n×n}. Let us now show that the latter part of this statement indeed holds.
Since the transformation from A to Â corresponds to a simultaneous reordering of the rows and the columns of A, we may assume without loss of generality that P is the identity matrix, so that A = Â.
If n = 2 then we have (n − 1)² = 1, and since Â ≠ E_{2×2} while Â^⊗2 = E_{2×2}, we indeed have Â^⊗((n−1)²) ≠ E_{2×2}.
From now on we assume that n > 2. In Figure 3.2 we have drawn G(Â). There are two elementary circuits in G(Â): the circuit C_1 = 1 → 2 → ⋯ → n → 1 of length n and the circuit C_2 = 1 → 2 → ⋯ → n − 1 → 1 of length n − 1. Note that only the longest circuit passes through vertex n. Furthermore, if n > 2 then gcd(n, n − 1) = 1 and hence c(Â) = 1.
Fig. 3.2. The precedence graph of the matrix Â of Example 3.4.
Any circuit that passes through vertex n can be considered as a concatenation of α times C_1, a path from vertex n to a vertex t in C_2, β times C_2, and a path from t to n, for some nonnegative integers α and β; the length of such a circuit is equal to αn + β(n − 1) with α ≥ 1. By Lemma 2.1 the smallest integer N such that for any integer m ≥ N there exist nonnegative integers α and β with m = αn + β(n − 1) is given by N = (n − 1)(n − 2). This implies that (n − 1)(n − 2) − 1 cannot be written as αn + β(n − 1) with α, β ∈ N, and hence that (n − 1)² = (n − 1)(n − 2) − 1 + n cannot be realized as the length of a circuit that passes through vertex n. Hence, (Â^⊗((n−1)²))_{nn} = 0 and thus Â^⊗((n−1)²) ≠ E_{n×n}. ◊
If Â = P ⊗ A ⊗ P^T is the Frobenius normal form of A, then we have Â^⊗k = P ⊗ A^⊗k ⊗ P^T and hence A^⊗k = P^T ⊗ Â^⊗k ⊗ P for all k ∈ N. Therefore, we may consider without loss of generality the sequence {Â^⊗k}_{k=1}^∞ instead of the sequence {A^⊗k}_{k=1}^∞. Furthermore, since the transformation from A to Â corresponds to a simultaneous reordering of the rows and columns of A (or to a reordering of the vertices of G(A)), we have c(A) = c(Â).
Theorem 3.5. Let Â ∈ B^{n×n} be a matrix of the form (1) where the matrices Â_11, Â_22, ..., Â_ll are irreducible and such that c(Â_ii) ≤ 1 for all i. Define sets γ_1, ..., γ_l such that Â_ij = (Â)_{γ_i γ_j} for all i, j, and let n_i = #γ_i for all i. Let
for all we have for all
A
On
for all k
For all
A
On
for all k
Remark 3.6. Note that Â_ij is an n_i by n_j matrix for all i, j ∈ {1, ..., l}.
Let us now give a graphical interpretation of the sets S_ij and the numbers #_ij.
Let C_i be the m.s.c.s. of G(Â) that corresponds to Â_ii for i = 1, ..., l; γ_i is the set of vertices of C_i.
If there exists a path from a vertex in C i r
to a vertex in C i r 1
for each each m.s.c.s. C i of G( ^
either is
strongly connected or consists of only one vertex, this implies that there exists a path
from a vertex in C j to a vertex in C i that passes through C i s 1
there does not exist any path from a vertex in C j to a vertex in C i .
A set in S_ij collects the indices of the m.s.c.s.'s of G(Â) through which some path from a vertex of C_j to a vertex of C_i passes.
C_{r_ij} is the smallest m.s.c.s. of G(A) that contains a circuit and through which some path from a vertex of C_j to a vertex of C_i passes³. ◊
Proof. Proof of Theorem 3.5.
Let C i be the m.s.c.s. of G( ^
that corresponds to ^
A ii for
O if i > j, there are no arcs from any vertex of C j to a vertex in C i . As a consequence,
(5) holds if i > j.
Note that c( ^
A ii
since
each
A ii corresponds to an m.s.c.s. of G( ^
A).
A is irreducible and then (4) holds by Theorem 3.3. It is easy to verify
that (4) holds if
From now on we assume that l > 1 and i < j.
there does not exist a path from a vertex in C j to a vertex in C i .
A
O for all k 2 N.
. So there exist paths
from a vertex in C j to a vertex in C i , but each path passes only through m.s.c.s.'s
that consist of one vertex and contain no loop. Such a path passes through at most
³ Or more precisely: C_{r_ij} belongs to the set of the smallest m.s.c.s.'s of G(A) that contain a circuit and through which some path from a vertex of C_j to a vertex of C_i passes.
Fig. 3.3. Illustration of the proof of Theorem 3.5: there exists a path from vertex u_s of m.s.c.s. C_j to vertex v_0 of m.s.c.s. C_i that passes through the m.s.c.s.'s C_{i_{s−1}}, ..., C_{i_1}.
ij of such m.s.c.s.'s (C j and C i included). This implies that there does not exist
a path with a length that is larger than or equal to # ij from a vertex in C j to a
vertex in C i . Hence, we have
A
O for all k # ij .
From now on we assume that l > 1, i there exists
a set fi and there exist indices
for
A ur vr+1 6= 0 for each r. So there exists an arc from vertex v r+1 of C i r+1
to vertex u r
of
for each r 2 f0; 1g. Select an arbitrary vertex u s of C i s
and an
arbitrary vertex v 0 of C i 0
Recall that the only Boolean zero matrix that is irreducible is the 1 by 1 Boolean zero
Now we distinguish between two cases:
. So in this case we could
say that there exists an empty path of length l r = 0 from vertex u r to vertex v r of
On the other hand, if
there exists a (possibly empty) path
of length l r n i r
1 from vertex u r to vertex v r of C i r
since
strongly
connected. If u r then this path is empty and has length 0.
So for each r 2 f0; there exists a (possible empty) path of length l r n i rfrom vertex u r to vertex v r of C i r
Clearly, we have ^
is irreducible and since
it follows from Theorem 3.3 that there exists a path of length k from
vertex
r to vertex v ~ r of C i ~ r
for any k k i ~ r i ~ r
s
r 6=~r
So if we have an integer k k ij then we can decompose it as
with ~ k 2 N. By Theorem 3.3 there exists a path of length k ~ t ~ t from u ~
r to v ~ r in
r
for each ~ k 2 N. This implies that there exists a path from vertex u s to vertex
v 0 of length k in G( ^
A). This path consists of the concatenation of paths of length l r
from vertex u r to vertex v r of C i r
for
r paths of length
1 from vertex v r+1 of C i r+1
to vertex u r of C i r
for a path of
length from vertex u ~ r to vertex v ~
r of C i ~ r
(See
Figure
3.3). This implies that
A
us is an arbitrary vertex of C j and since v 0 is an
arbitrary vertex of C i , this implies that ^
A
Example 3.7. Consider the following matrix:
This matrix is in Frobenius normal form and its block structure is indicated by the
vertical and horizontal lines. The precedence graph of ^
A is represented in Figure 3.4.
Using the notations and definitions of Theorem 3.5, we have
A 11 =4
A 12 =4 015 and ^
Furthermore,
We have
Note that ^
A 11 is a matrix of the form (3). So the smallest K 11 for which ^
A
k K 11 is equal to k 5. Furthermore, the smallest K 12 such that ^
A
Fig. 3.4. The precedence graph of the matrix Â of Example 3.7.
for all k K 12 is equal to k
A of this example all the bounds
ij that appear in Theorem 3.5 are tight.
It is easy to verify that for a matrix of the
all the bounds k ij that appear in Theorem 3.5 are tight. 3
Lemma 3.8. Let A ∈ B^{n×n} with c(A) = 1. Then we have A^⊗(k+1) = A^⊗k for all k ≥ (n − 1)² + 1.
Proof. If A is irreducible, then we have A^⊗(k+1) = A^⊗k for all k ≥ (n − 1)² + 1 by Theorem 3.3.
So from now on we assume that A is not irreducible. Let Â = P ⊗ A ⊗ P^T be the Frobenius normal form of A. Assume that Â is of the form (1) where the Â_ii's are square and irreducible. Let the numbers k_ij and n_i and the sets γ_i and S_ij be defined as in Theorem 3.5.
We have k
Let us now prove that k ij (n
We have n (n
have
lg and since l 1, we have
l
t6=s
l
t6=s
Hence,
l
l
t6=s
then we have
l
l
l
l
l
l
l
l
t6=s
l
l
t6=s
l
As a consequence, it follows from
Theorem 3.5 that ^
A
A
for all k (n
A
A
k for all k (n 1) 2 +1. Since
A
A
this implies that
A
A
for all k (n
3.3. Boolean matrices with a cyclicity that is larger than or equal to 1. Lemma 3.9. Let A ∈ B^{n×n} be an irreducible matrix with c(A) ≥ 2 and let i, j ∈ {1, ..., n}. Then there exists a (possibly empty) path P_ij from j to i in G(A) that passes through at least one vertex of each (elementary) circuit of G(A) and that has a length that is less than or equal to (n² − 1)/2.
Proof. Since the cyclicity of A is larger than or equal to 2, there are no loops in G(A). Hence, G(A) contains at least one circuit, and since A is irreducible, j has to belong to an elementary circuit of G(A). Since the length of any elementary circuit of G(A) is larger than or equal to 2, there exists a set S = {i_1, i_2, ..., i_m} ⊆ {1, ..., n} with m ≤ n/2 such that any (elementary) circuit of G(A) contains at least one vertex that belongs to S. Since G(A) is strongly connected, there exists a (possibly empty) path P_k with length l_k ≤ n − 1 from vertex i_k to the next vertex on the route j, i_1, ..., i_m, i, for each k. There exists a path P_ij from j to i that contains at least one vertex of each (elementary) circuit of G(A): this path consists of the concatenation of P_1, P_2, ..., P_m. If l_ij is the length of P_ij, then we have l_ij = Σ_k l_k ≤ (n² − 1)/2.
Remark 3.10. Note that we could have derived a tighter upper bound in Lemma 3.9. The upper bound of Lemma 3.9 will be used in the proof of Theorem 3.11. However, in that proof we shall also use Lemmas 2.2 and 2.3, which also yield upper bounds, and therefore we do not refine the upper bound of Lemma 3.9. ◊
Theorem 3.11. Let A 2 B nn be irreducible and let
(n
c
then we have
A
k+c
A
and
A
A
A
Proof. From Theorem 3.3 it follows that (8) holds if c is equal to 1. Furthermore, if the first part of (8) holds, then the second part also holds since A is irreducible.
From now on we assume that c > 1. Let i, j ∈ {1, ..., n}. Let C_1, C_2, ..., C_m be the elementary circuits of G(A). Let l_i be the length of C_i for i = 1, ..., m. Since A is irreducible, we have gcd(l_1, ..., l_m) = c, so there exist positive integers w_1, w_2, ..., w_m such that w_i = l_i/c for each i and such that gcd(w_1, ..., w_m) = 1.
elementary circuits have the same length. Hence, Since A is irreducible, both
have to belong to some elementary circuit. We may assume without loss of
generality that j belongs to C 1 . Since A is irreducible there exist paths from vertex
j to vertex i of G(A). Let P ij be the shortest (possibly empty) path from j to i and
ij be the length of this path. We have l ij n 1 (Note that l
to j. For any integer k 2 N there exists a path of length l ij + kc from j to i: this
path consists of k times C 1 followed by P ij . Hence
A
l 2 N with l n 1. Now there are two possibilities. If l can be written as
for some k 2 N, then we have
A
l
1. If l cannot be written as
any k 2 N, then it follows from Lemma 2.7 that there does not exist a path from j to
14 B. DE SCHUTTER AND B. DE MOOR
i and then we have
A
l
This implies that (8) holds if all the elementary circuits of G(A) have the same length.
From now on we assume that there exist at least two elementary circuits in G(A) that have different lengths.
Since A is irreducible, it follows from Lemma 3.9 that there exists a (possibly empty) path P_ij from vertex j to vertex i of G(A) with length l_ij ≤ (n² − 1)/2 that passes through at least one vertex of each elementary circuit of G(A). For each circuit C_k we select one vertex v_k that belongs to the path P_ij. Let l be an integer that can be written as l = l_ij + qc with q > g(w_1, ..., w_m). Then there exist nonnegative integers α_1, α_2, ..., α_m such that q = α_1 w_1 + α_2 w_2 + ⋯ + α_m w_m. As a consequence, we have l = l_ij + α_1 l_1 + α_2 l_2 + ⋯ + α_m l_m, and there exists a path of length l from j to i: this path consists of the concatenation of P_ij with, for each k, α_k times the circuit C_k inserted at the vertex v_k. Hence, (A^⊗l)_ij = 1.
Let us now determine an upper bound for g(w_1, w_2, ..., w_m).
Therefore, we may assume without loss of generality that all the w_i's are different and thus also that w_1 < w_2 < ⋯ < w_m. Since there are at least two elementary circuits in G(A) that have different lengths, we have m ≥ 2. We have w_k = l_k/c ≤ n/c for all k. Hence, w_m ≤ n/c and w_1 ≤ n/c − 1. As a consequence, we have by Lemma 2.2
g(w_1, w_2, ..., w_m) ≤ (w_1 − 1)(w_m − 1) − 1 ≤ (n/c − 2)(n/c − 1) − 1. (9)
If we define K = l_ij + c·(n/c − 2)(n/c − 1), then we have l_ij + c·(g(w_1, ..., w_m) + 1) ≤ K. So if we have an integer l that is larger than K, then it can either be written as l = l_ij + qc with q > g(w_1, ..., w_m), and then (A^⊗l)_ij = (A^⊗(l+c))_ij = 1, or l cannot be written as l_ij + pc for any p ∈ N, and then it follows from Lemma 2.7 that there does not exist a path of length l from j to i, i.e., (A^⊗l)_ij = (A^⊗(l+c))_ij = 0. It is easy to verify that K ≤ k_{n,c}. Hence, (8) also holds in this case.
Remark 3.12. In the proof of Theorem 3.11 we could also have used Lemma 2.3 to determine an upper bound for g(w_1, w_2, ..., w_m). We have gcd(w_1, ..., w_m) = 1 and thus also w_m ≥ 2. Furthermore, w_{m−1} ≤ n/c − 1. Hence,
g(w_1, w_2, ..., w_m) ≤ 2w_{m−1}⌊w_m/m⌋ − w_m. (10)
In the second part of the proof of Theorem 3.11 we have c ≥ 2. Since A is irreducible, it follows from Lemma 2.6 that c ≤ n. Hence, 1 ≤ n/c ≤ n/2. It is easy to verify that the upper bound of (10) is less than the upper bound of (9) if n/c < 3/2. However, if n/c < 3/2, then we would have w_{m−1} ≤ 1/2, which is not possible. This implies that for combinations of n and c for which there are at least two elementary circuits in G(A) with different lengths, the upper bound of (9) is less than or equal to the upper bound of (10). ◊
The Boolean sum of sequences is defined as follows. Consider m sequences {(g_i)_k}_{k=1}^∞ with (g_i)_k ∈ B for all i, k. The sequence {g_k}_{k=1}^∞ = ⊕_{i=1}^{m} {(g_i)_k}_{k=1}^∞ is defined by g_k = (g_1)_k ⊕ (g_2)_k ⊕ ⋯ ⊕ (g_m)_k for all k.
Lemma 3.13. Consider m sequences {(g_i)_k}_{k=1}^∞ with (g_i)_k ∈ B for all i, k. Suppose that for each i ∈ {1, ..., m} there exist integers K_i and c_i such that (g_i)_{k+c_i} = (g_i)_k for all k ≥ K_i. If K = max_i K_i and c is the least common multiple of the nonzero c_i's, then the sequence {g_k}_{k=1}^∞ satisfies g_{k+c} = g_k for all k ≥ K.
Proof.
First we assume that there exists an index i 2 mg such that c
1. Then we have (g i
From now on we assume that there does not exist any index i 2
that c
1.
exist positive integers w wm such that
m. Consider an integer k K. We have
Example 3.14. Consider the sequences
If we use the notation of Lemma 3.13 then we have c
Hence, We have
It is easy to verify that g k+6 = g k for all k 1. 3
Theorem 3.15. Let Â ∈ B^{n×n} be a matrix of the form (1) where the matrices Â_11, Â_22, ..., Â_ll are irreducible. Define sets γ_1, ..., γ_l such that Â_ij = (Â)_{γ_i γ_j} for all i, j, and let n_i = #γ_i and c_i = c(Â_ii) for all i. Define:
for all
Let
is dened as in (7)
with kn i
For each for each
c
if
lcmfc
for each
c
for all
Then we have for all
A
A
for all k
and
A
A
A
On
for all k k ij .
For all
A
On
for all k
Proof. Let C i be the m.s.c.s. of G( ^
that corresponds to ^
A ii for
Let i, j ∈ {1, ..., l}. In the proof of Theorem 3.5 it has already been proved that
holds if i > j, and that (13) and (14) hold if i
hold by Theorem 3.11 if
So from now on we assume that i <
that l > 1.
Select an arbitrary vertex u of C i and an arbitrary vertex v of C j .
there exists at least one set
which
ij is well dened. Note that k ij
there
do not exist paths from v to u of length n # ij that correspond to
. So from now
on we only consider sets
which
Let
we assume that
for at least one index i r 2
. Assume that
is given by
s g. Dene
ur vr+1 6= 0 for
g. Let P(
be the set of paths from v to u that pass through m.s.c.s. C i r
for
that enter C i r
at vertex u r for that exit from C i r
through
vertex v r for 3.3). Let the sequences f(g
and f(g
k=1 be dened by
(g
1 if there exists a path of length k that belongs to P(
(g
1 if there exists a path of length k that belongs to P(
for some pair (U;
g. Let us now show that the sequences f(g
k=1 and
f(g
(g
(g
(g
(g
Note that if (16) and (17) hold for each pair (U;
Therefore, we now show that (16) and (17) hold. Dene ~ u
for
s. We consider three cases:
Case A: c j ~
r
In this case we have c
1.
l r be the length of the shortest (possibly empty) path from vertex ~
u r to
vertex ~ v r of C jr for each r 2 f0; We have l r n jr 1 for each r.
r
is irreducible and since c j ~
r
it follows from Theorem 3.3 that
for any integer p (n j ~ r
exists a path of length p from vertex
~
r to vertex ~
r of C j ~ r
. If we also take into account that there are s arcs of
the form v r+1 ! u r for that for any
r 6=~r
there exists a path of length k that belongs to P(
us now show
that KA k ij . Let r 2 f0; it follows from the
denition of kn j r ;c j r
that
. Furthermore, if c
have
. Since c j ~ r
r
. Furthermore,
which
implies that (16) and (17) hold in this case.
Case B: c
and c t 6= 1 for all t 2
.
Assume that c j ~ r
with ~ r 2 f0;
sg.
l r be the length of the shortest (possibly empty) path from vertex ~
u r to
vertex ~
v r of C jr for each r 2 f0;
r
each r 6= ~ r. From Lemma 2.7 and from the proof of Theorem 3.11 it follows
that there exists an integer K ~
r with kn j ~
r
r
r
1 such
that there exist paths of length K ~ r
from vertex ~
r to vertex ~ v ~ r of C j ~ r
for any p 2 N, while there do not exist paths of length K ~
q from ~
r
to ~ v ~
r for any p 2 N and any q 2
r
1g. So if we dene
r 6=~r
r
then it follows from Lemma 2.7 that for any k KB either there exists a
path of length k
that belongs to P(
N, or there do
not exist paths of length k
that belong to P(
It
is easy to verify that KB k ij . Hence, (16) and (17) also hold in this case.
Case C: c jr 6= 1 for all r 2 f0;
for some a; b 2 f0;
From Lemma 2.7 and from the proof of Theorem 3.11 it follows that for each
there exists an integer K r with kn j r ;c j r
c jr 1 such that there exist paths of length K r c jr from ~ u r to ~ v r for each
while there do not exist paths of length K r +p c jr +q from ~ u r to ~
v r for
any p 2 N and for any q 2 1g. This implies that there exist
paths of length K 0
that belong to
P(
each choice of for each r. Dene
there exist positive integers
s such that c
for each r 2 f0; and such that
1. So for any integer q g(w
exist nonnegative integers
s such that
s .
we have w a 6= w b . Therefore, we may assume without loss
of generality that w 0 < w 2. We have
c
s
c
c
c
thus 2. Using a reasoning that is similar to the one used in the proof
of Theorem 3.11 and Remark 3.12 we can show that
c
c
So if we dene
c
c
then we have KC K
. Let k 2 N with k KC .
Now there are two possibilities:
If k can be written as K
with q g(w
we have
which implies that there exists a path of length k that belongs to P(
On the other hand, if k cannot be written as K
for any q 2 N
then it follows from Lemma 2.7 that there does not exist a path of length k
that belongs to P(
this implies that (16) and (17) also hold in this case.
If we consider all possible paths from vertex v to vertex u of length k 2 N with
then each of these paths corresponds to some set
A
is equal to 1 if and only if there exists a path of length k from v to u,
we have
A
uv
(g
Note that if c
then we have (g
1. Since each
sequence f(g
it follows from Lemma 3.13 that
A
uv
A
uv
for all k
Furthermore, since each sequence f(g
k=1 satises (19), we have
A
A
A
uv
Remark 3.16. Note that if … ≥ 2 then we do not have to consider … = 1 when we are determining S_ij. Hence, we could have defined S_ij as the set of maximal subsets {i_1, ..., i_s} of {1, ..., l} with the corresponding property. ◊
Let us now give an example in which the various sets and indices that appear in the formulation of Theorem 3.15 are illustrated.
Example 3.17. Consider the matrix
This matrix is in Frobenius normal form and its block structure is indicated by the
horizontal and vertical lines. The precedence graph of A is represented in Figure 3.5.
We have
A
Fig. 3.5. The precedence graph G(A) of the matrix A of Example 3.17. The subgraphs C_1, C_2, C_3 and C_4 are the m.s.c.s.'s of G(A).
A
A
k for all k 2.
We have
us now look at the sequence
A
. We have
with
and c
3. We have
and r
Note that we indeed have
A
A
14 for all k 9. 3
Lemma 3.18. Consider m positive integers c_1, c_2, ..., c_m. Let c = lcm(c_1, c_2, ..., c_m). Consider r non-empty subsets χ_1, χ_2, ..., χ_r of {1, ..., m}. Define d_i = lcm{c_t | t ∈ χ_i} for each i. If d = lcm(d_1, ..., d_r), then d is a divisor of c.
Proof. We may assume without loss of generality that c_i ≠ c_j for all i ≠ j. If d is a divisor of lcm{c_t | t ∈ χ_1 ∪ ⋯ ∪ χ_r}, then d also is a divisor of lcm(c_1, ..., c_m). Therefore, we may assume without loss of generality that χ_1 ∪ ⋯ ∪ χ_r = {1, ..., m}. If χ_i ⊆ χ_j then d_i is a divisor of d_j and then lcm(d_i, d_j) = d_j, which implies that χ_i is redundant and may be removed. If d_i = d_j with i ≠ j then one of the two sets is redundant and may be removed. It is easy to verify that if we remove all redundant sets, then the resulting number of sets χ_i is less than or equal to m. Hence, we may assume without loss of generality that r ≤ m and that d_i ≠ d_j for all i ≠ j.
can select indices l 1 , l 2 , . , l r such that l i 2 i for
is a divisor of c l i
for each i.
Since all c i 's are dierent we have
We also have
Since d i is a divisor of c l i
there exist integers w i 2 N such that
c l i
then we
have
Y
d
where
Y
c i is equal to 1 by denition. So
implies that d is a divisor of c.
Lemma 3.19. Let n ∈ N and let c ∈ {0, 1, ..., n}. If k_{n,c} is defined by (7) for c ≥ 2, then we have k_{n,c} ≤ (n − 1)(2n − 1) + 1 = 2n² − 3n + 2.
Proof. It is obvious that the lemma holds if c ≤ 1. So from now on we assume that c > 1.
us now show that f(c) (n
We have df
2. So f reaches a local minimum in
pand is decreasing if
pand increasing if c > n
pLet us rst consider the cases where n is equal to 2 or to 3. If then we have
does
not belong to the interval [2; 3] and then the maximal value of f in [2; 3] is equal to
From now on we assume that n is larger than or equal to 4. If n 4, then n
pbelongs
to the interval [2; n] and then the maximal value of f in [2; n] is equal to
Hence, k n;c (n
Theorem 3.20. Let A ∈ B^{n×n} and let c be the cyclicity of A. We have
A^⊗(k+c) = A^⊗k for all k ≥ 2n² − 3n + 2.
Proof. Let Â = P ⊗ A ⊗ P^T be the Frobenius normal form of A, where P is a permutation matrix. Assume that Â is a matrix of the form (1) with Â_ii irreducible, and with the quantities k_ij, c_ij and S_ij defined as in Theorem 3.15. Let us now show that k_ij ≤ 2n² − 3n + 2 for all i, j. It is easy to verify that this holds if S_ij = ∅, so from now on we assume that S_ij ≠ ∅. Hence,
c
Since we have kn i ;c i
from the proof of Lemma 3.8 that
Let us now show that
c
for each
;. From Lemma 2.6 it follows that
c t n t for each t 2
. Hence, 1 c
2c. From the proof of Lemma 3.19 it follows that f is
decreasing if c < nr ij
pand increasing if c > nr ij
then the maximum value of f in the interval [1;
is equal to 2n² − 3n + 2. Furthermore, c_ij is a divisor of c by Lemma 3.18. Hence, Â^⊗(k+c) = Â^⊗k for all k ≥ 2n² − 3n + 2. Since A^⊗k = P^T ⊗ Â^⊗k ⊗ P for all k ∈ N, this implies that A^⊗(k+c) = A^⊗k for all k ≥ 2n² − 3n + 2.
4. Applications and extensions.
4.1. Markov chains. It is often possible to represent the behavior of a physical system by describing all the different states the system can occupy and by specifying how the system moves from one state to another at each time step. If the state space of the system is discrete and if the future evolution of the system only depends on the current state of the system and not on past history, the system may be represented by a Markov chain. Markov chains can be used to describe a wide variety of systems and phenomena in domains such as diffusion processes, genetics, learning theory, sociology, economics, and so on [22].
A finite homogeneous Markov chain is a stochastic process with a finite number of states s_1, s_2, ..., s_n where the transition probability to go from one state to another state only depends on the current state and is independent of the time step. We define an n by n matrix P such that p_ij is equal to the probability that the next state is s_i given that the current state is s_j. Note that Σ_i p_ij = 1 for all j. Consider a sequence of vectors {π(k)}_{k=0}^∞ where π_i(k) is the probability that the system is in state s_i at time step k. If the initial probability vector π(0) is given, the evolution of the system is described by π(k + 1) = P π(k). Hence, π(k) = P^k π(0).
So if we consider the Boolean algebra ({0, p}, +, ×), where p stands for an arbitrary positive number, and if we define a matrix Ã ∈ {0, p}^{n×n} such that ã_ij = p if p_ij > 0 and ã_ij = 0 otherwise, then we can give the following interpretation to the Boolean matrix power Ã^⊗k: we can go from state s_j to state s_i in k steps if and only if (P^k)_ij > 0, or equivalently if (Ã^⊗k)_ij = p. As a consequence, the results of this paper can also be used to obtain upper bounds for the length of the transient behavior of a finite homogeneous Markov chain.
For more information on Markov chains and their applications the interested reader is referred to [2, 12, 20, 22] and the references therein.
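A sketch of this abstraction (my own names): replace every positive transition probability by the Boolean one and use the Boolean matrix power from §2.2 to answer k-step reachability questions.

def reachability_matrix(P_transition, k):
    """(Ã^⊗k)_ij = 1 iff state s_j can reach s_i in exactly k steps,
    where ã_ij = 1 iff p_ij > 0 (1 plays the role of the Boolean 'p')."""
    A = [[1 if p > 0 else 0 for p in row] for row in P_transition]
    return b_pow(A, k)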
4.2. Max-plus algebra. Our main motivation for studying sequences of consecutive powers of a matrix in a Boolean algebra lies in the max-plus-algebraic system theory for discrete event systems. Typical examples of discrete event systems are flexible manufacturing systems, telecommunication networks, parallel processing systems, traffic control systems and logistic systems. The class of the discrete event systems essentially consists of man-made systems that contain a finite number of resources (e.g., machines, communication channels or processors) that are shared by several users (e.g., product types, information packets or jobs) all of which contribute to the achievement of some common goal (e.g., the assembly of products, the end-to-end transmission of a set of information packets, or a parallel computation).
There are many modeling and analysis techniques for discrete event systems, such as queuing theory, (extended) state machines, max-plus algebra, formal languages, automata, temporal logic, generalized semi-Markov processes, Petri nets, perturbation analysis, computer simulation and so on (see [1, 6, 19, 23, 24] and the references cited therein). In general, models that describe the behavior of a discrete event system are nonlinear in conventional algebra. However, there is a class of discrete event systems, the max-linear discrete event systems, that can be described by a model that is "linear" in the max-plus algebra [1, 7, 8]. The model of a max-linear discrete event system can be characterized by a triple of matrices (A, B, C), which are called the system matrices of the model.
One of the open problems in the max-plus-algebraic system theory is the minimal realization problem, which consists in determining the system matrices of the model of a max-linear discrete event system starting from its "impulse response"⁴ such that the dimensions of the system matrices are as small as possible (see [1] for more information). In order to tackle the general minimal realization problem it is useful to first study a simplified version: the Boolean minimal realization problem, in which only models with Boolean system matrices are considered. The results of this paper on the length of the transient part of the sequence of consecutive powers of a matrix in a Boolean algebra can be used to obtain some results for the Boolean minimal realization problem in the max-plus-algebraic system theory for discrete event systems [10]: they can be used to obtain a lower bound for the minimal system order (i.e., the smallest possible size of the system matrix A) and to prove that the Boolean minimal realization problem in the max-plus algebra is decidable (and can be solved in a time that is bounded from above by a function that is exponential in the minimal system order).
Both Boolean algebras and the max-plus algebra are special cases of a dioid (i.e., an idempotent semiring) [1, 16]. For applications of dioids in graph theory, languages and automata theory the interested reader is referred to [14, 15, 16].
4.3. Extensions. In this paper we have restricted ourselves to Boolean algebras. In this section we give some examples that illustrate some of the phenomena that could occur when we want to extend our results to more general algebraic structures. In our examples we shall use the max-plus algebra (R ∪ {−∞}, max, +), but for other extensions of Boolean algebras similar examples can be constructed.
In contrast to the Boolean case (cf. Theorem 3.20), the sequence of consecutive powers of a matrix in a more general algebraic structure does not always reach a stationary or cyclic regime after a finite number of terms, as is shown by the following example.
Example 4.1. Consider the matrix A = …. Since the kth max-plus-algebraic power of A is given by A^⊗k = … for k ∈ N_0, the sequence {A^⊗k}_{k=1}^∞ does not reach a stationary or cyclic regime in a finite number of steps. ◊
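For illustration, here is a generic max-plus matrix product in Python (⊕ = max, ⊗ = +, with −∞ as the zero element); the 2 × 2 matrix below is my own example in the spirit of Example 4.1, not the (omitted) matrix of the paper.

NEG_INF = float("-inf")   # the max-plus zero element

def mp_mul(A, B):
    """(A ⊗ B)_ij = max_k (a_ik + b_kj) in the max-plus algebra."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# A reducible matrix with a circuit of positive weight feeding another
# vertex: the (2,1) entry of A^⊗k equals k - 1, so the powers never
# become cyclic.
A = [[0, NEG_INF],
     [0, 1]]
P = A
for _ in range(4):
    P = mp_mul(A, P)
assert P[1][0] == 4       # P = A^⊗5; the entry keeps growing with k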
Note that the matrix A of Example 4.1 is not irreducible. However, if a matrix is irreducible then it can be shown [1, 7, 13] that the sequence of consecutive max-plus-algebraic powers of the given matrix always reaches a cyclic regime of the form (2) after a finite number of terms. However, even if the sequence of consecutive powers reaches a stationary regime, then in general the length of the transient part will not only depend on the size and the cyclicity of the matrix, but also on the range and the resolution (i.e., on the size of the representation) of the finite elements of the matrix, as is shown by the following examples.
Example 4.2. Let N ∈ N and consider the matrix A(N) = …. The matrix A(N) is irreducible and has cyclicity 1, and its λ-value⁵ is equal to 0. The kth max-plus-algebraic power of A(N) contains the entries max(−k, −N) and N for each k ∈ N_0. This implies that the smallest integer k_0 for which (2) holds is given by k_0 = N, which depends on the range of the finite entries of A(N). ◊
Example 4.3. Let ε > 0 and consider the matrix A(ε) = …. This matrix is irreducible, has cyclicity 1, and its λ-value is equal to 0. Since the kth max-plus-algebraic power of A(ε) is given by A^⊗k(ε) = …, the smallest integer k_0 for which (2) holds grows like 1/ε. So this example, which has been inspired by the example on p. 152 of [1], shows that in general the length of the transient part of the sequence {A^⊗k}_{k=1}^∞ depends on the resolution of the finite entries of A. ◊
5. Conclusions. In this paper we have studied the ultimate behavior of the sequence of consecutive powers of a matrix in a Boolean algebra, and we have derived some upper bounds for the length of the transient part of this sequence. The results that have been derived in this paper can be used in the analysis of the transient behavior of Markov chains and in the max-plus-algebraic system theory for discrete event systems.
Topics for future research are the derivation of tighter upper bounds for the length of the transient part of the sequence of consecutive powers of a matrix in a Boolean algebra, and the extension of our results to more general algebraic structures such as the max-plus algebra.
Acknowledgments. The authors want to thank the anonymous reviewers for their useful comments and remarks, and for pointing out the connection with Markov chains.
Bart De Moor is a research associate with the F.W.O. (Fund for Scientific Research - Flanders). This research was sponsored by the Concerted Action Project of the Flemish Community, entitled "Model-based Information Processing Systems" (GOA-MIPS), by the Belgian program on interuniversity attraction poles (IUAP P4-02 and IUAP P4-24), and by the ALAPEDES project of the European Community Training and Mobility of Researchers Program.
⁵ For methods to compute the number λ that appears in Theorem 2.5 for a max-plus-algebraic matrix, the reader is referred to [1, 3, 7, 21].
--R
Synchronization and Linearity
Nonnegative Matrices in the Mathematical Sciences
The power algorithm in max algebra
On a problem of partitions
Combinatorial Matrix Theory
Introduction to the modelling
A linear-system-theoretic view of discrete-event processes and its use for performance evaluation in manufacturing
On the boolean minimal realization problem in the max-plus algebra
The Theory of Matrices
On rational series in one variable over certain dioids
Eigenvalues and eigenvectors in semimodules and their interpretation in graph theory
A graph theoretic approach to matrix inversion by partitioning
Discrete Event Dynamic Systems: Analyzing Complexity and Performance in the Modern World
Theory and Applications
A characterization of the minimum cycle mean in a digraph
Petri Net Theory and the Modeling of Systems
Matrix Iterative Analysis
On periodicity analysis and eigen-problem of matrix in max- algebra
Introduction to graph theory
--TR
--CTR
Martin Kutz, The complexity of Boolean matrix root computation, Theoretical Computer Science, v.325 n.3, p.373-390, 6 October 2004
Vincent J. Carey, Ontology concepts and tools for statistical genomics, Journal of Multivariate Analysis, v.90 n.1, p.213-228, July 2004 | transient behavior;max-plus algebra;boolean matrices;boolean algebra;markov chains |
334141 | The Compactness of Interval Routing. | The compactness of a graph measures the space complexity of its shortest path routing tables. Each outgoing edge of a node x is assigned a (pairwise disjoint) set of addresses, such that the unique outgoing edge containing the address of a node y is the first edge of a shortest path from x to y. The complexity measure used in the context of interval routing is the minimum number of intervals of consecutive addresses needed to represent each such set, minimized over all possible choices of addresses and all choices of shortest paths. This paper establishes asymptotically tight bounds of n/4 on the compactness of an n-node graph. More specifically, it is shown that every n-node graph has compactness at most n/4+o(n), and conversely, there exists an n-node graph whose compactness is n/4 - o(n). Both bounds improve upon known results. (A preliminary version of the lower bound has been partially published in Proceedings of the 22nd International Symposium on Mathematical Foundations of Computer Science, Lecture Notes in Comput. Sci. 1300, pp. 259--268, 1997.) |
1 Introduction
An interval routing scheme is a way of implementing routing schemes on arbitrary networks. It is based on representing the routing table stored at each node in a
compact manner, by grouping the set of destination addresses that use the same
output port into intervals of consecutive addresses. A possible way of representing
such a scheme is to use a connected undirected labeled graph, providing the
underlying topology of the network. The addresses are assigned to the nodes,
and the sets of destination addresses are assigned to each endpoint of the edges.
As originally introduced in [17], the scheme required each set of destinations to
consist of a single interval. This scheme was subsequently generalized in [18] to
allow more than one interval per edge.
Formally, consider an undirected n-node graph G = (V, E). Since G is undirected, each edge {u, v} ∈ E between u and v can be viewed as two arcs, i.e., two ordered pairs, (u, v) and (v, u). The graph G is said to support an interval routing scheme (IRS for short) if there exists a labeling L of V, which labels every node by a unique integer taken from {1, ..., n}, and a labeling I of the outgoing edges, which labels the exit endpoint of each arc of E by a subset of {1, ..., n}, such that between any pair of nodes x ≠ y there exists a path x = u_0, u_1, ..., u_p = y satisfying L(y) ∈ I(u_i, u_{i+1}) for every i ∈ {0, ..., p − 1}. The resulting routing scheme, denoted R = (L, I), is called a k-interval routing scheme (k-IRS for short) if for every arc (u, v), the collection of labels I(u, v) assigned to it is composed of at most k intervals of consecutive integers (1 and n being considered as consecutive).
The standard definition of k-IRS assumes a single routing path between any two nodes. It therefore forces any two incident arcs e ≠ e′ to have disjoint labels, I(e) ∩ I(e′) = ∅. Here we assume that a given destination may belong to many labels of different arcs incident to a same node. This freedom allows us to implement some adaptive routing schemes, and to code for example the full shortest path information, as does the boolean routing scheme [4]. Our upper and lower bounds apply also to the recent extension of interval routing known as multi-dimensional interval routing [3].
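To make the mechanics of a k-IRS concrete, here is a small Python sketch (not from the paper) of the forwarding decision; the label set I(7, 6) below is an invented placeholder, and only I(7, 1) is taken from Figure 1.

def in_interval(x, interval, n):
    """Membership in a cyclic interval [a, b] over {1, ..., n}
    (n and 1 are considered consecutive)."""
    a, b = interval
    return a <= x <= b if a <= b else (x >= a or x <= b)

def next_hops(arc_labels, dest, n):
    """arc_labels: {neighbor: list of intervals}.  Returns every neighbor
    whose label set contains dest (several are allowed in the adaptive
    variant used in this paper)."""
    return [v for v, ivs in arc_labels.items()
            if any(in_interval(dest, iv, n) for iv in ivs)]

# Node 7 of Figure 1: I(7,1) = [1,2] ∪ [5], and, say, I(7,6) = [3,4] ∪ [6]:
labels = {1: [(1, 2), (5, 5)], 6: [(3, 4), (6, 6)]}
print(next_hops(labels, 5, 7))   # -> [1]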
To measure the space efficiency of a given IRS, we use the compactness mea-
sure, defined as follows. The compactness of a graph G, denoted by IRS(G), is
the smallest integer k such that G supports a k-IRS of single shortest paths, that
is, a k-IRS that provides only one shortest path between any pair of nodes.
If the degree of every node in G is bounded by d, then a k-IRS for G is
required to store at most O(dk log n) bits of information per node (as each set
I(e) can be coded using 2k log n bits 2 ), and O(km log n) bits in total, where m
² A more accurate coding allows one to use only O(dk log(n/k)) bits per node, cf. [7].
is the total number of edges of the graph. The compactness of a graph is an important parameter for the general study of compact routing, whose goal is to design distributed routing algorithms with space-efficient data structures for each router.
Figure 1 shows an example of a 2-IRS on a graph G. For instance, arc (7, 1) is assigned two intervals: I(7, 1) = [1, 2] ∪ [5]. Whereas it is quite easy to verify that this labeling is a single shortest path 2-IRS for G, it is more difficult to check whether G has compactness 1. Actually, in [9] it is shown that IRS(G) = 2. Recently, it was proven in [1] that for general graphs, the problem of deciding whether IRS(G) = 1 is NP-complete.
Figure 1: A 2-IRS for a graph G.
The compactness of many graph classes has been studied. Its value is 1 for trees [17], outerplanar graphs [6], hypercubes and meshes [18], r-partite graphs [12], interval graphs [16], and unit-circular graphs [5]. It is 2 for tori [18], at most 3 for 2-trees [16], and at most 2√n for chordal rings on n nodes [15] (see [7] for a survey of the recent state of the art). Finally, it has been proved that compactness Θ(n) might be required [9].
The next section presents the results of the paper. In Section 3 we prove that n/4 + o(n) intervals are always sufficient, and in Section 4 that n/4 − o(n) intervals might be required. We conclude in Section 5.
2 The Results
Clearly, the compactness of a graph cannot exceed n/2, since any set I(e) ⊆ {1, ..., n} containing more than n/2 integers must contain at least two consecutive integers, which can be merged into a same interval. On the other hand, it has been proved in [9] that for every n ≥ 1 there exists an n-node graph of compactness at least n/12, and n/8 for every n power of 2.
In this paper we close this gap, by showing that n/4 is asymptotically a tight bound for the compactness of n-node graphs. More specifically:
Theorem 1 Every n-node graph G satisfies IRS(G) ≤ n/4 + o(n).
Theorem 2 For every sufficiently large integer n, there exists an n-node graph G such that IRS(G) ≥ n/4 − o(n). Moreover, G has diameter 2, maximum degree at most n/2, and fewer than … edges, and every single k-IRS on G with k < IRS(G) contains some routing path of length at least 3.
We later show that both the upper and the lower bounds hold even if the single and/or shortest path assumptions are relaxed.
Theorem 1 directly improves the results of [5, Theorem 11] and [3, Theorem 2], and also a result of [2, Theorem 9].
The lower bound is proved using Kolmogorov complexity. As a result, only the existence of such a worst-case graph G can be proved. Moreover, the bound is only asymptotic, since Kolmogorov complexity is defined up to a constant. This is in contrast to the technique of [9], which gave explicit recursive constructions of worst-case graphs of compactness n/12, for every n ≥ 1.
3 The Upper Bound
The basic idea for the upper bound, and partially for the lower bound, is to give a boolean matrix representation M(R) for a given k-IRS R = (L, I) on a graph G = (V, E). Recall that for each arc e, I(e) is the set of addresses that labels the arc e. Let u_e be the characteristic sequence of the subset I(e) in {1, ..., n}, namely, the ith element of u_e is 1 if i ∈ I(e), and 0 otherwise. It is easy to see that there is a one-to-one correspondence between the intervals of I(e) and the blocks of consecutive ones in u_e. The number of blocks of consecutive ones in u_e can be seen as the occurrence number of 01-sequences³ in the binary vector u_e. By collecting all the u_e sequences in order to form a boolean matrix M(R) of dimensions n × 2|E|, the problem of finding a node-labeling L of G such that each set I(e) is composed of at most k intervals is equivalent to the problem of finding a row permutation of M(R) such that every column has at most k blocks of consecutive ones.
Throughout this section, M denotes a boolean matrix of n rows and p columns. For every column u of M, and for every row permutation π, we denote by c(u, π) the number of blocks of consecutive ones in the column u under π. For every matrix M, define the compactness of M, denoted comp(M), as the smallest integer k such that there exists a row permutation π of M satisfying, for every column u of M, c(u, π) ≤ k.
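This reduction is easy to operationalize. A sketch (helper names mine) that evaluates c(u, π) as the number of maximal blocks of consecutive ones in a permuted column, and the worst column over the whole matrix for a candidate labeling:

def blocks_of_ones(column, perm):
    """c(u, π): number of maximal blocks of consecutive 1's in the
    column u when its rows are reordered by the permutation perm."""
    u = [column[i] for i in perm]
    return sum(1 for i, x in enumerate(u)
               if x == 1 and (i == 0 or u[i - 1] == 0))

def compactness_under(M, perm):
    """max over columns of c(u, π): the number of intervals that the
    labeling π forces on the worst arc."""
    return max(blocks_of_ones(list(col), perm) for col in zip(*M))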
The following theorem is the key to the proof of Theorem 1.
Theorem 3 Let M be an n × p boolean matrix with p ≥ 1, let u be a column of M, and let A_u(k) = {π | c(u, π) ≤ k} be the set of row permutations of M that provide at most k blocks of consecutive ones for the column u. Then for every integer k in the range √(2n ln(pn)) ≤ k ≤ n/2,
|A_u(k)| ≥ n! (1 − 1/(pn)).
Proof. Let us consider a column u of M and an integer k. Let a (respectively, b) be the number of 1's (resp., 0's) of u. Clearly, if a < k or b < k, the theorem holds because in this case A_u(k) contains every row permutation of M. Hence suppose a, b ≥ k, with a + b = n. There are a! permutations of the rows {x_1, ..., x_a} containing 1, and b! permutations of the rows {y_1, ..., y_b} containing 0 in u, and each such pair of permutations creates a different and disjoint set of permutations in A_u(k). Moreover, each of the a! permutations needs to be broken into k non-empty blocks, which can be done in C(a−1, k−1) ways, and similarly for the b! permutations of the rows {y_1, ..., y_b}. Each partitioned pair can be merged, alternating a block of 1's and a block of 0's, in order to yield a permutation in A_u(k). Overall, |A_u(k)| ≥ a! b! C(a−1, k−1) C(b−1, k−1), and we need to show that
a! b! C(a−1, k−1) C(b−1, k−1) ≥ n! (1 − 1/(pn)). (1)
Using Formula (9.91) of [11] on page 481, derived from Stirling's formula, we have for every n ≥ 1
√(2πn) (n/e)^n e^{1/(12n+1)} ≤ n! ≤ √(2πn) (n/e)^n e^{1/(12n)}. (2)
a
' a
s
a
This bound cannot apply for us first handle the extremal cases.
3.1 Inequality (1) holds for
n=2.
Proof. In both cases assumed in the claim, Inequality (1) is equivalent to
pn 3=2
The ratio (n Indeed, in this range
It is thus sufficient to prove Inequality (4) for n=2, in which case it becomes
pn
Using Stirling's bound, (n=2)! 2 ! (n=2) n e \Gamman simplifying with the lower
bound of Inequality (2), we get that to prove Inequality (5) it suffices to prove
This last inequality is satisfied for every n - 1, since
is equivalent to n=2 ! n(ln which is trivial because
This completes the proof of Claim 3.1. 2
For the remainder of the proof, let us assume that k ! a; b. Therefore, it is
possible to apply the bound of Inequality (3), which gives
a!
a
! a a
' a
ab e \Gamman fl 4
3.2 For every integers k, a, b and n such that a and a
ab
Proof. Set
ab
f(a)
Observing that ab - (a suffices to prove that
f(a)
Let us lower bound the term k
f(a). Noting that f(a) is symmetric around the
point n=2, let us assume without loss of generality that a - n=2. In this range
in the desired range f(a) attains its minimum where a
is minimum, and thus k
which is of the same sign that n \Gamma 3k. Hence in
the range first decreases until its minimum at the point n=3,
then increases between n=3 and n=2. So, f 2 Therefore
f(a)
which completes the proof of Claim 3.2. 2
In view of Claim 3.2, Inequality (6) becomes
a!
a
! a a
' a
e \Gamman fl 4 3
Simplifying and applying the lower bound of Inequality (2), we obtain that to
prove Inequality (1) it suffices to show:
a a
' a
pn
Noting that 16
remains to prove that
Assume that k
(pn). The case b - a is dual, and at most doubles the number of
permutations (which is taken in account in the removing of the multiplicative
constant 5.57 in Inequality (7)).
To establish Inequality (7) and complete the proof, it remains only to show the
following lemma.
Lemma 3.3 f(a) ? 0 in the range k
Proof. Write
remains to
prove that f 2 (a) ? 0 in the range k n=2. The first derivative of f 2 is
a
in the range k
Proof. It suffices to show that in the range specified in the claim,
a
or
This is shown by noting that f 3 (a) is increasing in this range, hence its maximum
is attained at the point To show that f 3 (a)
is increasing, we need to show that f 3
range. This is shown by noting that f 3
0 (a) is decreasing in this range, hence its
minimum is attained at the point
To show that f 3
0 (a) is decreasing, we need to show that
this range, which is trivial since a - n=2. This completes
the proof of Claim 3.4. 2
It follows from Claim 3.4 that f_2(a) is decreasing in this range, and hence its minimum is attained at a = n/2. Hence in this range,

f_2(a) \ge f_2(n/2).

Consequently, it remains to prove that f_2(n/2) > 0 in the desired range. Simplifying, we need to show that k^{2k}(n - 2k)^{\cdots} \cdots in the range \cdots, i.e., we need to prove that \cdots, or that

g(\alpha) := 2\alpha\log\alpha + \cdots \;\ge\; \frac{\log(pn)}{n}\,\cdots

in the range k_0/n < \alpha < 1/2, where \alpha = k/n (the function log represents logarithm to base 2).
It remains to prove the following claim.

Claim 3.5 g(\alpha) \ge \cdots\,\frac{\log(pn)}{n} in the range k_0/n < \alpha < 1/2.

Proof. Note that k_0/n \ge 1/4. So, if \cdots, the range for \alpha is not empty. Moreover, \cdots.

In the range 1/4 < \alpha < 1/2, let us show that g'''(\alpha) > 0. This happens \cdots, which is trivial since \alpha > 1/4. Moreover, g''(\cdots) \cdots. Thus we have the following bound for g(\alpha):

g(\alpha) \ge \cdots.

So, it suffices to take \alpha such that

\cdots\,\sqrt{\frac{\log(pn)}{n}}\,\cdots

to complete the proof of Claim 3.5. □
This also completes the proof of Lemma 3.3, and subsequently of Theorem 3. □
Corollary 3.6 Let M be an n \times p boolean matrix. Then there exists a row permutation \pi of M such that c(u,\pi) < n/4 + \sqrt{2n\ln(pn)} for every column u of M.

Proof. We need to show that there exists a row permutation \pi of M such that c(u,\pi) < n/4 + \sqrt{2n\ln(pn)} for every column u of M. Let us set k_0 = n/4 + \sqrt{2n\ln(pn)}. A permutation \pi is said to be "bad" if there exists a column u of M such that c(u,\pi) > k_0. Let B_u be the set of bad permutations for the column u, i.e.,

B_u = \bigcup_{k > k_0} A_u(k).

The entire set of bad permutations for M is B = \bigcup_u B_u, where the union is taken over all the p columns of M. Theorem 3 implies that for every u,

|B_u| < n \cdot \frac{n!}{pn} = \frac{n!}{p},

because B_u is the union of at most n of the sets A_u(k). It follows that |B| < n!. Therefore, there is at least one "good" permutation for the rows of M, i.e., a permutation providing at most \lfloor k_0 \rfloor blocks of consecutive ones for each of the columns. We conclude by remarking that \lfloor k_0 \rfloor < k_0, since \ln(pn) cannot be an integer for integer pn > 1. □
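The "bad permutation" test is directly computable. The sketch below is our own illustration, not the paper's: it checks whether a row permutation is bad and exhaustively counts the good ones on a toy matrix. The threshold k0 follows the expression reconstructed above and should be treated as an assumption; for tiny n it is generous, so most permutations come out good.

    import math
    from itertools import permutations

    def blocks_of_ones(col):
        return sum(1 for i, v in enumerate(col) if v == 1 and (i == 0 or col[i - 1] == 0))

    def is_bad(M, pi, k0):
        """True if some column of M shows more than k0 blocks of ones
        once the rows are reordered by pi (i.e., c(u, pi) > k0)."""
        n, p = len(M), len(M[0])
        return any(
            blocks_of_ones([M[pi[i]][j] for i in range(n)]) > k0
            for j in range(p)
        )

    M = [[1, 0], [0, 1], [1, 1], [0, 0]]          # toy matrix, n = 4, p = 2
    n, p = len(M), len(M[0])
    k0 = n / 4 + math.sqrt(2 * n * math.log(p * n))  # assumed threshold shape
    good = [pi for pi in permutations(range(n)) if not is_bad(M, pi, k0)]
    print(len(good), "good permutations out of", math.factorial(n))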
Proof of Theorem 1: Let us consider any node labeling L of V, and any routing function R on G, e.g., a single shortest path routing function. Form the n \times p boolean matrix M(R) as explained earlier. By Corollary 3.6 (which is clearly applicable here), there exists a row permutation \pi such that c(u,\pi) < n/4 + \sqrt{2n\ln(pn)} for every column u of M. Permute the labeling of the nodes of V according to \pi, to obtain a labeling L' such that the resulting interval routing scheme R' is a q-IRS for q < n/4 + \sqrt{2n\ln(pn)}; namely, R' has fewer than q intervals on each arc. Let us show that only p \le 3n arcs have to be considered.
In the case of a single IRS, each destination is assigned to a unique set I(e) in each node. For each node of degree three or less, we consider all its outgoing edges. Consider a node x of degree greater than three, and let I, J, K be the three largest-cardinality sets assigned to outgoing edges of x. Assume that the nodes are relabeled using the permutation \pi in such a way that the sets I, J, K are composed respectively of i, j, and k intervals. We remark that i + j + k \le 3n/4 + o(n) by Corollary 3.6. Hence all the other sets share at most n/4 intervals, and do not need to be considered.
We complete the proof by plugging in Inequality (8). □

Remark. The parameter p of Inequality (8) represents the total number of arcs we are required to consider. For graphs with fewer edges one can choose p equal to the number of arcs, which is better than 3n only for graphs of average degree at most 3. Note that there exist some 3-regular graphs of compactness \Theta(n) [10].
Here we give another application of Theorem 3.

Corollary 3.7 Let M be an n \times p boolean matrix, and let \pi be an arbitrary row permutation of M. With probability at least 1 - 1/n, c(u,\pi) < n/4 + \sqrt{2n\ln(pn^2)} for every column u of M.

Proof. Let M be an n \times p boolean matrix with p \ge 1. Build from M a matrix M' composed of all the p columns of M and completed by (n - 1)p other columns, each filled up with 0's. M' has dimensions n \times pn. Clearly, the set of "bad" permutations for M' and M is the same. The total set of bad permutations for M' is

B = \bigcup_u \bigcup_{k > k_0'} A_u(k),

where the union is taken over all the pn columns u of M', and k_0' = n/4 + \sqrt{2n\ln(pn^2)}. Theorem 3 implies that |B| < n!/n, noting that pn < e^{n/2}/n.

We conclude that the number of "good" permutations for M' (hence for M), i.e., those providing at most \lfloor k_0' \rfloor blocks of consecutive ones for all the columns, is at least (1 - 1/n)\,n!, which is a fraction 1 - 1/n of all the row permutations of M. The proof is completed by remarking that \lfloor k_0' \rfloor < k_0', for every integer pn^2 > 1. □
Therefore, to have a labeling with fewer than n/4 + O(\sqrt{n\log n}) intervals on all the edges of G, it suffices to fix a node labeling and a routing function on G, and then to randomly permute the n labels of the nodes by choosing a random permutation \pi of \{1, \ldots, n\}.
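In algorithmic form, this is a rejection-sampling loop around the block-count test. The sketch below is our illustration; the threshold k0 mirrors the expression reconstructed in Corollary 3.7 and is an assumption, as is the hypothetical retry cap.

    import math, random

    def blocks_of_ones(col):
        return sum(1 for i, v in enumerate(col) if v == 1 and (i == 0 or col[i - 1] == 0))

    def relabel(M, tries=100):
        """Randomly permute row labels until every column of M shows at
        most floor(k0) blocks of consecutive ones (succeeds w.h.p.)."""
        n, p = len(M), len(M[0])
        k0 = n / 4 + math.sqrt(2 * n * math.log(p * n * n))  # reconstructed k0'
        for _ in range(tries):
            pi = random.sample(range(n), n)
            cols = ([M[pi[i]][j] for i in range(n)] for j in range(p))
            if all(blocks_of_ones(c) <= math.floor(k0) for c in cols):
                return pi
        return None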
Note that the previous algorithm applies not only to single shortest path routing schemes, but also to any routing scheme implementable by interval routing schemes. Thus for every IRS on every graph we can relabel the nodes in order to have at most n/4 + O(\sqrt{n\log n}) intervals per arc. It is still unknown whether there exists a polynomial time deterministic IRS construction algorithm that guarantees at most n/4 + o(n) intervals per edge.
We do not know whether the upper bound is reached for certain graphs. However, it is well known that some small graphs have compactness strictly greater than n/4. In [9], it is shown that the example depicted in Figure 1, with 7 nodes and 8 edges, has compactness 2, whereas all graphs of order at most 6 have compactness 1. Note also that the compactness of the Petersen graph is 3, whereas its order is 10 and its size 15.
4 The Lower Bound
The lower bound idea is based on a representation similar to the one used in the upper bound, namely, a boolean matrix M representing the k-IRS on G. However, this time we need to show that no row permutation of M yields fewer than k blocks of consecutive ones on all the columns. Furthermore, this must be shown for every choice of shortest routing paths. For instance, every grid has compactness 1, using the standard node labeling and single-bend YX-routing paths; clearly, a different choice of shortest routing paths would increase the number of intervals per edge. That is why we use smaller matrices, say of dimensions |W| \times |A|, obtained by considering only a subset of nodes, W, and a subset of arcs, A, such that the shortest paths between the tails of the arcs of A and the nodes of W are all unique.
Our worst-case graph construction is a function of a boolean matrix M, denoted G_M. For every p \times q boolean matrix M, define the graph G_M as follows. For every i, 1 \le i \le p, associate with the ith row of M a vertex v_i. For every j, 1 \le j \le q, associate with the jth column of M a pair of vertices a_j and b_j, connected by an edge. In addition, for every i, j such that m_{i,j} = 1, we add to G_M an edge connecting v_i to a_j, and otherwise we connect v_i to b_j. Note that the graph obtained from G_M by contracting the edges (a_j, b_j) is a complete bipartite graph K_{p,q}. It is easy to see that the shortest path from any a_j to any v_i is unique, and is determined by the entry m_{i,j} of M.
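The construction of G_M is directly implementable. The following sketch is ours (not from the paper); it returns an adjacency list, using tuples ('v', i), ('a', j), ('b', j) as hypothetical vertex names.

    def build_GM(M):
        """Adjacency list of G_M for a p x q boolean matrix M, given as a
        list of p rows. Each column j contributes the edge (a_j, b_j);
        v_i joins a_j when M[i][j] == 1, and b_j otherwise."""
        p, q = len(M), len(M[0])
        adj = {('v', i): set() for i in range(p)}
        for j in range(q):
            adj[('a', j)] = {('b', j)}
            adj[('b', j)] = {('a', j)}
            for i in range(p):
                t = ('a', j) if M[i][j] == 1 else ('b', j)
                adj[('v', i)].add(t)
                adj[t].add(('v', i))
        return adj

    adj = build_GM([[1, 0], [0, 1], [1, 1], [0, 0]])
    print(len(adj), "vertices")   # p + 2q = 8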
For integers p, q, let \mathcal{M} be the collection of p \times q boolean matrices having \lfloor p/2 \rfloor 1-entries per column. Let \mathcal{M}_1 be the subset of matrices of \mathcal{M} such that all the rows are pairwise non-complementing, and let \mathcal{M}_2 be the subset of matrices of \mathcal{M} such that for every pair of columns, the 2 \times p matrix composed of the pair of columns contains the sub-matrix⁴ [1 0; 0 1], up to column permutation. We next use a direct consequence of a result proved recently in [8]. In the following, f \sim g means that f/g tends to 1.
Lemma 4.1 (Gavoille, Gengler [8]) Let p, q be two sufficiently large integers. Then |\mathcal{M}_1 \cap \mathcal{M}_2| \sim |\mathcal{M}|.

Throughout the remainder of the paper, we set \mathcal{M}' = \mathcal{M}_1 \cap \mathcal{M}_2. We will see later that the graphs G_M built from the matrices M \in \mathcal{M}' have diameter exactly 2. Furthermore, almost all matrices are in \mathcal{M}'.

We will see that the compactness of M is a lower bound on the compactness of G_M. Here we give a lower bound on the compactness of matrices of \mathcal{M}'.
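Recall that the compactness of a matrix is the minimum, over row permutations, of the maximum number of blocks of consecutive ones over its columns. A brute-force evaluation, feasible only for tiny matrices, can serve as a reference point; the sketch below is our own illustration.

    from itertools import permutations

    def blocks_of_ones(col):
        return sum(1 for i, v in enumerate(col) if v == 1 and (i == 0 or col[i - 1] == 0))

    def compactness(M):
        """min over row permutations of the max block count over columns."""
        n, p = len(M), len(M[0])
        return min(
            max(blocks_of_ones([M[pi[i]][j] for i in range(n)]) for j in range(p))
            for pi in permutations(range(n))
        )

    print(compactness([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]]))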
Lemma 4.2 For all sufficiently large integers p, q such that 3\log p \le q, there exists a p \times q boolean matrix M of \mathcal{M}' of compactness at least p/4 - O(p\sqrt{\log p / q}).

Proof. We use a counting argument which can be formalized using Kolmogorov complexity (see [14] for an introduction). Basically, the Kolmogorov complexity of an individual object X is the length (in bits) of the smallest program, written in a fixed programming language, which prints X and halts. A simple counting argument allows us to argue that no program of length less than K can print a certain X_0 taken from a set of more than 2^K elements.
Let us begin by showing that the claim of the lemma holds for some matrices of \mathcal{M}. For every M \in \mathcal{M}, we define cl(M) to be the subset of the matrices of \mathcal{M} obtained by row permutation of M. We claim that there exists a matrix M_0 \in \mathcal{M} such that all the matrices of cl(M_0) have Kolmogorov complexity at least \log|\mathcal{M}| - \log(p!) - O(\log p). Indeed, suppose on the contrary that every class cl(M_0) contains a matrix M_0' of lower complexity. Then any matrix M_0 \in cl(M_0') may be described by an ordered pair (i_0, M_0'), where i_0 is the index of the row permutation of M_0' leading to M_0. Such an integer can be coded, in a self-delimiting way, by \log(p!) + O(\log p) bits. (2\lceil\log p\rceil bits are sufficient to describe p, and thus the length of any i_0 \le p!, in a self-delimiting way.) Hence the Kolmogorov
4 A is a sub-matrix of B if A can be obtained from B by removing some columns and rows of B.
complexity of M_0 is at most C + \log(p!) + O(\log p) < \log|\mathcal{M}|, where C is the complexity of M_0'. By the counting argument mentioned earlier, it is impossible for all matrices M \in \mathcal{M} to have such low complexity.
The class \mathcal{M} is of size

|\mathcal{M}| = \binom{p}{\lfloor p/2 \rfloor}^{q},    (9)

so that \log|\mathcal{M}| = q\,(p - O(\log p)), and \log(p!) \le p\log p.    (10)

All the matrices of \mathcal{M} have q columns, each one of Kolmogorov complexity bounded by p + O(1). Therefore there exists a matrix M_0 such that every matrix in cl(M_0) has a column of Kolmogorov complexity at least

p - O\!\left(\frac{p\log p}{q}\right) - 2\log p.    (11)

The term 2\log p codes the length of the description of such a column in a self-delimiting way. Define a deficiency function as an \mathbb{N} \to \mathbb{N} function \delta such that it is possible to retrieve n and \delta(n) from n - \delta(n) by a self-delimiting program of constant size. From [14, Theorem 2.15, page 131], every binary string of length p bits and of Kolmogorov complexity at least p - \delta(p) contains at least

\frac{p}{4} - \sqrt{c\,\delta(p)\,p}

occurrences of 01-sequences, for any deficiency function \delta, and some constant c depending on the definition of the Kolmogorov complexity. Since each 01-sequence in a binary string necessarily starts a new block of consecutive ones, we get a lower bound on the number of blocks of consecutive ones for such strings. By choosing for \delta the function \delta(p) = O((p\log p)/q + \log p), and by Inequality (11), it follows that M_0 has compactness at least

\frac{p}{4} - O\!\left(p\sqrt{\frac{\log p}{q}}\right).    (12)
Finally, let us show that the result of the lemma, shown for some matrices in \mathcal{M}, holds also for the compactness of some matrices of \mathcal{M}'. From Lemma 4.1, |\mathcal{M}'| \sim |\mathcal{M}| because p and q are sufficiently large. Similarly, this implies that \log|\mathcal{M}'| \ge \log|\mathcal{M}| + o(1), and thus Inequalities (9), (10), (11), and (12) hold for \mathcal{M}' as well, which completes the proof. □

Remark. The proof of Lemma 4.2 is nonconstructive. As a result, it can prove only the existence of such a worst-case graph G_M.
We are now ready to prove Theorem 2.

Proof of Theorem 2: Let M \in \mathcal{M}' be a matrix satisfying Lemma 4.2, and consider the graph G_M built from M. Let us show that the diameter of G_M is 2. For any two nodes x, y, denote by dist(x, y) the distance between x and y in G_M. The distance between any a_j (or b_j) and any v_i is at most 2 (since a_j and b_j are adjacent). The fact that M \in \mathcal{M}_1 implies that the rows of M are pairwise non-complementing; thus for every i, i' there exists some j such that m_{i,j} = m_{i',j}, which implies dist(v_i, v_{i'}) \le 2. The fact that M \in \mathcal{M}_2 implies the following property: for any two columns j and j', there exist rows on which the two columns agree and rows on which they differ in each direction. Therefore in G_M, any two of the vertices a_j, b_j, a_{j'}, b_{j'} are at distance at most 2. It follows that G_M is of diameter 2.
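The diameter claim can be checked computationally for any concrete M. This self-contained sketch (ours, for illustration) rebuilds G_M and runs breadth-first search from every vertex:

    from collections import deque

    def build_GM(M):
        """Same construction as the earlier sketch."""
        p, q = len(M), len(M[0])
        adj = {('v', i): set() for i in range(p)}
        for j in range(q):
            adj[('a', j)] = {('b', j)}
            adj[('b', j)] = {('a', j)}
            for i in range(p):
                t = ('a', j) if M[i][j] == 1 else ('b', j)
                adj[('v', i)].add(t)
                adj[t].add(('v', i))
        return adj

    def eccentricity(adj, s):
        dist = {s: 0}
        dq = deque([s])
        while dq:
            x = dq.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    dq.append(y)
        return max(dist.values())

    M = [[1, 0], [1, 1], [0, 1], [0, 0]]   # toy matrix, not necessarily in M'
    adj = build_GM(M)
    print(max(eccentricity(adj, s) for s in adj))  # diameter (2 when M is in M')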
Let R be any interval routing scheme on G_M.

Claim 4.3 For every pair (a_j, v_i), if R makes a "wrong" routing decision from a_j towards v_i, then R builds a path of length at least 3.

Proof. Any "wrong" decision of R in routing from a_j to v_i (meaning, any decision to start the route from a_j to v_i on any outgoing arc of a_j other than the arc of the unique shortest path) results in a route that goes through some vertex v_{i'}, i' \ne i, before it reaches v_i. The claim now follows from the fact that there is no path shorter than two hops between any two vertices v_{i'} and v_i. □
Let comp(M) be the compactness of M.

Claim 4.4 If R uses at most comp(M) - 1 intervals per arc, then R builds some path of length at least 3.

Proof. The claim is proved by showing that if there is an IRS R that uses no more than comp(M) - 1 intervals per arc, then R builds some path of length at least 3. Since G_M is of diameter 2, this implies that R is not a shortest paths scheme.

Given Claim 4.3, it suffices to prove that if there is an IRS R that uses no more than k - 1 intervals per arc, where k = comp(M), then R must make a wrong decision for some pair (a_j, v_i). Let u be a column of M composed of at least k blocks of consecutive ones; such a column exists because the compactness of M is k. Let e be the corresponding arc out of a_j, and consider the tuple u' defined by setting u'_i = 1 if i \in I(e), and u'_i = 0 otherwise, for every i. Since I(e) is composed of at most k - 1 intervals, u' is composed of at most k - 1 blocks of consecutive ones. Thus the column u and the tuple u' differ in at least one place. Let i_0 be an index such that u_{i_0} \ne u'_{i_0}; then R makes a wrong decision when routing from a_j to v_{i_0}. Claim 4.4 now follows by applying Claim 4.3. □

The order of G_M is p + 2q; let us choose
q = \lceil c\,n^{2/3}\ln^{1/3} n \rceil, where c < 1.14. The maximum degree of G_M is \max\{q, \lceil p/2 \rceil + 1\} (each b_j is connected to a_j and to the v_i's corresponding to all the 0-entries of the jth column of M). The total number of edges in G_M is q(p + 1). Applying Lemma 4.2, the compactness k of M satisfies

k \ge \frac{p}{4} - O\!\left(p\sqrt{\frac{\log p}{q}}\right).

Noting that p = n - 2q and that O\!\left(p\sqrt{\log p / q}\right) = O(n^{2/3}\log^{1/3} n), we get

k \ge \frac{n}{4} - O\!\left(n^{2/3}\log^{1/3} n\right).
Therefore, we have shown that if R uses at most k - 1 intervals per arc, then R builds a route of length at least 3. It remains to show that this result holds also if R uses at most IRS(G_M) - 1 intervals per arc.

Claim 4.5 For every 2-connected graph G of girth g, if k < IRS(G), then the longest path of every (non shortest paths) single k-IRS is at least \lfloor g/2 \rfloor + 1.

Proof. Let G be a graph as in the claim, and let R be a single k-IRS for G. Since k < IRS(G), there must exist two nodes x, y at distance d such that the routing specified by R from x to y is not along a shortest path. The routing path uses an alternative route on a cycle between x and y. The length of this alternative path, l, satisfies l + d \ge g, which implies that l \ge g/2, because d \le g/2. However, l = g/2 is impossible, otherwise the message would use a shortest path; hence l \ge \lfloor g/2 \rfloor + 1. □

Clearly, the graph G_M is 2-connected and has no triangles, thus its girth is at least 4, and therefore any single k-IRS of G_M that is not of shortest paths has a routing path of length at least 3 by Claim 4.5, completing the proof of Theorem 2. □
Remark. Theorem 2 is tight for the length of the longest path, since it is proven in [13] that one can always guarantee routes of length at most \lceil 3D/2 \rceil, where D is the diameter of the graph. Hence for the graphs considered here, which are of diameter 2, this yields paths of length at most 3. Using this IRS, G cannot have a routing path of length 4.

To the best of our knowledge, the "best" worst-case construction which does not use randomization remains that of [9], which yields graphs G with IRS(G) \ge n/8, for every n that is a power of 2.
Corollary 4.6 For every sufficiently large integer n, and for every integer D \ge 2, there exists an n-node graph G of diameter D such that IRS(G) \ge n/4 - O(n^{2/3}\log^{1/3} n).

Proof. Take the worst-case n-node graph G of Theorem 2. G has diameter 2, hence it has a node x of eccentricity 2. Construct a new graph G' obtained from G by adding a path of length D - 2 to x. G' has diameter D exactly, and n + D - 2 nodes. The proof of Theorem 2 applies to G' as well. It turns out that G' has compactness at least n/4 - O(n^{2/3}\log^{1/3} n); the claim follows by replacing n by n + D - 2. □
We conclude this section by showing that the lower bound can be applied to k-IRS that are not of shortest paths, and not single routing schemes.

A routing scheme R on G is of stretch factor s if for all nodes x, y, x \ne y, the routing path length from x to y is at most s times the distance in G between x and y. In particular, a shortest path k-IRS is a routing scheme of stretch factor 1.

For every integer \alpha \ge 1, a routing scheme R on G is \alpha-adaptive if for all nodes x, y, x \ne y, there exist \min\{\alpha, \delta\} edge-disjoint routing paths between x and y, where \delta is the total number of "possible" edge-disjoint routing paths between x and y in G having different first edges. A single shortest path k-IRS is a 1-adaptive routing scheme of stretch factor 1. A full-adaptive k-IRS on G is a \Delta-adaptive routing scheme on G, where \Delta is the maximum degree of G.

Since for G_M the shortest paths between the nodes a_j and v_i are unique, and since any wrong decision will route along paths of length at least 3/2 times the distance, we have the following trivial lower bound.
Corollary 4.7 For every sufficiently large integer n, for every s with 1 \le s < 3/2, and for every integer \alpha \ge 1, there exists an n-node graph G such that no \alpha-adaptive k-IRS of stretch factor s on G exists if k < n/4 - O(n^{2/3}\log^{1/3} n).
5 Conclusion
• Since the lower bound is based on the Kolmogorov complexity of the labels of the edges, the resulting bound can be applied to every kind of edge-labeling based routing scheme. Moreover, the bounds apply to adaptive routing schemes.

• It would be interesting to find tighter upper bounds for small values of n, and also to express these bounds as a function of other parameters and properties of the graphs under study, such as their maximum degree, planarity, genus, tree-width, and so on.
Acknowledgement
We would like to thank Alexander Kostochka.
--R
The complexity of the characterization of networks supporting shortest-path interval routing, in 7th International Workshop on Distributed Algorithms (WDAG).
Interval routing schemes, Research Report 94-04.
Designing networks with compact routing tables.
A survey on interval routing schemes.
Worst case bounds for shortest path interval routing.
Concrete Mathematics.
On multi-label linear interval routing schemes.
Compact routing on chordal rings.
Characterizations of networks supporting shortest-path interval labeling schemes.
Labelling and implicit routing in networks, The Computer Journal.
Keywords: shortest path; random graphs; compact routing tables; interval routing
Tamar Eilam , Cyril Gavoille , David Peleg, Average stretch analysis of compact routing schemes, Discrete Applied Mathematics, v.155 n.5, p.598-610, March, 2007 | shortest path;random graphs;compact routing tables;interval routing |